How to call Python modules from Node using Pythonia

Julian Bilcke
4 min read · Jun 22, 2023


While the Node ecosystem is quite mature, many AI/ML and research tools are developed in Python. This short tutorial presents a simple way to use Python libraries from Node without spawning external processes yourself or making HTTP API calls.

Intro to Pythonia

Recently I came across a project called Pythonia, which allows a Node program to call Python (and Python to call Node).

Assuming you already have working Python and Node environments on your machine, you can install it like any other NPM library:

npm install pythonia

Importing a built-in Python module is straightforward and feels like a dynamic “await import(..)”, except here it is “await python(..)”:

import { python } from 'pythonia' // import Pythonia

const { date } = await python('datetime') // import a built-in Python module

Then you can call functions as you would in Python:

const today = await date.today() // get the current date

console.log(await today.strftime("%d/%m/%Y")) // format it and print it to the console

As you can see, calls are asynchronous, which plays well with Node's async model.

Tips and tricks

Asynchronous generators

Python has its own generator mechanism for yielding values, and consuming one asynchronously from Node is relatively straightforward:

const iterator = await someGenerator(input)
for await (const value of iterator) {
  console.log(value)
}
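As a minimal, self-contained sketch, here is the same pattern applied to a built-in Python generator (itertools.count() is infinite, so we cap it with islice()):

import { python } from 'pythonia'

const itertools = await python('itertools')

// count() yields 0, 1, 2, … forever; islice() caps it at 5 values
const firstFive = await itertools.islice(await itertools.count(), 5)

for await (const value of firstFive) {
  console.log(value) // 0, 1, 2, 3, 4
}

python.exit()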

Using **kwargs

A common pattern in Python is to pass named (keyword) arguments, like this:

dosomething("stuff", x=42, y=64)

To do this from JS, add $ to the end of the function’s name:

await dosomething$('stuff', { x: 42, y: 64 })
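For example, here is a small sketch using the built-in datetime module (in Python, datetime.date accepts year, month and day as keyword arguments):

const datetime = await python('datetime')

// equivalent to datetime.date(year=2023, month=6, day=22) in Python
const d = await datetime.date$({ year: 2023, month: 6, day: 22 })

console.log(await d.isoformat()) // 2023-06-22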

Using modules installed through PIP

This works just like any other Python module installation.

One issue I encountered: because I have multiple Python environments, Pythonia couldn't find the module when I installed it through pip, so I had to use pip3 instead.

Another solution is to configure the path to the Python executable (before you import Pythonia), or to define it before launching your Node script:

process.env.PYTHON_BIN = '/path/to/python/executable';

import { python } from 'pythonia';
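Or, equivalently, from the shell before launching your script (the path and file name here are just examples, adjust them to your setup):

PYTHON_BIN=/usr/bin/python3 node app.mjs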

Using local modules

If you want to use Python files residing inside a local project folder, you will want to add that folder to the path where Python looks for modules:

import { python } from 'pythonia'

// make it easier to import local Python modules
const sys = await python('sys')
await sys.path.insert(0, '.')
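You can then import a local file by name. A quick sketch, assuming a hypothetical my_module.py in the current folder:

// my_module.py (hypothetical) contains:
//   def greet(name):
//       return f"Hello, {name}!"
const myModule = await python('my_module')
console.log(await myModule.greet('Node')) // Hello, Node!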

Avoid long async calls

Pythonia has a limitation which prevents very long async calls (they will time out).

On one of my projects this was a blocker, as I called a Python function which downloaded a large model (10 GB).

/home/user/app/node_modules/pythonia/src/pythonia/Bridge.js:123
if (ret === 'timeout') onTimeout()
^
BridgeException [Error]: Attempt to access '' failed.
Python didn't respond in time (100000ms), look above for any Python errors.
If no errors, the API call hung.

There are multiple ways around this (the Pythonia README mentions the “need to manually create new thread”); in my case I simply prefetched the model before calling Pythonia.
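Here is a rough sketch of that workaround: warming the model cache in a plain, non-timeboxed Python process before Pythonia touches it (assuming python3 and ctransformers are installed):

import { execFileSync } from 'node:child_process'

// download/cache the model in a separate Python process,
// free of Pythonia's call timeout
execFileSync('python3', ['-c', [
  'from ctransformers import AutoModelForCausalLM',
  "AutoModelForCausalLM.from_pretrained('marella/gpt-2-ggml')",
].join('\n')], { stdio: 'inherit' })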

python.exit()

It is important to call python.exit() before your program exits, or else a Python process might keep running in the background, consuming precious memory:

process.on('SIGINT', () => {
  try {
    python.exit()
  } catch (err) {
    // exiting Pythonia can get a bit messy: try/catch or not,
    // you *will* see warnings and tracebacks in the console
  }
  process.exit(0)
})

The warnings and stack traces in the console might look scary, but won’t prevent the server from working properly.

Live example

I wanted to try a real scenario involving an actual library with more complex dependencies (native bindings, etc.) and more complex tasks.

So I ported my previous Node project for generating HTML, this time using Pythonia, CTransformers and GPT-2, and to my surprise the code stays quite compact and readable:

import express from 'express'
import { python } from 'pythonia'

const { AutoModelForCausalLM } = await python('ctransformers')
const llm = await AutoModelForCausalLM.from_pretrained('marella/gpt-2-ggml')

const app = express()
const port = 7860

app.get('/', async (req, res) => {
  const prompt = '<html><head><title>My Favorite Cookie Recipe</title></head><body><div><p>'
  res.write(prompt)
  const raw = await llm(prompt)
  // keep only the text up to the closing tag, then close the document ourselves
  const output = raw.split('</html>')[0]
  res.write(output + '</html>')
  res.end()
})

app.listen(port, () => { console.log(`Open http://localhost:${port}`) })
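Once the server is running, you can test it from another terminal (or just open the URL in a browser):

curl http://localhost:7860/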

You can find the code here and the live Hugging Face space here (it takes about 30s to load).

This is just a demo made using a lightweight model (GPT-2, 250 MB). The quality of the output is an order of magnitude below what GPT-3 or LLaMA derivatives can do.

Make it smooth: use Python generators

In the previous example we waited for “await llm()” to finish before sending the output to the browser, but looking at our library's documentation, we can see it also supports Python generators!

Let’s refactor the previous code to stream each chunk yielded by the Python generator, using Node’s async iterator mechanism:


// convert our input string into tokens (a sequence of integers)
const inputTokens = await llm.tokenize(prompt)

// initialize the generator (this will take some time)
const generator = await llm.generate(inputTokens)

// iterate over each token asynchronously
for await (const token of generator) {
  // convert each integer token back into a string chunk
  res.write(await llm.detokenize(token))
}

res.end()

Much better!

Deploying a Python + Node app using Docker

You can find the Dockerfile used to deploy the app to a Hugging Face space alongside the code linked above.
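For reference, here is a minimal sketch of what such a Node + Python Dockerfile can look like; the base image, file names (index.mjs is a hypothetical entry point) and the ctransformers install are assumptions, not the exact file from the space:

FROM node:18

# Pythonia needs a Python interpreter; the app also needs ctransformers
RUN apt-get update && apt-get install -y python3 python3-pip \
    && pip3 install ctransformers

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# tell Pythonia which Python executable to use (path is an assumption)
ENV PYTHON_BIN=/usr/bin/python3

EXPOSE 7860
CMD ["node", "index.mjs"]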

Final words

Let me know in the comments if you build any interesting Node app using this technique, and happy hacking!
