React Native

Behind the Scenes of React Native Multithreading: Vision Camera V5 x React Native Worklets

Tomasz J. Żelawski · Jan 30, 2026 · 5 min read

If you’ve been working with React Native for a while, you’ve probably hit a wall where animations, gestures, or app performance should have been smooth, but weren’t.

For a long time, React Native developers had to choose between developer comfort and performance. JavaScript was easy and expressive, but pushing it to its limits came at a performance cost. As apps grew more ambitious, adding real-time camera features, advanced graphics, or audio processing, it became clear that better tools for concurrency were needed.

In this article, we tell the story of how the React Native Worklets library was born, what it does, what it brings to the ecosystem, and how it powers tools like the new VisionCamera v5. Let’s dive in!

How React Native Worklets was born

At first, React Native Worklets wasn't a separate project. It was originally developed as an internal component of tools like React Native Reanimated and React Native Gesture Handler, allowing developers to create smooth animations and interactions that felt truly native.

In the early days of React Native, your animation logic lived entirely in JavaScript, running on the JS thread, the same thread that runs the React part of your application. This meant that any expensive operation, such as data processing, network activity, or even unrelated JavaScript work, could negatively impact animation performance. Reanimated addressed this by introducing a way to execute animation logic outside of React's render cycle, safe from JS thread bottlenecks.

The technical mechanism behind this approach became known as worklets — small, isolated pieces of JavaScript code designed to run on the UI thread, on a separate JavaScript runtime. They had a restricted execution environment by design, which made their behavior predictable and safe to run concurrently.

An example of how worklets can be used to react to gestures synchronously on the UI thread, triggering animations and layout changes:

```ts
import { Gesture } from 'react-native-gesture-handler';
import { useSharedValue, withSpring } from 'react-native-reanimated';

// Inside a component: a shared value that worklets can read and
// write directly from the UI thread.
const offset = useSharedValue(0);

const pan = Gesture.Pan()
  .onChange((event) => {
    'worklet';
    offset.value = event.translationX;
  })
  .onFinalize(() => {
    'worklet';
    offset.value = withSpring(0);
  });
```

As Reanimated evolved, it became clear that this execution model was useful far beyond animations. The same characteristics that made worklets good for animations (low latency, independence from React, and concurrent execution) also applied to gesture handling, graphics rendering, camera frame processing, audio processing, and other performance-critical tasks.

For example, react-native-vision-camera introduced “Frame Processors” — a mechanism to run a JS function on every Frame the Camera “sees”:

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // detectObjects is an example Frame Processor Plugin
  const objects = detectObjects(frame)
  const label = objects[0]?.name
  if (label != null) {
    console.log(`You're looking at a ${label}.`)
  }
}, [])
return <Camera frameProcessor={frameProcessor} />
```

At that point, worklets started to outgrow their original context. Developers and library authors needed access to multithreading capabilities even when animations weren't involved. We realized that keeping worklets tightly coupled to Reanimated meant unnecessary dependencies and limited adoption.

Extracting worklets into a standalone library was a practical decision that allowed us to focus more on multithreading and provide a general-purpose solution for concurrency in React Native.

React Native Worklets as a multithreading engine

Today, React Native Worklets enables full-scale multithreading within the React Native framework. You can think of it as an engine that lets you run JavaScript logic concurrently, without writing native code.

The key idea is simple: fully native apps can utilize multiple CPU cores to increase performance, and this should be just as easy in React Native. Heavy calculations, continuous processing, and real-time data handling don't have to execute on the same thread as your components. With React Native Worklets, you define which pieces of your JavaScript code should run independently, keeping your UI responsive and predictable.

As a result, instead of dropping down to native modules or complex bridging, developers can stay in JavaScript and use familiar APIs. This is exactly why worklets are such a good fit for many specialized libraries.
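To make this concrete, here is a minimal sketch of offloading heavy work to a background runtime. It assumes the Reanimated-derived API that react-native-worklets exposes (`createWorkletRuntime` and `runOnRuntime`); check the library's docs for the exact signatures. `sumOfSquares` and `largeDataset` are illustrative stand-ins for your own computation and data.

```ts
import { createWorkletRuntime, runOnRuntime } from 'react-native-worklets';

// A separate background JS runtime, distinct from both the React (JS)
// thread and the UI thread.
const backgroundRuntime = createWorkletRuntime('background');

// Illustrative heavy computation, marked as a worklet so it can be
// copied onto the background runtime.
function sumOfSquares(items: number[]): number {
  'worklet';
  let sum = 0;
  for (const item of items) {
    sum += item * item;
  }
  return sum;
}

const largeDataset = Array.from({ length: 1_000_000 }, (_, i) => i);

// Schedule the work off the React thread; the UI stays responsive
// while the loop runs.
runOnRuntime(backgroundRuntime, (items: number[]) => {
  'worklet';
  console.log(`Sum of squares: ${sumOfSquares(items)}`);
})(largeDataset);
```

The point of the sketch is the shape of the API: you stay in TypeScript end to end, and the library handles moving the worklet function and its arguments onto the other runtime for you.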

Integrating React Native Worklets and VisionCamera

react-native-vision-camera (often referred to simply as "VisionCamera") is a powerful, high-performance Camera library for React Native. In addition to photo, video, and snapshot capture, it also supports real-time "Frame Processing" via useFrameProcessor(…).

Since the Camera runs on a separate Camera Thread, VisionCamera would conventionally have to perform a thread-hop to call the JS Frame Processor, which can cause stalls, or, even worse, require serializing every Frame. At 4K, that is roughly 33 MB per Frame, or about 1 GB/s.

Thanks to worklets, VisionCamera instead calls your JS Frame Processor synchronously on the Camera Thread, with no thread-hop and no serialization! Without worklets, VisionCamera Frame Processors would never have existed.
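A quick back-of-the-envelope check of those numbers, assuming a 4K frame in a 4-bytes-per-pixel format (e.g. RGBA) at 30 fps (the exact pixel format and frame rate depend on your camera configuration):

```ts
const width = 3840;
const height = 2160;
const bytesPerPixel = 4; // e.g. RGBA

// One uncompressed 4K frame.
const bytesPerFrame = width * height * bytesPerPixel; // 33,177,600 bytes, ~33 MB

// Sustained throughput if every frame had to be serialized at 30 fps.
const fps = 30;
const bytesPerSecond = bytesPerFrame * fps; // 995,328,000 bytes, ~1 GB/s

console.log(`${(bytesPerFrame / 1e6).toFixed(1)} MB per frame`);
console.log(`${(bytesPerSecond / 1e9).toFixed(2)} GB/s`);
```

Copying a gigabyte per second across a thread boundary is simply not viable, which is why calling the Frame Processor in place, without serialization, matters so much.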

As a user, you don’t need to configure anything — if react-native-worklets is installed, VisionCamera V5 can use it automatically thanks to Nitro’s worklets integration.

What’s next?

The react-native-worklets library is still young, and there's plenty of room to grow. We're actively working on making it easier to adopt: multithreading shouldn't feel like an advanced trick; it should be something developers reach for naturally when they want to improve app performance.

We’re also really excited to see what the community will build on this foundation. Rendering, audio, automation, and real-time processing: these are all areas where react-native-worklets can simplify work and make limitations disappear. So, take a look at our docs and get started!

We’re Software Mansion: multimedia experts, AI explorers, React Native core contributors, community builders, and software development consultants.