Form Recovery: To prevent users from losing their input, LiveView has a built-in form auto-recovery mechanism. It essentially "replays" the form data back to the server once the connection is re-established.
Query Params: For things like pagination, filters, or search terms, keeping state in the URL query parameters is the gold standard. It ensures the user stays in the same context after a reload or reconnect.
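To make this concrete, here is a minimal sketch of a paginated LiveView that keeps its page number in the URL (a generic pattern using standard LiveView callbacks; the module and helper names are illustrative, not part of LiveStash):

```elixir
defmodule MyAppWeb.ProductsLive do
  use MyAppWeb, :live_view

  # The page number lives in the URL, so a reload or reconnect
  # rebuilds the exact same view from scratch via handle_params/3.
  def handle_params(params, _uri, socket) do
    page = String.to_integer(params["page"] || "1")
    {:noreply, assign(socket, page: page, products: list_products(page))}
  end

  def handle_event("next-page", _params, socket) do
    # Navigate via the URL instead of mutating assigns directly,
    # so the state survives even if the process dies.
    {:noreply, push_patch(socket, to: ~p"/products?page=#{socket.assigns.page + 1}")}
  end

  # Placeholder for a real database query.
  defp list_products(_page), do: []
end
```

Because `handle_params/3` runs on every mount and patch, the view is rebuilt deterministically from the URL alone.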
The shift in perspective
Essentially, if you want your LiveView app to be resilient, you have to treat assigns as a temporary cache for the diffing system rather than a reliable source of truth. You either move the state to the browser or provide the browser with a "recipe" (like a URL) on how to recreate that state from scratch.
The thundering herd
Beyond the user experience, there is also a performance cost. During a server redeploy, you might face the thundering herd problem. When thousands of clients disconnect and reconnect simultaneously, they all hit your mount/3 callbacks at once. If every single one of those processes starts querying the database to rebuild its state, your infrastructure can quickly become overwhelmed.
The new way: LiveStash
To change the status quo, we built LiveStash: a library that provides a dead-simple API for persisting your LiveView assigns. Our goal was to make state recovery feel like a native part of the LiveView lifecycle.
Here is a basic counter implementation that stays resilient even if you lose your connection:
For a copyable version of this snippet, check this Gist.
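Since the snippet is easiest to read inline, here is a sketch of what such a counter might look like. The `stash_assigns/2` and `recover_state/1` calls follow the API described below; the module name, event name, and the `import LiveStash` line are illustrative assumptions:

```elixir
defmodule MyAppWeb.CounterLive do
  use MyAppWeb, :live_view
  # Assumption: LiveStash exposes stash_assigns/2 and recover_state/1.
  import LiveStash

  def mount(_params, _session, socket) do
    # Try to restore previously stashed assigns; fall back to a fresh state.
    {status, socket} = recover_state(socket)

    socket =
      case status do
        :recovered -> socket
        _ -> assign(socket, count: 0)
      end

    {:ok, socket}
  end

  def handle_event("increment", _params, socket) do
    socket =
      socket
      |> assign(count: socket.assigns.count + 1)
      # Back up :count every time it changes.
      |> stash_assigns([:count])

    {:noreply, socket}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="increment">Count: <%= @count %></button>
    """
  end
end
```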
stash_assigns/2: Takes your socket and a list of assigns you want to back up. In the example above, every time the counter increments, we ensure the new value is safely tucked away in the browser.
recover_state/1: You call this in your mount/3 callback. It returns a tuple {status, socket}. The status tells you exactly what happened (it can be one of :recovered, :new, :not_found, or :error). The socket comes back with your stashed assigns already restored and ready to use.
If the "elevator moment" happens, the user's browser will hold onto the counter value and hand it right back to the new LiveView process as soon as it reconnects.
Choosing your strategy
There is no "one size fits all" answer to where your state should live when a LiveView process dies. The right choice depends on your specific use case: the size of your assigns, how long they need to persist, and your infrastructure.
In fact, the problem of temporarily persisting assigns is essentially a distributed systems challenge. You are moving state out of a short-lived process into a separate storage layer to reconcile it later. This introduces classic concerns: data integrity, synchronization, and the latency involved in fetching that state back.
To address this, LiveStash provides two distinct strategies:
Erlang Term Storage
The ETS Adapter is designed for server-side stability. Instead of sending your entire state (the payload) over the wire, LiveStash keeps it on the server in an ETS table. The client only receives a lightweight reference — essentially a "luggage tag" consisting of a stash ID and a node hint.
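To get a feel for the underlying mechanism (this is generic ETS usage for illustration, not LiveStash's actual internals), storing the full state server-side and handing back only a small reference looks like:

```elixir
# Create an in-memory table. ETS lives in server RAM, outside any
# single LiveView process, so it survives process crashes.
table = :ets.new(:stash_demo, [:set, :public])

# Store the full state under a generated ID.
stash_id = System.unique_integer([:positive])
:ets.insert(table, {stash_id, %{count: 42, filters: %{status: :active}}})

# The client only ever receives the lightweight "luggage tag"...
IO.inspect(stash_id)

# ...which is enough to retrieve the full state on reconnect.
[{^stash_id, state}] = :ets.lookup(table, stash_id)
IO.inspect(state.count)  # 42
```

The payload never crosses the wire, which keeps reconnects cheap, but the data is gone if the node restarts.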
Browser Memory
The Browser Memory adapter focuses on pure resilience. In this mode, the stashed state is sent to the client via push_event and kept in a JavaScript variable. When the LiveView reconnects, the browser simply hands that state back via connection parameters.
The primary advantage here is that it survives server redeploys. Since the data lives in the user's browser, you can restart your Elixir nodes or push a new version of your app without your users losing their progress or halfway-filled forms.
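The general shape of this round trip can be sketched with plain LiveView primitives (`push_event/3` and `get_connect_params/1` are standard LiveView APIs; the event name, connect-param key, and module are illustrative, and this is not LiveStash's actual implementation):

```elixir
defmodule MyAppWeb.DraftLive do
  use MyAppWeb, :live_view

  def mount(_params, _session, socket) do
    # On reconnect, a client-side JS hook hands the stashed state
    # back through the connection parameters.
    draft =
      if connected?(socket) do
        get_connect_params(socket)["stashed_draft"] || %{}
      else
        %{}
      end

    {:ok, assign(socket, draft: draft)}
  end

  def handle_event("update-draft", %{"draft" => draft}, socket) do
    # Push the latest state down so the browser can keep a copy in a
    # JavaScript variable — a copy that survives server redeploys.
    {:noreply,
     socket
     |> assign(draft: draft)
     |> push_event("stash-draft", %{draft: draft})}
  end
end
```

The trade-off is network overhead: the full state travels to the client on every stash, so this fits small-to-medium assigns rather than large payloads.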
Comparison: Which one should you use?
To help you decide which strategy fits your current architecture, here is a quick breakdown of the technical trade-offs:
| Characteristic | ETS Adapter | Browser Memory Adapter |
| --- | --- | --- |
| Storage Location | Server RAM (ETS) | Browser memory (JS variable) |
| Survives Redeploys | No | Yes |
| Network Overhead | Minimal (reference ID only) | Proportional to state size |
| Security | State stays on-server | Signed/encrypted on client |
| Time To Live (TTL) | Shorter (minutes) | Longer (hours) |
The road ahead
LiveStash is not a silver bullet. Whenever possible, you should still rely on standard Phoenix LiveView practices. Keeping state in URL parameters for context or using built-in form recovery should still be your first line of defense.
However, there are scenarios where you must rely on assigns for complex UI state that is too ephemeral for a database and too large for a URL. In those cases, when you absolutely want that state to survive a flicker in the connection, LiveStash is there to bridge the gap.
Shaping the future
Our roadmap is driven by real-world usage. While we have plans for new adapters and further API refinements, we want the library to evolve based on actual developer feedback. We are especially interested in:
New Adapters: Exploring more ways to park state depending on different infrastructure needs.
API Polish: Making the integration even more seamless and "invisible" in your codebase.
Edge Cases: Handling complex nested data structures even more efficiently.
If you've ever struggled with disappearing state or simply want to make your LiveView app feel more robust, we invite you to join the conversation.
Check out our GitHub, open an issue, or start a discussion — we are eager to hear your thoughts and see how you use LiveStash in your projects.