Gleam's Lustre is Frontend Development's Endgame
How a combination of language, architecture and ecosystem leads to more maintainable code and a more enjoyable experience.
This is a launch post for a tool I built called OmniMessage, which aspires to be the final nail in the coffin of client-server state synchronization problems.
OmniMessage is written in Gleam, a language that compiles to both Erlang and JavaScript, and is an extension of Lustre, Gleam’s major frontend framework.
To understand the benefits of OmniMessage and how it works, we’ll first need to explore how Gleam and Lustre provide one of the best developer experiences you can find for the web these days, and how building on top of them can improve that experience even more.
If you’re already familiar with The Elm Architecture and functional languages, you can click here to skip to the OmniMessage part. However, going through the Lustre tutorial really puts the problem it solves in perspective.
A Functional C
Gleam’s website says:
Gleam powers Nestful, the app this blog is for, which I am incrementally rewriting.
That other blog post already covers a lot of Gleam’s advantages and the specific considerations and tradeoffs as they pertain to Nestful. Those still apply, but when we look at frontend development in general, one key trait of Gleam is crucial:
Gleam is a functional language with a C-style syntax.
That is an indispensable advantage on the journey to ecosystem nirvana.
Here’s some Gleam for you to bask in. It is a very pleasant language:
We know that frontend developers like functional paradigms. TypeScript functional-like libraries continue to pop up, some developers really like React, and even when they dislike those, they still argue about it on AirBnB’s style guide repository.
Even with all that functional engagement in mind, however you slice it, frontend development is all in JavaScript-land, and JavaScript has a C-style syntax.
In my opinion, this is a significant part of why previous attempts like Elm, Reason/ReScript and PureScript did not reach the success they should have. It is also why Flutter made such huge strides with frontend developers. Flutter’s developer experience is excellent, and Dart sure does feel a lot like TypeScript.
The fact that Gleam has that kind of syntax will help the most crucial part of going mainstream — ecosystem growth. It’s going to be nice to have a functional language that’s not only simple and type safe, but also has a large ecosystem.
Enter Lustre
If I had to describe Lustre with only a few words, I would say it is Gleam’s Elm and LiveView.
Yes, Elm and LiveView. While not a 1-to-1 match, Lustre is a Model-View-Update framework that can run both as a single-page application, like Elm, and on the server, like LiveView. And with OmniMessage — well, you’ll see.
I am not going to define Model-View-Update. Instead, I’m going to teach you some Lustre. By the time we’re done you’ll know exactly what it means.
We’ll start with a counter example, which is small and contained, then continue to build a (very, very minimal) chat app. Follow along with the code; it’s nice seeing how the different pieces fall into place.
First, let’s keep our data in a model, and have a function initializing it:
import gleam/int

type Model {
  Model(count: Int)
}

fn init(initial_count: Int) -> Model {
  Model(int.max(0, initial_count))
}
A Model type that contains a single integer, which we initialize to a given value, but not lower than zero. Fairly simple.
Next, we’ll define messages that can operate on that model:
import gleam/int
import lustre/effect.{type Effect}

type Msg {
  Increment
  Decrement
}

fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    Increment -> #(Model(model.count + 1), effect.none())
    Decrement -> #(Model(int.max(0, model.count - 1)), effect.none())
  }
}
Our update function returns a new count based on the message it receives, adding or subtracting from the original. Don’t mind the effect.none() part for now — we’ll get to that later.
Finally, we’ll have a function for displaying a user interface. It’s important to note that all the view functions in this post use lustre_pipes, which is an extension of Lustre’s view utilities that I consider easier to read:
import lustre_pipes/attribute
import lustre_pipes/element.{type Element}
import lustre_pipes/element/html
import lustre_pipes/event

fn view(model: Model) -> Element(Msg) {
  let count = int.to_string(model.count)

  html.div()
  |> attribute.class("h-full w-full flex justify-center items-center")
  |> element.children([
    html.button()
    |> event.on_click(Decrement)
    |> element.text_content("-"),
    html.p()
    |> element.text_content(count),
    html.button()
    |> event.on_click(Increment)
    |> element.text_content("+"),
  ])
}
When we hand the init, update, and view functions to Lustre, it starts a runtime that:

1. Calls init with an initial value
2. Uses the resulting model to render a view
3. Listens to events and calls update with any dispatched messages, then goes back to #2
Model, view, update.
Because views are pure functions, meaning they do not perform any side effects, they can be used anywhere. Every time we hand the same model to that function, we get the same view. Every. Single. Time.
This makes features like hydration very simple. Since this is a deterministic state machine, all we need is the current state. In Lustre, hydration means simply sending the model alongside the rendered HTML. Lustre will use that model to create a view of its own and compare it to the prerendered HTML. If they match, the app is “hydrated”.
But I digress. We’re here to talk about client-server state management, so let’s continue, building a small chat app.
A LiveView Single Page Application
Our app should be able to:
Send chat messages
Display chat messages and their status (sending, sent, received, etc)
Have a button that scrolls down to the latest chat message
Show the amount of active users in the chat room
We’re missing some key features such as receiving messages, chat room creation and user authentication. It doesn’t matter. Even this confined use case of a single room with a single user will show us the benefits of managing state with Lustre.
We’re going to build a single page application first, then use the fundamentals for the server as well. The conversion will be trivial.
First, our model:
// For date handling
import birl

pub type Model {
  Model(
    chat_msgs: List(ChatMessage),
    draft_content: String,
  )
}

pub type ChatMessage {
  ChatMessage(
    id: String,
    content: String,
    status: ChatMessageStatus,
    sent_at: birl.Time,
  )
}

pub type ChatMessageStatus {
  ClientError
  ServerError
  Sent
  Received
  Sending
}
Note that to avoid confusing chat messages and Lustre messages, the former will be strictly called a chat message.
We’ll initialize our model with an empty draft and empty chat messages.
fn init(_) -> #(Model, effect.Effect(Msg)) {
  #(Model([], draft_content: ""), effect.none())
}
Again, don’t worry about that effect.none() for now.
To define our Msg type, first let’s think through what we need to do. Lustre suggests using a Subject-Verb-Object (SVO) structure:
1. UserUpdateDraftContent
2. UserSendChatMessage
3. UserScrollToLatest
4. ServerSentChatMessages
Number 4 is for getting back the messages we sent to the server, not for other users’ messages, which are out of scope. This is more of a bring-your-own-refresh kind of app.
Having those messages written in SVO not only makes it easy to reason about how the app works, but also makes it easier to debug later, if and when needed.
Here is an update function that takes care of all the simple bits:
// For uuid generation
import gluid
import gleam/string

pub type Msg {
  UserUpdateDraftContent(String)
  UserSendChatMessage
  UserScrollToLatest
  ServerSentChatMessages(List(ChatMessage))
}

fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    UserUpdateDraftContent(draft_content) -> #(
      Model(..model, draft_content:),
      effect.none(),
    )
    UserSendChatMessage -> {
      let chat_msg =
        ChatMessage(
          id: gluid.guidv4() |> string.lowercase(),
          content: model.draft_content,
          status: Sending,
          sent_at: birl.utc_now(),
        )
      let chat_msgs = [chat_msg, ..model.chat_msgs]
      #(Model(..model, chat_msgs:), effect.none())
    }
  }
}
How lovely is that case, which the Gleam compiler will mark as an error. Gleam makes sure case is exhaustive — it has our back in guaranteeing we address all of the different messages that the update function should be able to handle.
But wait! There’s a design problem. We need to clean the draft after sending it, but that doesn’t make sense to do in the UserSendChatMessage branch — chat messages can come from other places. A “scheduled messages” queue, for example. We need another message, UserSendDraft:
pub type Msg {
  UserSendDraft
  UserUpdateDraftContent(String)
  UserSendChatMessage(ChatMessage)
  UserScrollToLatest
  ServerSentChatMessages(List(ChatMessage))
}

fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    UserUpdateDraftContent(content) -> #(
      Model(..model, draft_content: content),
      effect.none(),
    )
    UserSendChatMessage(chat_msg) -> {
      let chat_msgs = [chat_msg, ..model.chat_msgs]
      #(Model(..model, chat_msgs:), effect.none())
    }
    UserSendDraft -> #(
      Model(..model, draft_content: ""),
      effect.from(fn(dispatch) {
        ChatMessage(
          id: gluid.guidv4() |> string.lowercase(),
          content: model.draft_content,
          status: Sending,
          sent_at: birl.utc_now(),
        )
        |> UserSendChatMessage
        |> dispatch
      }),
    )
  }
}
UserSendDraft returns not only the model, but also something called an Effect.
An effect is a type that instructs Lustre’s runtime to perform tasks on our behalf. In this case we ask Lustre to dispatch UserSendChatMessage with the chat message, by returning that effect alongside our model. effect.none() simply tells Lustre that there’s nothing to do, which is the case for the rest of our case branches.
Side effects are a must in most apps, especially on the web. To be able to use them while keeping our update function pure (meaning, having the same output every time it receives the same input), MVU delegates their execution to the runtime. If we want a side effect to happen, we have to ask the runtime to perform it.
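To make the delegation concrete, here is a minimal custom effect — a sketch, not part of the chat app. Wrapping an arbitrary side effect in effect.from hands it to the runtime, which executes it for us:

```gleam
import gleam/io
import lustre/effect.{type Effect}

// A sketch: the runtime performs the side effect (printing) on our
// behalf, keeping `update` itself pure. The `dispatch` argument could
// be used to feed a follow-up message back into `update`.
fn log_effect(text: String) -> Effect(msg) {
  effect.from(fn(_dispatch) { io.println(text) })
}
```

Returning log_effect("sent!") from an update branch prints only when the runtime runs the effect, never during the update call itself.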
Now that we know effects exist, we’ll use them to implement UserScrollToLatest.
For that we need a container to scroll. Enter a view function:
import gleam/list

fn status_string(status: ChatMessageStatus) {
  case status {
    ClientError -> "Client Error"
    ServerError -> "Server Error"
    Sent -> "Sent"
    Received -> "Received"
    Sending -> "Sending"
  }
}

fn chat_message_element(chat_msg: ChatMessage) {
  html.div()
  |> element.children([
    html.p()
    |> element.text_content(
      status_string(chat_msg.status) <> ": " <> chat_msg.content,
    ),
  ])
}

fn sort_chat_messages(chat_msgs: List(ChatMessage)) {
  use a, b <- list.sort(chat_msgs)
  birl.compare(a.sent_at, b.sent_at)
}
fn view(model: Model) -> element.Element(Msg) {
  let sorted_chat_msgs =
    model.chat_msgs
    |> sort_chat_messages

  html.div()
  |> attribute.class(
    "h-full flex flex-col justify-center items-center gap-y-5",
  )
  |> element.children([
    html.div()
    |> attribute.id("chat-msgs")
    |> attribute.class(
      "h-80 w-80 overflow-y-auto p-5 border border-gray-400 rounded-xl",
    )
    |> element.keyed({
      use chat_msg <- list.map(sorted_chat_msgs)
      #(chat_msg.id, chat_message_element(chat_msg))
    }),
    html.form()
    |> attribute.class("w-80 flex gap-x-4")
    |> event.on_submit(UserSendDraft)
    |> element.children([
      html.input()
      |> event.on_input(UserUpdateDraftContent)
      |> attribute.type_("text")
      |> attribute.value(model.draft_content)
      |> attribute.class("flex-1 border border-gray-400 rounded-lg p-1.5")
      |> element.empty(),
      html.input()
      |> attribute.type_("submit")
      |> attribute.value("Send")
      |> attribute.class(
        "border border-gray-400 rounded-lg p-1.5 text-gray-700 font-bold",
      )
      |> element.empty(),
    ]),
  ])
}
Note how event handlers must return a Msg for our update function to handle. event.on_submit accepts a straight-up Msg, so we just pass UserSendDraft. on_input accepts a function of type fn(String) -> Msg, which is exactly what UserUpdateDraftContent is. The following is identical:
|> event.on_input(fn(value: String) {
  UserUpdateDraftContent(value)
})
If you know HTML, the rest is fairly straightforward except for two parts: use and element.keyed.
use is syntactic sugar for a final-argument callback. Everything before the arrow is the callback’s arguments; the lines below the use are the callback’s body.
This means these two are equivalent:
fn sort_chat_messages(chat_msgs: List(ChatMessage)) {
  use a, b <- list.sort(chat_msgs)
  birl.compare(a.sent_at, b.sent_at)
}

fn sort_chat_messages(chat_msgs: List(ChatMessage)) {
  chat_msgs
  |> list.sort(fn(a, b) {
    birl.compare(a.sent_at, b.sent_at)
  })
}
You know a language is simple when use is its most “complicated” part.
element.keyed creates an element whose children have a unique identifier (the key) attached to them, such that when Lustre has to re-render the list, it knows which elements changed and which didn’t. This is similar to React’s or Vue’s key property, and is done in Lustre by giving the element.keyed() function a list of tuples in the form #(key, Element).
Now that we have our view, we can handle UserScrollToLatest:
// For handling possible errors
import gleam/result
// For interacting with the DOM
import plinth/browser/document
import plinth/browser/element as plinth_element
fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    // other handlers omitted for brevity
    UserScrollToLatest -> #(model, scroll_to_latest_message())
  }
}

const msgs_container_id = "chat-msgs"

fn scroll_to_latest_message() {
  effect.from(fn(_dispatch) {
    let _ =
      document.get_element_by_id(msgs_container_id)
      |> result.then(fn(container) {
        plinth_element.scroll_height(container)
        |> plinth_element.set_scroll_top(container, _)
        Ok(Nil)
      })
    Nil
  })
}
This code uses plinth, a library for interacting with browser APIs, to first find the container by its id (get_element_by_id) and, if it is found (result.then), scroll.
By extracting this effect to a separate function, we can include it in other places, like automatically scrolling on UserSendChatMessage.
By now you should be experiencing how Gleam and Lustre make for a very structured, harmonious development experience, where everything has a place in the render loop.
Now that we have the client side taken care of, let’s address the server. As you’ve probably guessed, talking to the server is a side effect, meaning we’ll instruct Lustre to make the talking on our behalf. Luckily there is a package that does just that:
import lustre_http as http

fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    // other handlers omitted for brevity
    UserSendChatMessage(chat_msg) -> {
      let chat_msgs = [chat_msg, ..model.chat_msgs]
      #(
        Model(..model, chat_msgs:),
        effect.batch([
          scroll_to_latest_message(),
          http.post(
            "/chat-message",
            [chat_msg] |> chat_msgs_to_json,
            http.expect_json(chat_msgs_from_json, ServerSentChatMessages),
          ),
        ]),
      )
    }
  }
}
This takes care of creating the message on the server. We still update the model with the new chat message in Sending state, and we continue with effect.batch.
effect.batch takes several effects and combines them into a single one for our update function. The first is our scroll effect, which will happen after sending a message. The second will HTTP POST the chat message to the path /chat-message.
That effect comes from the lustre_http library (which we import as http) and can create effects for HTTP requests. The arguments for creating the POST effect are:

1. The path to POST to, in our case /chat-message
2. The body of the POST request, in our case a JSON-encoded chat message
3. A description of the expected result (JSON) and its handler (ServerSentChatMessages)
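The post never shows chat_msgs_to_json, so here is a sketch of what such an encoder could look like with the gleam/json package. The field names and the ISO 8601 serialization of sent_at are my assumptions, not part of the original app:

```gleam
import gleam/json

// A sketch of `chat_msgs_to_json` using gleam/json. The JSON field
// names and `birl.to_iso8601` for timestamps are assumptions.
fn chat_msg_to_json(chat_msg: ChatMessage) -> json.Json {
  json.object([
    #("id", json.string(chat_msg.id)),
    #("content", json.string(chat_msg.content)),
    #("status", json.string(status_string(chat_msg.status))),
    #("sent_at", json.string(birl.to_iso8601(chat_msg.sent_at))),
  ])
}

fn chat_msgs_to_json(chat_msgs: List(ChatMessage)) -> json.Json {
  json.array(chat_msgs, of: chat_msg_to_json)
}
```

chat_msgs_from_json would be the mirror image, a decoder producing List(ChatMessage) from the same shape.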
In a production chat app it would have been better to use websockets. The websocket effect works very similarly but requires more setup due to the nature of websockets. Since boilerplate does not add to our learning, we demonstrate using regular HTTP.
When Lustre executes this effect, the following happens:

1. It POSTs our encoded chat message to /chat-message
2. It tries parsing the response as JSON using chat_msgs_from_json
3. On success, it dispatches ServerSentChatMessages(Ok(messages))
4. On error, it dispatches ServerSentChatMessages(Error(error))
Gleam, you see, has errors as values. That means that, except for problematic FFI, functions never throw. You must deal with errors as they come or consciously defer their handling. This pairs fantastically with case’s exhaustiveness checks:
pub type Msg {
  // accepts a `Result` from `lustre_http`'s effect:
  ServerSentChatMessages(Result(List(ChatMessage), http.HttpError))
}

fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    // Gleam will make sure both variants are present. Lovely.
    ServerSentChatMessages(Ok(chat_msgs)) -> todo
    ServerSentChatMessages(Error(error)) -> todo
  }
}
Again, we have a design problem. The server is the source of truth for chat messages, so we’d like it to override our local copies (that’s how a Sending chat message becomes a Sent chat message). However, doing that for a list can be quite costly.
Let’s change our state to hold chat messages in a dictionary, instead:
import gleam/dict.{type Dict}

pub type Model {
  Model(
    chat_msgs: Dict(String, ChatMessage),
    draft_content: String,
  )
}
And implement ServerSentChatMessages:
fn update(model: Model, msg: Msg) -> #(Model, Effect(Msg)) {
  case msg {
    ServerSentChatMessages(Ok(server_chat_msgs)) -> {
      let chat_msgs =
        model.chat_msgs
        |> dict.merge(server_chat_msgs)
      #(Model(..model, chat_msgs:), effect.none())
    }
    ServerSentChatMessages(Error(error)) -> {
      // this is where you'd show, say, an error toast
      #(model, effect.none())
    }
    // changes to other branches available in the full code below
  }
}
As with any project, the more time we spend writing its code, the more we learn about it and about the solutions it demands. With MVU, those changes are easy to adapt to since the state mechanism stays the same. By keeping the state handling mechanism completely separate from our project’s design choices, we avoid having to change it when we inevitably discover we made poor ones.
To complete our server-communication portion for chat messages, let’s fetch on init:
fn init(_) -> #(Model, effect.Effect(Msg)) {
  #(
    Model(dict.new(), draft_content: ""),
    http.get(
      "/chat-message",
      http.expect_json(chat_msgs_from_json, ServerSentChatMessages),
    ),
  )
}
So far this should have been a relatively pleasant experience of writing a very regular app, with all the pros and cons that come with it. Our state handling approach forever puts us on alert, having to make sure our local copy of the chat is up to date with the server’s copy — the source of truth. This is a very common SPA issue.
I can hear the LiveView gang collectively yelling into the past as I’m typing. “Keep everything on the server”, they say. Well, we’re about to. — “and get rid of that REST mess!”. Ok, I heard you, we’re about to.
Our first step of evolution will be to implement our final missing feature:
Show the amount of active users in the chat room
I sneakily did not include a Msg for handling this when we built our update function. This information is strictly server-side, with no interaction, and most importantly — it is meaningless when we’re offline.
It’s a no-brainer to run it exclusively on the server. This is where the LiveView part of “Elm and LiveView” comes in. We can take this Lustre component:
import gleam/int
import gleam/option.{type Option, None, Some}

type Model {
  Model(user_count: Option(Int))
}

fn init(count_listener: fn(fn(Int) -> Nil) -> Nil) {
  #(
    Model(None),
    effect.from(fn(dispatch) {
      count_listener(fn(new_count) {
        dispatch(GotNewCount(new_count))
      })
    }),
  )
}

type Msg {
  GotNewCount(Int)
}

fn update(model: Model, msg: Msg) {
  case msg {
    GotNewCount(new_count) -> #(Model(Some(new_count)), effect.none())
  }
}

fn view(model: Model) {
  let count_message =
    model.user_count
    |> option.map(int.to_string)
    |> option.unwrap("Getting user count...")

  html.p()
  |> element.text_content(count_message)
}
Run it on the server, serve it via websockets on /user-count, then add the following to our client’s view function:
server_component.component()
|> server_component.route("/user-count")
|> element.children([
  html.p()
  |> element.text_content("Getting user count..."),
])
Lustre makes it happen so that our user count travels from the server into the client, accurately rendered. No need to sync state; everything comes from the source.
In my opinion, even after considering all the good that is Gleam, MVU, and Lustre’s implementation of it, this is the biggest advantage of them all.
Whenever you use HTMX or LiveView, there always comes a time when you need to “sprinkle some JavaScript”. While sprinkling some JavaScript is much better than writing everything in JavaScript, writing none is best.
This is the power of Gleam’s Lustre. You have all the advantages of a single page application, and of server side components. You pick the right approach, and use the same exact tool to fulfill it.
I hear you, HTMX people calling into the past: “make the chat messages server side too”. Nope. Not going to happen. At least not the way you think it will.
You see, the chat is meaningful even when the user is offline. And even if you make it online-only, there’s a problem with moving the chat message functionality to the server. How will we handle a chat message that was just sent? The server can’t render a Sending state because it’s still sending — it doesn’t know it exists!
“Sprinkle some JavaScript!”
Yes, the current solution is to have some mix of client and server side code, and do the syncing manually, just for that one bit.
Or is it?
Introducing OmniMessage
OmniMessage is a library created to answer this exact problem. It is composed of a Lustre extension on the client side, and server utilities (including another, optional, Lustre extension) on the server side.
After you set those two up, a subset of messages, chosen at your discretion, will be dispatched on both the client and the server. You can think of this as a more blissful RPC.
Here’s the boilerplate:
import omnimessage_lustre as omniclient

// This just converts to/from JSON
let encoder_decoder =
  omniclient.EncoderDecoder(
    // this converts TO JSON
    fn(msg) {
      case msg {
        // Encode a certain subset of messages
        ClientMessage(message) -> Ok(shared.encode_client_message(message))
        // Return Error(Nil) for messages you don't want to send out
        _ -> Error(Nil)
      }
    },
    // this converts FROM JSON
    fn(encoded_msg) {
      shared.decode_server_message(encoded_msg)
      |> result.map(ServerMessage)
    },
  )

// This creates an extended Lustre component
omniclient.component(
  init,
  update,
  view,
  dict.new(),
  encoder_decoder,
  transports.http("http://localhost:8000/omni-http", option.None, dict.new()),
  TransportState,
)
The “biggest” chunk of work is encoding and decoding, but that’s something you need to do anyway. Even pure server components eventually need to save to a database.
If we set up OmniMessage like that, our original Lustre app can reuse all the client-side messages we wrote at the beginning of this post, and the server will reply and override our Sending chat messages when they are received.
Yes, yes, I know you want to see the code, but it really is the same. Here, look:
// MODEL ---------------------------------------------------------------

pub type Model {
  Model(chat_msgs: dict.Dict(String, ChatMessage), draft_content: String)
}

fn init(_initial_model: Option(Model)) -> #(Model, effect.Effect(Msg)) {
  #(Model(dict.new(), draft_content: ""), effect.none())
}

// UPDATE --------------------------------------------------------------

pub type Msg {
  UserSendDraft
  UserScrollToLatest
  UserUpdateDraftContent(String)
  ClientMessage(ClientMessage)
  ServerMessage(ServerMessage)
  TransportState(transports.TransportState(json.DecodeError))
}

fn update(model: Model, msg: Msg) -> #(Model, effect.Effect(Msg)) {
  case msg {
    // Good old UI
    UserUpdateDraftContent(content) -> #(
      Model(..model, draft_content: content),
      effect.none(),
    )
    UserSendDraft -> #(
      Model(..model, draft_content: ""),
      effect.from(fn(dispatch) {
        shared.new_chat_msg(model.draft_content, shared.Sending)
        |> shared.UserSendChatMessage
        |> ClientMessage
        |> dispatch
      }),
    )
    UserScrollToLatest -> #(model, scroll_to_latest_message())
    // Shared messages
    ClientMessage(shared.UserSendChatMessage(chat_msg)) -> {
      let chat_msgs =
        model.chat_msgs
        |> dict.insert(chat_msg.id, chat_msg)
      #(Model(..model, chat_msgs:), scroll_to_latest_message())
    }
    // The rest of the ClientMessages are exclusively handled by the server
    ClientMessage(_) -> {
      #(model, effect.none())
    }
    // Merge strategy
    ServerMessage(shared.ServerUpsertChatMessages(server_messages)) -> {
      let chat_msgs =
        model.chat_msgs
        // OmniMessage shines when you're OK with the server being source of truth
        |> dict.merge(server_messages)
      #(Model(..model, chat_msgs:), effect.none())
    }
    // State handlers - use for initialization, debug, online/offline indicator
    TransportState(transports.TransportUp) -> {
      #(
        model,
        effect.from(fn(dispatch) {
          dispatch(ClientMessage(shared.FetchChatMessages))
        }),
      )
    }
    TransportState(transports.TransportDown(_, _)) -> {
      // Use this for debugging, online/offline indicator
      #(model, effect.none())
    }
    TransportState(transports.TransportError(_)) -> {
      // Use this for debugging, online/offline indicator
      #(model, effect.none())
    }
  }
}
See? Same logic. The only differences are that some types are wrapped so we can share them with the server, and instead of handling network errors directly we now handle them through a TransportState variant.
Other than that, you dispatch messages and the server replies as if it’s the same app.
You decide how to encode the messages, and what transport to send them through. As long as the server can understand those, you can use whatever server, in any language. Currently we have transports for HTTP and websockets, but any transport is possible. For example, you could write one to communicate with an Electron or Tauri backend.
Here are some examples utilizing omnimessage_server, which is meant for Gleam servers. Say you have this simple handler:
fn handle(ctx: Context, msg: Msg) -> Msg {
  case msg {
    ClientMessage(shared.UserSendChatMessage(chat_msg)) -> {
      ctx |> context.add_message(chat_msg)
      context.get_chat_messages(ctx)
      |> shared.ServerUpsertChatMessages
      |> ServerMessage
    }
    ClientMessage(shared.UserDeleteChatMessage(chat_msg_id)) -> {
      ctx |> context.delete_message(chat_msg_id)
      context.get_chat_messages(ctx)
      |> shared.ServerUpsertChatMessages
      |> ServerMessage
    }
    ClientMessage(shared.FetchChatMessages) -> {
      context.get_chat_messages(ctx)
      |> shared.ServerUpsertChatMessages
      |> ServerMessage
    }
    ServerMessage(_) | Noop -> Noop
  }
}
Here’s how you’d use it in a Gleam HTTP server:
use <- omniserver.wisp_http_middleware(
  req,
  "/omni-http",
  encoder_decoder(),
  handle(ctx, _),
)
Just give it the request, the path it should handle, the message encoder/decoder, and the handler from above, and it will:

1. Decode incoming messages
2. Run them through the handler
3. Encode the result
4. Generate an HTTP response with it
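For context, here is how that middleware might sit inside a wisp request handler. This is a sketch; the surrounding router function and the wisp.not_found fallback are my assumptions, while the middleware call itself is taken from above:

```gleam
import wisp.{type Request, type Response}

// Hypothetical placement inside a wisp router.
fn handle_request(req: Request, ctx: Context) -> Response {
  // If the request targets "/omni-http", OmniMessage decodes the
  // messages, runs them through `handle`, and builds the response...
  use <- omniserver.wisp_http_middleware(
    req,
    "/omni-http",
    encoder_decoder(),
    handle(ctx, _),
  )
  // ...otherwise we fall through to the rest of our routes.
  wisp.not_found()
}
```

The use expression is the same final-argument-callback sugar we met earlier: everything below it is the fallback the middleware calls when the path does not match.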
Need to send messages without the client initiating a request? Use websockets:
["omni-pipe-ws"], http.Get ->
omniserver.mist_websocket_pipe(
req,
encoder_decoder(), // this is the same encoder_decoer
handle(ctx, _), // the same message handler
logger, // error handler
)
Have a more complex app and you need structure? Use a Lustre server component!
["omni-app-ws"], http.Get ->
omniserver.mist_websocket_application(
req,
chat.app(), // lustre server component
ctx, // flags for its init
logger // error handler
)
This is what the server component looks like:
// MODEL ---------------------------------------------------------------

pub type Model {
  Model(chat_msgs: dict.Dict(String, shared.ChatMessage), ctx: Context)
}

fn init(ctx: Context) -> #(Model, effect.Effect(Msg)) {
  #(Model(chat_msgs: context.get_chat_messages(ctx), ctx:), effect.none())
}

// UPDATE --------------------------------------------------------------

pub type Msg {
  ClientMessage(ClientMessage)
  ServerMessage(ServerMessage)
}

pub fn update(model: Model, msg: Msg) {
  case msg {
    ClientMessage(shared.UserSendChatMessage(chat_msg)) -> #(
      model,
      effect.from(fn(dispatch) {
        model.ctx |> context.add_message(chat_msg)
        model.ctx
        |> context.get_chat_messages
        |> shared.ServerUpsertChatMessages
        |> ServerMessage
        |> dispatch
      }),
    )
    ClientMessage(shared.UserDeleteChatMessage(chat_msg_id)) -> #(
      model,
      effect.from(fn(dispatch) {
        model.ctx |> context.delete_message(chat_msg_id)
        model.ctx
        |> context.get_chat_messages
        |> shared.ServerUpsertChatMessages
        |> ServerMessage
        |> dispatch
      }),
    )
    ClientMessage(shared.FetchChatMessages) -> #(
      model,
      effect.from(fn(dispatch) {
        model.ctx
        |> context.get_chat_messages
        |> shared.ServerUpsertChatMessages
        |> ServerMessage
        |> dispatch
      }),
    )
    ServerMessage(shared.ServerUpsertChatMessages(chat_msgs)) -> #(
      Model(..model, chat_msgs:),
      effect.none(),
    )
  }
}
It’s like having the same app spread across two different files.
OmniMessage is still very young and is missing some important features, but the vision is clear — it is the last piece in the trifecta that is zen state management:
Client state — solved by MVU
Server state — solved by server components/LiveView approach
Hybrid state — solved by OmniMessage
The hybrid state OmniMessage represents is a very sharp sword that can be tricky to wield without a clear separation of concerns. This is why OmniMessage shines when a single party is the source of truth, since we can simply override the other party’s state every time a message arrives. This gives us all the benefits of OmniMessage without the hornet’s nest that is carefully merging state.1
This is why Lustre is the end game of frontend development. It collected all of the right solutions and wrapped them up in the nicest, gleamy package.
Full, working, code is available in the OmniMessage repository.
Here is the OmniMessage documentation:
https://hex.pm/packages/omnimessage_server
https://hex.pm/packages/omnimessage_lustre
And don’t forget to check out Nestful! Not only is Nestful written in Gleam (the new “written in Rust”), but it actually has novel ways to manage your time:
Or you could use a CRDT (which we do, but that’s a story for another post)