As of 2025, Rust has quite a few user interface libraries; however, when looking for a GUI toolkit to use in a game engine, the number of usable libraries drops to just a few. This article presents a minimal, engine-agnostic GUI architecture designed specifically for Rust, which I developed for a game I'm working on.
The main problem with integrating GUI libraries into a custom game engine is that they almost all provide their own renderer, their own event loop, their own asset system, and so on. Those systems inevitably end up conflicting with the engine backend. For the few that are fully platform-independent, like the egui core crate, styling and looks were a deal breaker.
This is NOT a Rust library. This is an in-depth guide to my own GUI architecture. The goal of this article is to share the work that I've done, and to be a resource for people looking to build their own custom GUI toolkits.
This article is better read on a desktop, or in any environment where the screen is large enough to hold the demo side by side. This enables the side-by-side code lookup used to reference the code associated with each part of the article. Also, the demos were built for desktop applications; mobile platforms were not tested.
Special thanks to:
- Egui: the inspiration for the top level API and for being a great GUI library
- How Clay's UI Layout Algorithm Works: The main inspiration for the layout system
Egui «aims to be the easiest-to-use Rust GUI library, and the simplest way to make a web app in Rust» (from the egui README on github).
Having already integrated egui into a few personal projects (including in my last article), I really think egui is one of the best GUI libraries available for Rust. However, it falls short on two important points.
Styling. For styling, egui uses egui::style::Style and egui::style::Visuals, which is a very limited way to change the visual style of components.
Outside of changing colors and font sizes, you don't get much customization.
Layout. Egui uses a single-pass layout system. This heavily limits the complexity of the layouts you can implement. Outside of settings menus, game UI layouts tend to have non-standard presentation because they have to compromise between displaying information and showing gameplay. A multi-pass layout system gives all the power needed to build even the craziest of layouts.
Keeping layout and styling in mind, you can head over to gameuidatabase.com and think about how you would implement any of those interfaces. Not just using egui, but with any GUI system. Without an insanely powerful environment like HTML/CSS or coding a custom system (*wink* *wink*), things get tricky really quickly.
One last thing (although this is purely subjective): egui is an immediate-mode GUI, and I prefer retained-mode GUIs.
Mostly because I find it much easier to define complex layouts or extensive styling in a retained environment.
🤓 Immediate-mode GUI vs Retained-mode GUI
In "immediate mode", the code that builds your UI components runs every frame. Some of the most commonly used game GUI libraries use immediate mode (for example, imgui).
The lack of state makes them much easier to integrate into an existing system. With Rust specifically, it makes managing lifetimes easier because the UI data only exists for the duration of a function call.
In "retained mode", the UI state persists between frames. Usually there is a state object that is initialized once, which can then be used to modify the GUI visuals at runtime.
Retained-mode GUIs are often much more complex and powerful than immediate-mode GUIs, but they are harder to fit into an existing system. HTML/CSS, for example, operates as a retained-mode GUI.
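To make the contrast concrete, here is a minimal sketch (with hypothetical names; none of this is the article's actual API) of the retained-mode shape used throughout this article. In immediate mode, something like `if ui.button("Toggle").clicked() { ... }` would run every frame and no widget object would survive the frame. In retained mode, the UI is built once and then mutated through handles:

```rust
// Hypothetical retained-mode sketch: widgets persist in a state object
// and are mutated through handles after the build step.
struct Button { label: String }
struct Ui { buttons: Vec<Button> }

impl Ui {
    fn add_button(&mut self, label: &str) -> usize {
        self.buttons.push(Button { label: label.into() });
        self.buttons.len() - 1 // handle into the retained state
    }
    fn set_label(&mut self, handle: usize, label: &str) {
        self.buttons[handle].label = label.into();
    }
}

fn main() {
    let mut ui = Ui { buttons: Vec::new() };
    let toggle = ui.add_button("Toggle");
    // Later, at runtime: mutate the retained state through the handle.
    ui.set_label(toggle, "Toggled!");
    println!("{}", ui.buttons[toggle].label); // prints "Toggled!"
}
```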
This article is split into two sections: the "usage" section that explains how the GUI API is used and the "implementation" section that covers how things work under the hood.
The GUI architecture is built from the ground up to work with the Rust programming language. The first requirement is to make sure the API doesn't store callbacks. Mixing callbacks and GUI logic is one of the quickest ways to make the borrow checker throw a tantrum, which tends to have a terrible effect on API ergonomics. A callback-less API also greatly simplifies lifetime management; in the best-case scenario, lifetimes are completely elided.
Flexibility is also very important. Game GUIs have unique visual requirements and an API that is too restrictive can choke on edge cases. Here, each GUI component defines its own style and rendering logic. A custom GUI architecture can also directly reuse the asset system (images, fonts, shaders) from the host game engine. Data types are "hardcoded" in an enum instead of using generics or trait objects. All of this gives plenty of leeway when encountering edge cases.
Under anything but the most extreme UI workloads, performance is a non-issue. Rust is very fast and rendering UI elements is really simple. So long as the most obvious code smells are avoided, such as allocating large amounts of memory each frame or generating a draw call for every sprite, UI rendering will be a fraction of a fraction of the total rendering budget.
The entry point of the GUI system is the Gui struct, a context structure that holds all the GUI data for the duration of the application. The context structure is mostly decoupled from the rest of the engine. The only exception is the assets and the base types that are imported from the host engine.
The GUI context, by itself, doesn't need any parameters to be instantiated. The demo simply derives the default implementation from the GameData struct.
Because some assets (like fonts) are taken from the host engine, an initialization step is needed to copy the shared assets and to initialize the GUI view.
This is done in the init function.
The main functionalities of the GUI context can be split into four categories: building and updating the GUI, handling user inputs, sending user events, and generating the sprites for rendering. Here's a pseudocode example:
fn main() {
    let mut gui = Gui::default();
    gui.build(|gui| {
        // TODO: define the UI components
    });

    loop {
        // Send the user inputs to the GUI
        gui.send_inputs(user_inputs);

        // Process the GUI events
        while let Some(Ok(event)) = gui.read_next_event() {
            // TODO: handle the event
        }

        // Generate the sprites
        let mut sprites = Vec::new();
        gui.generate_sprites(|sprite| {
            sprites.push(sprite);
        });

        render(sprites);
    }
}
And here's what the first demo's UI building code looks like (code link).
fn basic_demo(assets: &Assets, gui_state: &mut RunningGuiState, gui: &mut GuiBuilder) {
    let atlas = &assets.atlas;
    let button_style = default_button_style(atlas);

    gui_state.ferris_type = 0;
    gui_state.ferris = gui.image_state(atlas.texture, atlas.ferris);

    gui.layout_center();
    gui.layout_items_flex(flexbox_layout());
    gui.group(|gui| {
        gui.image_dyn(gui_state.ferris);
        gui.spacer(0.0, 10.0);
        gui.button(GuiEvent::ToggleFerris, &button_style, "Toggle", DEFAULT_TEXT_SCALE);
        gui.spacer(0.0, 10.0);
        gui.button(GuiEvent::NextDemo, &button_style, "Next", DEFAULT_TEXT_SCALE);
    });
}
Building the GUI can happen at any time during the engine's game loop. In this demo, the first build happens when the game state changes from Uninitialized to Running.
All of the GUI building logic is grouped into the build_gui module. The module exports the build
function to update the current state of the UI based on the current value of state.current_demo.
In a real application, each GUI interface would have its own module, but the demo code is small enough to fit into a single file.
The build function in the build_gui module calls the build method of the GUI context, which in turn calls the build method of the GUI builder. The context function takes a callback. The callback has a single argument: a GuiBuilder value that contains all the functions required to create new UI components, define layouts, and register events (yes, this is a lot of building).
Calling the build function of the context replaces all existing data with the new data defined by the callback. The GUI must be rebuilt every time the user changes interfaces.
Let's start by looking at how the first demo works.
• Inserting new components
The basic_demo function creates six UI components: a group, an image, two spacers, and two buttons. A builder method named after the GUI component inserts each one into the GUI context. The code is not in the main builder source; instead, it is located in the GUI component's source. For example, the button builder function inserts a GuiComponentButton into the GUI context.
Components with children, like the group component, take a callback just like the original build function and pass down the builder object.
Some components, like the list_view_base component, use a custom builder as argument in their callback.
• Defining a component style
There is no global styling system. Instead, each component can define its own "ComponentStyle" struct. This struct is instantiated by the user, passed to the component's builder function, and finally stored in the component data.
The GUI system does not own the assets used by the component styles; instead, it borrows them from the host engine. This demo uses an atlas to store all the textures, and the style data stores a unique identifier for the texture plus the UV coordinates of the sub-image to display. For example, see the default_button_style function.
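A per-component style struct along these lines could look like the following sketch. The names and UV values are hypothetical; the point is that the style only stores an engine-side texture identifier and sub-image coordinates, never the texture itself:

```rust
// Hypothetical style sketch: the GUI borrows assets from the engine,
// so the style stores only an id and the UV rectangle in the atlas.
#[derive(Clone, Copy)]
struct TextureId(u32);

#[derive(Clone, Copy)]
struct UvRect { x: f32, y: f32, w: f32, h: f32 }

#[derive(Clone, Copy)]
struct ButtonStyle {
    texture: TextureId, // engine-side atlas texture identifier
    default: UvRect,    // sub-image for the idle state
    hovered: UvRect,    // sub-image for the hovered state
}

fn default_button_style(texture: TextureId) -> ButtonStyle {
    ButtonStyle {
        texture,
        default: UvRect { x: 0.0, y: 0.0, w: 64.0, h: 24.0 },
        hovered: UvRect { x: 0.0, y: 24.0, w: 64.0, h: 24.0 },
    }
}

fn main() {
    let style = default_button_style(TextureId(1));
    println!("texture {} hovered.y {}", style.texture.0, style.hovered.y);
}
```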
• Defining a component layout
The builder has two groups of functions to define the layout of a component: layout_* and layout_items_*. The first group defines the layout of the component within its parent, and the second defines the layout of the component's children. * specifies the type of layout to apply, for example layout_center. Calling a layout function sets the layout for the next component; the layout value is cleared after every insert.
By default, child components use their parent's items layout to size and position themselves; specifying layout_* overrides that. The GUI "root", aka the screen, does not define an items layout, so any top-level item needs to explicitly define its own. In "basic_demo", layout_center centers the group on the screen.
The builder offers multiple helper functions to quickly define layouts, but under the hood they are all stored in a GuiLayout struct.
The Layout Compute section will go deeper on how the layout works.
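The "layout applies to the next insert" rule can be sketched as a pending value that the builder takes and clears on every insert. All names here are hypothetical simplifications of the real GuiBuilder:

```rust
// Hypothetical sketch of the pending-layout pattern: a layout_* call
// stashes a layout, and the next component insert consumes it.
#[derive(Clone, Copy, Debug, PartialEq)]
enum GuiLayout { FitChildren, Center }

struct GuiBuilder {
    pending_layout: Option<GuiLayout>,
    layouts: Vec<GuiLayout>, // one entry per inserted component
}

impl GuiBuilder {
    fn layout_center(&mut self) {
        self.pending_layout = Some(GuiLayout::Center);
    }

    fn push(&mut self) -> usize {
        // take() clears the pending layout so it only applies once
        let layout = self.pending_layout.take().unwrap_or(GuiLayout::FitChildren);
        self.layouts.push(layout);
        self.layouts.len() - 1
    }
}

fn main() {
    let mut b = GuiBuilder { pending_layout: None, layouts: Vec::new() };
    b.layout_center();
    let first = b.push();  // receives Center
    let second = b.push(); // layout was cleared: falls back to the default
    println!("{:?} {:?}", b.layouts[first], b.layouts[second]);
}
```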
• Registering events
Events are user-defined values that can be raised by GUI components, such as a button click. Retained GUI frameworks commonly use callbacks (e.g., addEventListener).
However, in Rust, storing callbacks introduces lifetime management and interior mutability issues, which significantly complicates development and makes the end API
more annoying to use.
Events are passed to GUI components via their builder methods. Beyond that, event implementation is left to each GUI component (for example, see the button implementation).
Inside the GUI context, user events are converted into a common storage type, GuiInternalEvent (NonZeroU32 in the demo).
This ensures the GUI context struct remains generic-free. Having a static storage type (rather than Box<dyn Any>) also makes it easier to read events from the context.
The user type must implement Into<GuiInternalEvent> and TryFrom<GuiInternalEvent>.
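The conversion pair could look like this sketch, using NonZeroU32 as in the demo. The event names are borrowed from the demo code, but the numbering scheme is an assumption for illustration:

```rust
use std::num::NonZeroU32;

// The common storage type used by the GUI context (NonZeroU32 in the demo).
type GuiInternalEvent = NonZeroU32;

#[derive(Debug, PartialEq, Clone, Copy)]
enum GuiEvent { ToggleFerris, NextDemo }

impl From<GuiEvent> for GuiInternalEvent {
    fn from(e: GuiEvent) -> Self {
        // Numbering starts at 1: NonZeroU32 cannot store 0.
        NonZeroU32::new(match e {
            GuiEvent::ToggleFerris => 1,
            GuiEvent::NextDemo => 2,
        }).unwrap()
    }
}

impl TryFrom<GuiInternalEvent> for GuiEvent {
    type Error = ();
    fn try_from(v: GuiInternalEvent) -> Result<Self, ()> {
        match v.get() {
            1 => Ok(GuiEvent::ToggleFerris),
            2 => Ok(GuiEvent::NextDemo),
            _ => Err(()), // unknown value: conversion fails gracefully
        }
    }
}

fn main() {
    let stored: GuiInternalEvent = GuiEvent::NextDemo.into();
    let back = GuiEvent::try_from(stored).unwrap();
    println!("{:?}", back); // prints "NextDemo"
}
```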
The Handling events section will go deeper on how the events work.
• Registering animations
The GuiAnimation struct defines a GUI animation. This is still a work in progress, but the end goal is to copy how animations work in HTML/CSS.
Animations are pushed into the GUI using the animate and animate_dyn functions. Just like layouts, calling those functions ties the animation to the next inserted component. The demo supports one animation per GUI component; adding support for more than one would be trivial, but it is not needed here.
Dynamic animations take a special state value GuiAnimationControl that gives the user control over how the animation is executed.
For example, playing, pausing and restarting. One GuiAnimationControl can control many animations.
The animations section will go deeper on how animations work.
Just like Rust variables are immutable by default, the GUI state is also immutable. After the build phase, it becomes impossible to insert new UI components into the GUI, and all components become read-only.
• Defining mutable state
For large UI state changes (e.g., changing interfaces), rebuilding is fine. However, all retained GUI systems need some level of fine-grained mutable state. GuiState values are used to mark mutable values in a UI. The struct uses generics to implement a strongly typed interface.
GuiState values allow the user to read or write values within the current GUI. They can only be instantiated at build time and are used by UI components with mutable state. A GuiState only stays valid for the current UI; calling the build function again will invalidate all generated GuiState values.
For example, the first demo uses gui.image_state to define a mutable image resource and it
uses image_dyn with the state value to display a mutable image in the GUI.
The GUI state pools mutable values into the GuiStateAlloc struct. The types of values that can be stored are explicitly defined in the GuiStateStore enum.
All state values for a specific interface can be grouped into a single struct. That struct can then be stored in the game state. This gives the program an overview of what is displayed in the current interface and a straightforward way to update the GUI state without having to worry about the inner GUI structure.
• Reading/Writing state
Once the GUI is running, state variables can be accessed using get_state and set_state. Because a state value
is only a pointer into the GUI data, updates have to go through the main Gui context. See the
update function in the main game loop for an example.
In this implementation, the state updates are grouped inside an array and are only propagated at sprite generation. When
generate_sprites is called, state::sync writes the state updates to the GUI components.
This is because state objects can also be changed by other GUI components, and having a single point of update makes things easier to handle.
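The deferred-update idea can be reduced to a few lines. This is a toy sketch with hypothetical types (plain u32 values standing in for the real typed state): set_state only records the change, and a sync step applies everything at the single update point:

```rust
// Hypothetical sketch of deferred state propagation: updates are
// queued and only pushed to the components when sync() runs, which
// the real system does at sprite generation time.
struct StateAlloc {
    values: Vec<u32>,    // the pooled state values (u32 for brevity)
    updated: Vec<usize>, // indices of values changed since last sync
}

impl StateAlloc {
    fn set_state(&mut self, index: usize, value: u32) {
        self.values[index] = value;
        self.updated.push(index); // propagation happens later, in sync()
    }

    fn sync(&mut self, components: &mut [u32]) {
        for &i in &self.updated {
            components[i] = self.values[i]; // single point of update
        }
        self.updated.clear();
    }
}

fn main() {
    let mut state = StateAlloc { values: vec![0, 0], updated: Vec::new() };
    let mut components = vec![0, 0];
    state.set_state(1, 42);
    // Components are untouched until sync runs at sprite generation.
    state.sync(&mut components);
    println!("{:?}", components); // prints "[0, 42]"
}
```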
After defining user events and binding the values to the components during the build phase, event handling works in two steps: first sending the user inputs (mouse click, keystrokes) to the GUI and then processing the GUI events generated by those inputs. GUI components could also generate events without user interaction, but this demo does not include an example of that.
• Sending user inputs to the GUI
The
GuiInputs struct groups the most recent user inputs. Then the
send_inputs
function takes this value and runs all the input processing logic. If the GUI state changed in any way, this function returns true; otherwise, it returns false.
For example, this demo first receives user inputs from JavaScript callbacks, the raw inputs are then sent to the WASM game client, and finally the user inputs are forwarded to the GUI system using the dispatch_inputs_to_gui function.
Under the hood, the GUI system will dispatch the inputs to the right components. This can trigger effects like a button style changing from default to hovered, or generating user events.
• Receiving user events
When send_inputs is called, under specific hardcoded conditions, it can trigger user events registered during the build phase.
Reading those events is done using the
read_next_event function. The function returns the next event in the pipeline if there is one or None
if all events have been processed. It also tries to automatically convert the storage type of the GUI back into the user type.
A Rust iterator cannot be used here. It is pretty much certain that the event loop will mutate the GUI state, and using an iterator would make that impossible because of the borrow checker. The read_next_event method fixes that problem without having to allocate memory.
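The cursor-based alternative can be sketched as follows (hypothetical simplification with u32 events): each call advances a read cursor, so no iterator borrows the GUI and the event loop is free to mutate it:

```rust
// Hypothetical sketch of read_next_event: a cursor into the queue
// instead of an iterator, so the caller keeps full mutable access
// to the GUI between calls.
struct Gui {
    events: Vec<u32>,
    next_event: usize, // read cursor, reset when the queue is drained
}

impl Gui {
    fn read_next_event(&mut self) -> Option<u32> {
        let event = self.events.get(self.next_event).copied();
        match event {
            Some(_) => self.next_event += 1,
            None => {
                // Queue fully drained: recycle the buffer, no allocation.
                self.events.clear();
                self.next_event = 0;
            }
        }
        event
    }
}

fn main() {
    let mut gui = Gui { events: vec![1, 2], next_event: 0 };
    let mut seen = Vec::new();
    while let Some(event) = gui.read_next_event() {
        // Mutating `gui` here is fine: nothing borrows it across iterations.
        seen.push(event);
    }
    println!("{:?}", seen); // prints "[1, 2]"
}
```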
In the demo, see the update function in the main loop again.
To turn GUI components into sprites, the generate_sprites function is used. The function takes a callback whose only parameter is a GuiOutputSprite struct. The callback is triggered for every sprite generated by the GUI.
Right now, the returned sprites are really just AABBs using the static resource identifiers passed to the builder functions.
A real application may need to handle more complex shapes, so GuiOutputSprite could be turned into an enum.
The return value of the generate_sprites function is a bool that tells the renderer if the GUI should be re-rendered next frame.
This can be used for animations or for some specific procedure. For example, in the list view demo, the first frame computes the inner size of
the list and the second frame resizes the scrollbar components to match the inner size. Internally, this process is handled by an AfterRenderHook (more on that later).
GuiSpriteFlags defines what kind of sprite should be rendered. The three kinds ("textured", "solid_color", and "text") are self-explanatory. The demo handles all three in a single shader. A more dynamic system might pass a unique shader identifier instead, just like it is done with textures.
The last step before rendering is to turn the sprite list into a renderable mesh. This is not handled by the GUI system; the demo uses its own tiny rasterizer in gui_rasterizer.
With the API overview done, let's dig into the code.
Component storage is probably the simplest part of this system, mostly because the GUI structure becomes immutable after build time. Components are stored in linear memory for fast iteration speed and are split into a structure of arrays since not every subsystem needs to query all the component data at once.
The GUI components are stored in
GuiComponents.
The structure only allocates one big buffer and then suballocates the GUI components. By default, a GUI has a capacity
of 32 components (defined in impl Default). In the build phase, if this capacity is exceeded, the buffer will be reallocated.
GUI components have the following subcomponents:
- GuiNode: stores the number of children and descendants of a component. It is required to traverse the data structure. It also stores a clip flag telling the renderer whether to clip children that overflow the parent bounds.
- GuiComponentView: stores the results of the layout computation.
- GuiLayout: stores the layout info of the component.
- GuiComponentData: stores the component-specific data.
All the functions in GuiComponents are helpers to get or set data.
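A reduced structure-of-arrays sketch of this storage might look like the following. The types are hypothetical stand-ins for the real subcomponents, and unlike the real system (which suballocates every array from one big buffer), each Vec here allocates on its own:

```rust
// Hypothetical SoA sketch: each subsystem iterates only the array it
// needs, and every array stays index-aligned with the others.
#[derive(Default, Clone, Copy)]
struct GuiNode { children: u32, descendants: u32 }

#[derive(Default, Clone, Copy)]
struct GuiComponentView { x: f32, y: f32, w: f32, h: f32 }

#[derive(Default)]
struct GuiComponents {
    nodes: Vec<GuiNode>,
    views: Vec<GuiComponentView>,
}

impl GuiComponents {
    fn with_capacity(cap: usize) -> Self {
        // The real system makes a single allocation and suballocates
        // the arrays from it; separate Vecs keep this sketch short.
        GuiComponents {
            nodes: Vec::with_capacity(cap),
            views: Vec::with_capacity(cap),
        }
    }

    fn push(&mut self) -> usize {
        // Every array grows in lockstep so one index addresses them all.
        self.nodes.push(GuiNode::default());
        self.views.push(GuiComponentView::default());
        self.nodes.len() - 1
    }
}

fn main() {
    let mut components = GuiComponents::with_capacity(32);
    let id = components.push();
    println!("{} {}", id, components.nodes.len()); // prints "0 1"
}
```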
Building a sane state management system in a GUI toolkit is a challenge. Implementing basic support so that a user can query or update information from the GUI is pretty straightforward; however, things get more difficult when GUI components themselves need to query or update data from other components.
In order to make things manageable, state management is implemented as a separate opaque structure that can be queried using special handles (GuiState).
This way both the user and the GUI components can read and write state without having to worry about things like the Borrow Checker.
• State storage
All the state values are stored in the GuiStateAlloc. The inner storage uses a Vec of
GuiStateStoreWrapper. The wrapper stores both the state value and the listeners.
The listeners array stores the indices of the components that need to be updated when the value changes.
Back in the pool struct, the updated Vec stores the indices of the state values that were updated. This is because state updates are not immediately propagated to the components; propagation instead happens once, when the GUI sprites are generated.
GuiState handles are only valid for the active GUI and will be invalidated the next time the GUI is rebuilt.
To keep track of that, a generation number is stored in the GUI state, and each GuiState stores the generation value of its GUI. If a state value from a previous generation is accessed, a warning is raised and the current state is left untouched. The program will not panic.
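The generation check boils down to comparing two counters. This sketch uses hypothetical simplified types (u32 values, index-based handles) to show the shape of the validation:

```rust
// Hypothetical sketch of generation-checked handles: a stale GuiState
// logs a warning and is rejected instead of panicking.
struct GuiState { index: usize, generation: u32 }

struct StatePool { values: Vec<u32>, generation: u32 }

impl StatePool {
    fn rebuild(&mut self) {
        self.values.clear();
        self.generation += 1; // invalidates every previously issued handle
    }

    fn get_state(&self, handle: &GuiState) -> Option<u32> {
        if handle.generation != self.generation {
            eprintln!("warning: stale GuiState handle");
            return None; // no panic: the access is simply rejected
        }
        self.values.get(handle.index).copied()
    }
}

fn main() {
    let mut pool = StatePool { values: vec![7], generation: 0 };
    let handle = GuiState { index: 0, generation: 0 };
    println!("{:?}", pool.get_state(&handle)); // prints "Some(7)"
    pool.rebuild();
    println!("{:?}", pool.get_state(&handle)); // prints "None": old generation
}
```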
• State registering
A new state value is registered using the push method. This is usually done at build time by the *_state methods on the GuiBuilder struct
(example).
The method takes a GuiStateStore and returns a GuiState referring to this value. However, GUI components are free to create their own state and
use it internally. For example, the scroll_view component manages scrolling offsets using internal state.
To bind a GUI component to a GUI state, the insert_*_listeners methods must be called. insert_component_listener binds
a state to component-specific data, and insert_layout_listener binds a state to layout data.
Two methods are needed because component data and component layouts are stored separately. Internally a flag is used to know how to synchronize updates.
The insert listeners methods are used by the GUI component creation functions in the builder. For example the label_dyn function.
• State read/write
From the user side, state is retrieved with the get_state method
and updated with the set_state method. Both methods take the state value
to find the data in the state storage. get_state will copy the inner value and set_state will replace the old value
with the new value provided to the method. The validate_state
method ensures that reading or writing is safe.
In generate_sprites, state::sync synchronizes the values in the state pool with the GUI components.
Using listener.is_layout, the code matches the state to either component data or layout data. The sync_state_data
method of the target GUI component then receives the wrapped value and finalizes the update.
Layout computation happens during sprite generation, just after the state is synchronized. The layout_compute function handles all the logic. The layout system uses three passes inspired by Clay's UI layout video (https://www.youtube.com/watch?v=by9lQvpvMIc), with a few extra bells and whistles to support multiple layouts.
All three layout passes loop over every GUI component using a depth-first algorithm. Note that only the layout functions used in the demo are implemented.
• The sizing pass
The fit pass computes the fixed size of a component. The first thing to look up is the minimum size defined in the component data. This is done by the fit_component_from_default function.
For GUI components with children, fit_layout_size is called again to visit the children. The function saves the total size of the children in the LayoutSizingParentFit struct.
Then, fit_component_from_layout is used to compute the fit sizing based on the layout data and the total children size, if any.
Once the fit sizing is computed, the update_parent_size function sends back the computed layout info to the parent GUI component. Finally, the computed size is stored in the component view.
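A toy version of the fit pass on a tree can be written as a short recursion. This sketch assumes a single column layout (the real system supports several layout types) and hypothetical node types:

```rust
// Toy fit-pass sketch for a column layout: a parent's fit size is the
// widest child by the sum of child heights, clamped to its own minimum.
struct Node {
    min: (f32, f32),     // minimum size from the component data
    children: Vec<Node>,
}

fn fit(node: &Node) -> (f32, f32) {
    let (mut w, mut h) = (0.0f32, 0.0f32);
    for child in &node.children {
        let (cw, ch) = fit(child); // depth-first: size children first
        w = w.max(cw); // column: width fits the widest child
        h += ch;       // column: heights stack vertically
    }
    // The computed children size is clamped to the node's own minimum.
    (w.max(node.min.0), h.max(node.min.1))
}

fn main() {
    let group = Node {
        min: (0.0, 0.0),
        children: vec![
            Node { min: (100.0, 30.0), children: vec![] }, // e.g. an image
            Node { min: (80.0, 24.0), children: vec![] },  // e.g. a button
        ],
    };
    println!("{:?}", fit(&group)); // prints "(100.0, 54.0)"
}
```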
• The grow pass
The grow pass takes the minimum size of the component computed in the fit pass and, if the layout demands it, grows the size to fill the free space in the parent.
In the demo, layout_background takes a GUI component and expands its size to cover the entirety of its parent.
Other layout models also support growing elements. For example, flexbox in CSS will grow items on the cross-axis with
align-items set to stretch or on the main axis using the flex-grow property.
However, none of the demos makes use of those features, so there is no code implementation for them.
• The position pass
The position pass finds the final position of all the components in the view. position_layout first checks the align_self.align value of the layout. If the component layout does not override its parent layout, position_layout_parent computes the position; otherwise, the position algorithm moves the component to the right place within its parent.
Then layout.align_self.offset is added to the computed position. The offset can be used to move an item when scrolling, or to move an item inside its parent, as done in the window demo. This gives us the true final position of the component.
After positioning, we can compute the final scissor AABB for the component if node.clip is true or if the component parent clips its children.
Finally, position_layout is called again for every child of the current component.
All of the input handling logic is grouped under the inputs module. Inside this module, the GuiInputState struct is used to store the current input state of the GUI (e.g. which component is being hovered).
Let's start with the send_inputs function. Each input type (mouse moves, key presses) is handled by its own function.
Each input handling function returns true if the received input changed the GUI state. If the GUI state changed, the send_inputs function also returns true to signal to the user that the GUI should be redrawn.
The actions executed by the input functions fall into two categories:
• Updating the GUI input state
GuiInputState stores the indices of which component is being hovered, which component has user focus, and which component is being pressed. Updating the GuiInputState must be the first thing done when processing inputs, because that state is used by the dispatch step that follows.
For example, the mouse move handler will call the update_hovered_components function. This function iterates over the GUI components and builds a list, from bottommost to topmost, of all currently hovered GUI components.
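Because the components are stored in depth-first order, a linear scan that collects every bounding box containing the cursor naturally produces a bottommost-to-topmost list. A sketch with hypothetical types:

```rust
// Hypothetical hit-test sketch: views are stored in depth-first order,
// so collecting matches in index order yields bottommost-to-topmost.
#[derive(Clone, Copy)]
struct Aabb { x: f32, y: f32, w: f32, h: f32 }

impl Aabb {
    fn contains(&self, px: f32, py: f32) -> bool {
        px >= self.x && px < self.x + self.w
            && py >= self.y && py < self.y + self.h
    }
}

fn update_hovered_components(views: &[Aabb], cursor: (f32, f32)) -> Vec<usize> {
    views.iter()
        .enumerate()
        .filter(|(_, view)| view.contains(cursor.0, cursor.1))
        .map(|(index, _)| index)
        .collect()
}

fn main() {
    let views = [
        Aabb { x: 0.0, y: 0.0, w: 200.0, h: 200.0 }, // group (bottommost)
        Aabb { x: 50.0, y: 50.0, w: 60.0, h: 20.0 }, // button (topmost)
    ];
    // Cursor over the button: both the group and the button are hovered.
    println!("{:?}", update_hovered_components(&views, (60.0, 55.0))); // prints "[0, 1]"
}
```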
• Dispatching the inputs to the GUI components
Each GUI component has methods to receive specific user inputs, but not all of them need to respond to all input types. Before dispatching inputs to
a GUI component, the respond_to_input_type method is called. If a component accepts an event type, the associated
input function on GuiComponentData is called.
Input functions, like the on_mouse_state_changed function, can take GuiStateAlloc and a GuiOutputEvents
parameter on top of the input-specific data. This allows the targeted component to update the GUI state and push new events to the user.
In the context of this GUI architecture, rendering means turning GUI components into sprites. All logic related to this is handled in the generate_sprites module.
Generating sprites works by iterating over the GUI components, extracting the view and the data, and calling a more specific generate_* function, where * is the name of the GUI component to render. Not all GUI components need to be rendered; components without visible parts or those completely clipped from view are skipped.
Inside the component rendering function, one or more GuiOutputSprite values are generated. Some component-specific layout can also be computed; for example, the button render function centers its own text.
Rendering sprites does not allocate memory; instead, sprites are passed back to the game client using a callback. The game client must turn the sprites into renderable meshes. See generate_meshes_inner for an example.
Rasterizing could be handled by the GUI system; however, the architecture of this project splits data from rendering logic. That is why the GUI system is in
the data module, and the rasterizer is in the output module.
A way this could be improved is by defining a "renderer" trait in the GUI module, having the user's renderer implement it, and passing the renderer implementation to the generate sprites method.
The demo implements a few basic components, but the expectation is that each application will have its own custom component library. Custom components are built using the following steps:
• Creating a new component data type
GUI components are simple Rust structs. The first step is creating a new module in the components module and defining your own component type. The naming convention is GuiComponent[ComponentName]. Next, the new struct needs to be added to GuiComponentData. It's a good idea to keep the component size under 64 bytes to reduce the padding bytes in the enum. This can be done by boxing the data that is accessed less often. See GuiComponentTextInput for an example.
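The boxing trick can be checked at compile time with a size assertion. This is a hypothetical sketch (the field names are illustrative, not the demo's); exact sizes depend on the target, but Box<T> is pointer-sized, so boxing the cold data reliably keeps the variant small:

```rust
// Hypothetical sketch: every enum variant pays for the largest one,
// so large, rarely accessed data is boxed to keep the enum compact.
struct ColdData {
    history: [u64; 32], // large, rarely accessed payload
}

enum GuiComponentData {
    Button { pressed: bool },
    TextInput {
        caret: u32,
        cold: Box<ColdData>, // boxed: the variant stays pointer-sized
    },
}

fn main() {
    // Without the Box, TextInput would inline 256 bytes of ColdData
    // and blow up every variant of the enum.
    println!("{}", std::mem::size_of::<GuiComponentData>());
    assert!(std::mem::size_of::<GuiComponentData>() <= 64);
}
```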
• Builder code
Components are not directly initialized by the user; instead, they go through the GuiBuilder object. To keep all the component code together, a GuiBuilder impl block is added to the component module. Inside it, a new method named after the component is added.
To insert a new GUI component into the GUI context, use the push helper method for simple components and the
push_parent/pop_parent methods for components with children.
The builder can also access the internals of the main GUI struct using self.inner.
More complex components can insert their own "shadow components" as is done with the window component.
• Rendering
Rendering is done by implementing a generate_sprites method on the component struct and updating the match statement in the
generate_sprites method of the generate_sprites module.
The generate_sprites function receives the computed view of the component and a callback to generate sprites. The callback can be called for each sprite that builds the component.
• Inputs and events logic
Add the new component to the respond_to_input_type method of the components enum. The method returns an InputType that tells which input type is supported by the component. Then add the new component to the associated input methods.
Input logic can update the state and trigger user events; that's why many handlers take GuiStateAlloc and GuiOutputEvents as parameters.
• State updates
If a component needs to listen to state updates, as in the previous step, start by adding it to the sync_state_data method of the components enum. State updates are handled by patching the GUI component data with the value
from the GuiStateStore (example).
The more complex a GUI component becomes, the more annoying it is to implement as a single entity. Large components can be broken down into smaller components, as is done in the list view demo. Composed components can reuse the layout and input systems of the GUI engine, greatly reducing their implementation complexity. This also allows the end user to assemble their own custom components without having to dig into the engine's source code.
The list component is implemented as a user function, list_view_component and is composed of three built-in components: ScrollView,
ListViewBase, and ListViewItem.
The function takes four arguments: the GUI builder (gui), the values to display (&[&str]), a state value to track the clicked item (GuiState<usize>), and the event generated when the user clicks on an item. Because of the way layouts are passed down to components, the user-defined function receives its layout the same way built-in components do:
layout_parent_fixed_size temporarily stores data in the builder, and both values are used by the next component. In this case, the layout is consumed by the ScrollView.
If this method of passing down arguments is not clear enough, or if a component needs a large number of arguments in order to be initialized, the builder pattern can be used.
Inside the function, at the top level, a scroll_view component is used to add support for user scrolling. The scroll view uses a solid_color_block
component and a borders component to paint the background. The list view component also defines its own FlexboxItemsLayout.
For this component, the text_size and item_height are hardcoded.
Finally, a list_view_base component is used to display the items. Instead of passing down the items directly in the function, a callback is used.
The list view base uses its own custom builder GuiListViewBuilder that only exposes a single public method: list_view_item.
Items are implemented as GUI components and are inserted into the GUI tree. This way the layout system and the inner input system (click, hover, etc)
can be used on the items. All the parameters passed to the list_view_base component are copied into each added item. The list view component itself is
empty.
When an item is clicked, the event specified in on_list_view_item_clicked is sent back to the GUI. The demo raises the GuiEvent::AnimalSelected event.
However, this event does not tell which item was just clicked. That is why, when a list item is clicked, its ID (in this case its index in the original array) is written
into the selected state value. gui.get_state(gui_state.selected_item) is used to query that value.
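The click-handling flow above can be modeled in a few lines. This is a standalone sketch, not the engine's real types: the StateSlot, on_item_clicked, and event-queue shapes are assumptions made for illustration:

```rust
use std::{cell::Cell, rc::Rc};

// Hypothetical stand-ins for the GUI's event and state types.
#[derive(Debug, PartialEq, Clone, Copy)]
enum GuiEvent { AnimalSelected }

struct StateSlot(Rc<Cell<usize>>);

fn on_item_clicked(index: usize, slot: &StateSlot, events: &mut Vec<GuiEvent>) {
    // Write the item's ID (its index in the source array) into the state value...
    slot.0.set(index);
    // ...and raise the event configured via on_list_view_item_clicked.
    events.push(GuiEvent::AnimalSelected);
}
```

The event tells the application *that* a selection happened; querying the state slot afterwards tells it *which* item was selected.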
Some actions on GUI components depend on their rendering output. Take the text input component: to move the caret to the correct position when a user clicks on it, the component needs the coordinates of the top-right corner of the text label. However, because the layout and the component data are decoupled, the final position of the text is only known at the end of the render phase.
In order to work around this problem, the generate_sprites step can write some data into the component. In the
generate_sprites function of the text input, the rendering code writes
the text AABB into text.render_feedback. When the text input receives a mouse event, this render feedback data is used to move the caret
to the correct position.
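The render-feedback idea can be sketched as follows. The struct and method names here are illustrative assumptions, keeping only the shape described above: sprite generation caches the final text bounds, and a later click reads them back:

```rust
// Hypothetical sketch of render feedback: generate_sprites writes the
// final text AABB into the component, and a later mouse click uses it
// to place the caret.
#[derive(Default, Clone, Copy)]
struct Aabb { x: f32, y: f32, w: f32, h: f32 }

#[derive(Default)]
struct TextInput {
    render_feedback: Aabb, // written during sprite generation
    caret_x: f32,          // caret position in pixels
}

impl TextInput {
    // Called during the sprite-generation phase, once layout is final.
    fn generate_sprites(&mut self, final_text_aabb: Aabb) {
        self.render_feedback = final_text_aabb;
    }

    // Called when a mouse click lands on the component.
    fn on_click(&mut self, mouse_x: f32) {
        let fb = self.render_feedback;
        // Clamp the caret to the rendered text bounds cached last frame.
        self.caret_x = mouse_x.clamp(fb.x, fb.x + fb.w);
    }
}
```

The key point is the one-frame handshake: the click handler never recomputes layout, it only reads what the last render wrote back.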
In the worst case, some components are rendered differently depending on the state of other components. In this situation,
render feedback won't cut it; this is where after-render hooks come into play. For example, the scroll bar component needs to know
the total height of the parent GuiComponentScrollView and the total size of the GuiComponentScrollView children.
When generating the sprites for the vertical scroll bar component, its total height and position can be queried from its GuiComponentView. The problem is, the size of the scroll handle depends on the inner content size of the parent scroll view. To reduce complexity at sprite generation time, a component can only read its own data, which means render feedback alone is not enough.
To get this information, the scroll_view component creates an AfterRenderHook::UpdateScrollView at build time.
At sprite generation, the after_render_hooks::after_render function iterates over each registered callback and
runs its custom logic. If any of them returns true, the UI needs to be re-rendered next frame.
In the case of a scroll_view component, the function fetches the children size (i.e. view.items_size) from its computed view.
If the size matches the cached value in the callback, the hook returns false. If not, it calls the after_render functions on both
the scroll view and the scroll bar and returns true. With this, the next generate_sprites call will have the correct values.
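The hook loop described above reduces to something like this sketch. The types are hypothetical simplifications (a single hook kind, one measured value), but the control flow matches the description: every hook runs, and any `true` marks the UI dirty for the next frame:

```rust
// Illustrative after-render pass: each hook compares the freshly measured
// value against its cached copy and requests a re-render on change.
struct ScrollHook {
    cached_items_height: f32,
}

impl ScrollHook {
    // Returns true when the measured content size changed since last frame.
    fn run(&mut self, measured_items_height: f32) -> bool {
        if (self.cached_items_height - measured_items_height).abs() < f32::EPSILON {
            return false; // nothing changed, no re-render needed
        }
        self.cached_items_height = measured_items_height;
        true
    }
}

fn after_render(hooks: &mut [ScrollHook], measured: f32) -> bool {
    let mut rerender = false;
    for hook in hooks.iter_mut() {
        // `|=` so every hook still runs even after one requests a re-render.
        rerender |= hook.run(measured);
    }
    rerender
}
```

Because the hook caches the last observed value, the system converges: the frame after a content change re-renders once, and subsequent frames return false.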
Game UIs often incorporate animated elements. It would be possible to build animations on top of the dynamic state system, however managing animations from outside the GUI system is far from optimal. The animation system implemented in this project is inspired by CSS animations, where each animation consists of keyframes that interpolate predefined values. Only basic translations are implemented in this demo.
The animations base is defined in the GuiAnimation struct. Any animation information that is independent of GUI components and the animation runtime should be stored there.
GuiAnimation values are defined by the user and then inserted into the GUI at build time. When a component with an associated
animation is inserted into the GUI, register_animation wraps the GuiAnimation
into a GuiAnimationPlayState. This struct links the component to the animation and stores the animation's current
runtime.
Animations use the after-render hook system. Each animation inserts an AfterRenderHook::UpdateAnimation into the hooks array.
After sprites are generated, the UpdateAnimation logic interpolates the keyframes using the current animation runtime and then updates the GUI component data.
Because this process happens after sprite generation, the builder needs to call initialize_animations once at build time to make sure all animated components are correctly initialized.
Animation values are computed in interpolate_current_frame and the apply function that follows stores the computed
values in the component.
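A minimal keyframe-interpolation sketch in the spirit of interpolate_current_frame might look like this. The Keyframe type, linear easing, and single animated value are assumptions for illustration; the real system works on translation keyframes:

```rust
// Hypothetical keyframe interpolation, CSS-animation style: clamp outside
// the keyframe range, linearly interpolate between the surrounding pair.
struct Keyframe { time: f32, value: f32 }

fn interpolate_current_frame(keyframes: &[Keyframe], t: f32) -> f32 {
    let first = keyframes.first().expect("at least one keyframe");
    let last = keyframes.last().unwrap();
    if t <= first.time { return first.value; }
    if t >= last.time { return last.value; }
    // Find the surrounding keyframe pair and lerp between them.
    for pair in keyframes.windows(2) {
        let (a, b) = (&pair[0], &pair[1]);
        if t >= a.time && t <= b.time {
            let f = (t - a.time) / (b.time - a.time);
            return a.value + (b.value - a.value) * f;
        }
    }
    last.value
}
```

An apply step would then write the interpolated value into the animated component's offset, as the article's apply function does.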
Controlling animations is done using a GuiAnimationControl value, which is an opaque object that stores commands to execute.
The GuiAnimationControl is then stored in the GUI state, and at state sync time, the animation state is updated using the last values sent by the user.
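The command-queue idea behind an opaque control value can be sketched as follows. The enum, method names, and sync shape here are invented stand-ins, not the article's GuiAnimationControl API:

```rust
// Hypothetical sketch: user-facing methods queue commands into an opaque
// control object; the GUI drains the queue during state sync.
#[derive(Debug, PartialEq, Clone, Copy)]
enum AnimCommand { Play, Pause }

#[derive(Default)]
struct AnimationControl {
    pending: Vec<AnimCommand>,
}

impl AnimationControl {
    fn play(&mut self) { self.pending.push(AnimCommand::Play); }
    fn pause(&mut self) { self.pending.push(AnimCommand::Pause); }

    // Called at state-sync time: hand the queued commands to the GUI
    // and leave the queue empty for the next frame.
    fn sync(&mut self) -> Vec<AnimCommand> {
        std::mem::take(&mut self.pending)
    }
}
```

Keeping the control opaque means the user never mutates animation state directly; the GUI applies the drained commands at a single well-defined point in the frame.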
Animations can also be registered internally by a GUI component; however this is not included in the demo.