Today's bundlers rely on developers to decide where and when the application code should be lazy loaded. This is done by developers inserting dynamic imports into their codebase like so:
async function doSomething() {
  const chunk = await import('./my-chunk');
  console.log(chunk.someSymbol);
}
The developer needs to:
- Decide where in the code a good place for a lazy loading boundary would be.
- Lazy load in a way that is compatible with the existing application workflow. (Lazy loading is inherently asynchronous, but the ideal place to perform it may be a synchronous function, limiting where the lazy loading code can be placed.)
- Assign a chunk name (./my-chunk above), which influences what the bundler can name its chunks and how it can put chunks together into an application (see the sketch after this list).
- Determine what will go into the chunk (e.g. should symbolA and symbolB go into the same chunk, or should they go to separate chunks?).
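For instance, with webpack the chunk name is commonly assigned through a "magic comment" inside the dynamic import itself. In the sketch below, the module path and function names are illustrative; only the webpackChunkName comment is real webpack syntax:

// Hypothetical example: both the lazy-load boundary and the chunk name
// are hard-coded into the application source.
async function openSettings() {
  // webpack reads the webpackChunkName magic comment and uses it to
  // name the emitted chunk; the boundary itself is fixed by where this
  // import() call lives in the code.
  const settings = await import(
    /* webpackChunkName: "settings" */ './settings-page'
  );
  settings.show();
}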
The issue with the above is that when a developer is writing source code, they have no idea if the location they chose is a good place to have a lazy loaded boundary, or if the chunks have the right symbols. This information is not available until the application is deployed and the results of real-world usage are observed. For example, maybe the settings page is rarely visited and so it should be pulled out of the main bundle. Alternatively, maybe the notification section is lazy loaded but it is the most frequented page by users, so the lazy loading is only making the experience worse.
To make matters worse, once a developer makes these choices there is very little the bundler can do to compensate for them. The bundler pretty much MUST do what the developer asked for. To give the bundler more freedom, we need to look at the problem in a fresh new way.
My point is that when we write code we have no idea what the final bundles should look like and, therefore, don't have sufficient information to decide where to put the dynamic imports. On the other hand, by the time we collect sufficient data on what the ideal chunks should look like, the source code is already written. Inserting dynamic imports retroactively may be a huge undertaking. (Or, alternatively, we may have over-lazy-loaded and broken the app into far too many small pieces.)
What we want is the ability to decide what the ideal number of chunks should be and move the code between those chunks based on how real users use our application. We also want to do that without having to go back and refactor our source code. Chunk layout should be configuration information we pass into the bundler, rather than code we embed into our codebase.
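To make that concrete, here is a minimal sketch of what chunk layout as configuration could look like using Rollup's manualChunks option. The module paths and chunk names are illustrative assumptions, not taken from a real project:

// rollup.config.js (sketch): the chunk layout lives in configuration,
// not in the application source, so it can be revised as real-world
// usage data comes in.
export default {
  input: 'src/main.tsx',
  output: {
    dir: 'dist',
    format: 'es',
    manualChunks(id) {
      // Illustrative grouping: rarely visited pages get their own chunk.
      if (id.includes('/settings/')) return 'settings';
      if (id.includes('/notifications/')) return 'notifications';
      return undefined; // let Rollup decide everything else
    },
  },
};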
To add even more complexity, all of the current generation of frameworks have synchronous rendering pipelines. This makes it very difficult to insert asynchronous dynamic imports into the application.
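As a rough sketch of the problem (the framework API here is made up), a synchronous render function cannot await a dynamic import; the usual workaround is to render a placeholder and schedule a re-render once the chunk arrives:

// Hypothetical synchronous render pipeline: render functions must return
// JSX immediately, so they cannot await import().
let LazyWidget: ((props: {}) => JSX.Element) | null = null;

function renderWidget(scheduleRerender: () => void): JSX.Element {
  if (LazyWidget === null) {
    // Kick off the load, but we cannot wait for it here.
    import('./widget').then((mod) => {
      LazyWidget = mod.Widget; // hypothetical export name
      scheduleRerender();      // ask the framework to render again
    });
    return <div>Loading...</div>; // placeholder until the chunk arrives
  }
  return <LazyWidget />;
}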
If we want an optimal lazy loading strategy we need to solve the problems above.
Enter Qwik
Components are the basic building blocks of Qwik applications. Qwik asks you to break up a component into three parts:
- view: Contains the JSX code which renders the visual portion of the component.
- state factory: Contains code that creates a new component state.
- event handlers: Contains code used for component behavior/user interactions.
Why break up components into three parts?
Most frameworks keep view, state, and handler code together. Here is an example of how a pseudo framework might achieve this:
export function Counter(props: { step?: number }) {
  const [count, setCount] = useState(50);
  const step = props.step || 1;
  return (
    <div>
      <button onclick={() => setCount(count - step)}>-</button>
      <span>{count}</span>
      <button onclick={() => setCount(count + step)}>+</button>
    </div>
  );
}
Note that the component's view, state, and handler are all inlined together. The implication is that all of these parts (view, state, and handler) have to be downloaded, parsed, and executed together. This severely limits our lazy loading capability.
The example above might be trivial, but imagine a more complex version which requires many KB worth of code to be downloaded, parsed, and executed together. In such a case, requiring the view, state, and handler to be eagerly loaded together might be a problem. Let's look at some common usage patterns to get a better idea as to why this is an issue:
User interacts with a component by clicking on it:
- Some of the handlers are needed: only the specific handler which is triggered needs to be downloaded. All other handlers are not needed.
- The view is not needed: the view may not be needed because the handler may not cause a re-render, or may cause a re-render of a different component.
- The state factory is not needed: the component is being rehydrated, so no state initialization code is needed.
Component state is mutated:
- Handlers are not needed: no handlers need to execute.
- The view is needed: the view is needed because the component needs to be re-rendered.
- The state factory is not needed: the component is being rehydrated, so no state initialization code is needed.
New component is created by the parent:
- Handlers are not needed: no handlers need to execute.
- The view is needed: the view is needed because the component needs to be rendered.
- The state factory is needed: the component is being created, so state initialization code is needed.
What the above demonstrates is that in each use-case only part of the view, state, and handler information is required. The problem is that we have three distinct pieces of information which are all inlined together, yet we only need them at different times in the component's lifecycle. To achieve optimal performance, we need a way to download and execute the component in parts, based on what the component needs to do. The above code, as written, is permanently bound together.
Breaking up is easy to do
Qwik solves this by only downloading and executing the code that is needed for the task at hand. Keep in mind that while the example above is simple, the complexity of the code is significantly larger in real-world scenarios. Furthermore, more complex code oftentimes contains more imports (which in turn have imports of their own), which adds even more code to the component.
It is not possible to "tool" our way out of this. There is no tool that could statically analyze the code and separate these pieces into parts that can then be lazy loaded as needed. The developer must break up the component into the corresponding parts to allow fine-grained lazy loading.
Qwik provides the qrlView, qrlState, and qrlHandler marker functions for this purpose.
File: my-counter.tsx
import {
  QComponent,
  qComponent,
  qrlView,
  qrlHandler,
  qrlState
} from '@builder.io/qwik';

// Declare the component type, defining prop and state shape.
export type Counter = QComponent<{ step?: number },
                                 { count: number }>;

// Declare the component's state factory. This will be used
// when a new component is being created to initialize the state.
// (It will not be used on rehydration.)
export const CounterState = qrlState<Counter>(() => {
  return { count: 0 };
});

// Define the component's view used for rendering the component.
export const CounterView = qrlView<Counter>((props, state) => {
  return (
    <div>
      <button on:click={Counter_update.with({ direction: -1 })}>
        -
      </button>
      <span>{state.count}</span>
      <button on:click={Counter_update.with({ direction: 1 })}>
        +
      </button>
    </div>
  );
});

// Component view may need handlers describing behavior.
export const Counter_update
  = qrlHandler<Counter, { direction: number }>(
    (props, state, params) => {
      state.count += params.direction * (props.step || 1);
    }
  );

// Finally, tie it all together into a component.
export const Counter = qComponent<Counter>({
  state: CounterState,
  view: CounterView,
});
Compared to other frameworks, the above is wordier. However, the explicit breakup of components into their parts comes at a cost, and in exchange gives us the benefit of fine-grained lazy loading.
- Keep in mind that this is a relatively fixed DevExp overhead per component. As the component complexity increases, the added overhead becomes less of an issue.
- The benefit of this is that tooling now has the freedom to package up the component in multiple chunks which can be lazy loaded as needed.
What happens behind the scenes
qrlState, qrlHandler, and qrlView are all markers for the Qwik Optimizer, which tell the tooling that it needs to transform any reference to them into a QRL. The resulting files can be seen here:
File: my-counter.js
import { qComponent, qrlView, qrlHandler, qrlState } from '@builder.io/qwik';

export const CounterState = qrlState(() => ({
  count: 0,
}));

export const CounterView = qrlView((props) => {
  const state = getState(props);
  return (
    <div>
      <button on:click="/chunk-pqr#Counter_update?direction=-1">
        {/*        ^^^^^^^^^^^^^^^^^ LOOK ^^^^^^^^^^^^^^^^ */}
        -
      </button>
      <span>{state.count}</span>
      <button on:click="/chunk-pqr#Counter_update?direction=1">
        {/*        ^^^^^^^^^^^^^^^^^ LOOK ^^^^^^^^^^^^^^^^ */}
        +
      </button>
    </div>
  );
});

export const Counter_update = qrlHandler(
  (props, state, params) => {
    state.count += params.direction * (props.step || 1);
  }
);

export const Counter = qComponent({
  state: '/chunk-abc#CounterState', // <<=== LOOK
  view: '/chunk-cde#CounterView',   // <<=== LOOK
});
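The on:click attributes and the state/view properties above now hold QRLs rather than direct references. A QRL encodes which chunk to fetch, which exported symbol to use, and optional parameters. The helper below is hypothetical (it is not part of Qwik's API) and only illustrates how such a string breaks down:

// Hypothetical helper, for illustration only: how a QRL string such as
// '/chunk-pqr#Counter_update?direction=-1' decomposes.
function parseQrl(qrl: string) {
  const [chunkAndSymbol, query] = qrl.split('?');
  const [chunk, symbol] = chunkAndSymbol.split('#');
  return {
    chunk,                                     // '/chunk-pqr'     -> file to lazy load
    symbol,                                    // 'Counter_update' -> export to invoke
    params: new URLSearchParams(query ?? ''),  // direction=-1     -> handler parameters
  };
}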
In addition to the source file transformation, the optimizer removed any static references between the view, state, and handlers. Qwik also generates entry point files for Rollup. These entry points match the QRLs above.
File: chunk-abc.js
export { CounterState } from './my-counter';
File: chunk-pqr.js
export { Counter_update } from './my-counter';
File: chunk-cde.js
export { CounterView } from './my-counter';
The important thing to note is that Qwik has great freedom on how many entry files should be generated, as well as which export goes into which entry file. This is because the developer never specified where the lazy loading boundaries are. Instead, the framework guided the developer to write code in a way that introduced many lazy loading boundaries in the codebase. This gives Qwik the power to generate optimal file distribution based on actual application usage. For small applications, Qwik can generate a single file. As the application size grows, more entry files can be generated. If a particular feature is rarely used, it can be placed in its own bundle.
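For example (the chunk name below is purely illustrative), if usage data showed that the state factory and the view are almost always loaded together, the tooling could emit a single entry file for both and point the corresponding QRLs at it:

File: chunk-counter.js (hypothetical alternative grouping)
export { CounterState, CounterView } from './my-counter';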
Once Rollup processes the entry files, the resulting files are as seen below:
File: chunk-abc.js
import { qrlState } from '@builder.io/qwik';
export const CounterState = qrlState(() => ({
  count: 0,
}));
File: chunk-pqr.js
import { qrlHandler } from '@builder.io/qwik';

export const Counter_update = qrlHandler(
  (props, state, params) => {
    state.count += params.direction * (props.step || 1);
  }
);
File: chunk-cde.js
import { qrlView } from '@builder.io/qwik';

export const CounterView = qrlView((props, state) => {
  return (
    <div>
      <button on:click="/chunk-pqr#Counter_update?direction=-1">
        -
      </button>
      <span>{state.count}</span>
      <button on:click="/chunk-pqr#Counter_update?direction=1">
        +
      </button>
    </div>
  );
});
Notice that Rollup flattened the contents of the source file into the entry files and removed any unneeded code, resulting in ideally sized bundles.
Constraints
In order for the tooling to be able to move qComponent, qrlState, and qrlHandler around, the usage of these methods is restricted. (Not every valid JS program is a valid Qwik program.) The constraint is that all of the marker functions must be top-level functions that are exported.
Examples of invalid and valid usage:
import { qrlState } from '@builder.io/qwik';
import { someFn } from './some-place';

function main() {
  const MyStateFactory = qrlState(() => ({})); // INVALID: not top-level
}

export const MyStateFactory = qrlState(() => someFn({ data: 123 })); // VALID: top-level and exported (imported helpers are OK)
Tooling has choices
It is possible (and all too common) to break up an application into too many small files, which negatively impacts download performance. For this reason, the tooling may choose to merge files together and over-bundle. This is desirable behavior. If your entire application is relatively small (less than 50KB) then breaking it up into hundreds of files would be counterproductive.
If your code structure is fine-grained, the tooling can always choose to create larger (and fewer) bundles. The opposite is not true. If your code structure is coarse, there is nothing the tooling can do to break it up. Qwik guides the developer to break up the application into the smallest possible chunks, and then rely on tooling to find the optimal bundle chunks. This way Qwik can provide optimal performance for applications of all sizes.
Do you find the above exciting? Then join our team and help us make the web fast!
- Try it on StackBlitz
- Star us on github.com/builderio/qwik
- Follow us on @QwikDev and @builderio
- Chat with us on Discord
- Join builder.io