Have you noticed copilots everywhere these days?
New copilots are being created every day, giving each of us more power over the way we work. The noise that surrounds any exciting new tool, especially generative AI software like ChatGPT, also creates a lot of churn in the space: many versions of a newly available feature set get built, only to be deprecated soon after. But what if there was a world where you could just create your own personal copilot, run it entirely on your own computer (even with no internet), have it reference your personal context, and then add your own specific features to it?
Pieces OS Client gives you the ability to do all of this and more, and you can get up and running in no time at all. To make it even easier, here is a prebuilt Vanilla TypeScript project that includes all of the content covered today and can be used to start your own project.
By the end of this article, you will be able to have your first conversation with your personal copilot. We will explore Downloading and Using LLMs Locally in part two of this series, followed by a final article about Setting Local Conversational Context.
When building a copilot, there are a few things we recommend. You can download and view this QGPT Stream Controller to see a full example of using a WebSocket for copilot conversations; it can be used to model your own stream controllers and expanded further. There are specific sections we will cover below, but the full controller file is linked above.
A minimal number of dependencies have been added to work with this repo (a quick sketch of how they show up in code follows the list):
- "@pieces-app/pieces-os-client" - The Pieces OS Client SDK itself, which provides the typed models and request helpers used throughout this project.
- "@types/node" - Used to load in all type definitions when using TypeScript in Node.js.
- "esbuild" - Used to bundle and build the TypeScript project.
- "ws" - A WebSocket client implementation for Node.js environments.
Let's get into the SDK and start looking at how it all works in this Vanilla project.
Getting Started
It’s super easy to start a conversation with your personal copilot. If you’re looking at the CopilotStreamController.ts file, you'll see the entry point for conversational messages named askQGPT().
Connect to the qgpt/stream WebSocket
When using other copilots, you may have noticed that the response streams in as the copilot generates it, which makes it feel like someone is typing back to you. This is a nice-to-have feature that allows each word to be streamed over as soon as it is available and the UI to be updated accordingly.
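As a rough illustration of the idea, here is a simplified sketch (not the controller code itself; renderMessage is a hypothetical stand-in for whatever updates your UI):

// hypothetical UI hook: replace with however your app renders text
function renderMessage(text: string) {
  console.log(text);
}

// each streamed fragment is appended to the running total and re-rendered,
// so the visible message grows word by word
let totalMessage = '';
function onFragment(fragment: string) {
  totalMessage += fragment;
  renderMessage(totalMessage);
}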
Utilizing WebSockets to listen quickly to changes can be done using the connect() method, which you will see throughout this article. Using connect() allows us to ensure that our WebSocket is both running and reachable. View that snippet here in its entirety; let’s focus on a few parts of it:
Breaking Out connect()
The first section to note is right inside of the connect() method, where the WebSocket is instantiated and set to the copilot stream endpoint:
this.ws = new WebSocket(`ws://localhost:1000/qgpt/stream`);
Before you receive any message back from the WebSocket, you must first send user input via .askQGPT. The msg that comes back then needs to be parsed and strongly typed with Pieces.QGPTStreamOutputFromJSON(json) in order to access properties on the JSON that was returned. From there you can access the answer itself (this is why we typed the json variable) via result.question.answers.iterable[0]; the property being accessed there could be semantically called the "most recent response." Here is that logic:
this.ws.onmessage = (msg) => {
  const json = JSON.parse(msg.data);
  const result = Pieces.QGPTStreamOutputFromJSON(json);
  const answer: Pieces.QGPTQuestionAnswer | undefined = result.question?.answers.iterable[0];

  // the message is complete
  if (result.status === 'COMPLETED') {
    // in the unlikely event there is no message, show an error message
    if (!totalMessage) {
      this.setMessage?.("ERROR: received no messages from the copilot websockets");
    } else {
      // render the final total message
      this.setMessage?.(totalMessage);
    }
    totalMessage = '';
    return;
  } else if (result.status === 'FAILED' || result.status === 'UNKNOWN') {
    this.setMessage?.('Message failed');
    totalMessage = '';
    return;
  }

  // add to the total message
  if (answer?.text) {
    totalMessage += answer.text;
  }
  // render the new total message
  this.setMessage?.(totalMessage);
};

// in the case that the websocket is closed or errored, we do some cleanup here
const refreshSockets = (error?: any) => {
  if (error) console.error(error);
  totalMessage = '';
  this.setMessage?.('Websocket closed');
  this.ws = null;
};
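Putting those parts together, a connect() method along these lines ties the socket, the onmessage handler, and the cleanup logic into one place. Treat this as a hedged sketch rather than the exact controller code; in particular, the connectionPromise wiring (which handleMessages awaits later on) is an assumption about how the real implementation signals that the socket is open:

private connect() {
  this.ws = new WebSocket(`ws://localhost:1000/qgpt/stream`);

  let totalMessage = '';

  // assumption: connectionPromise resolves once the socket is open,
  // so handleMessages can await it before sending anything
  this.connectionPromise = new Promise<void>((resolve) => {
    this.ws!.onopen = () => resolve();
  });

  this.ws.onmessage = (msg) => {
    // ...the handler shown above, which accumulates totalMessage
    // and calls this.setMessage as each fragment arrives...
  };

  const refreshSockets = (error?: any) => {
    // ...the cleanup shown above...
  };

  // run the cleanup whenever the socket closes or errors out
  this.ws.onclose = () => refreshSockets();
  this.ws.onerror = (err) => refreshSockets(err);
}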
Send Your First Prompt with askQGPT
Take a look at this query, as it is important to track the query parameter. Its usage is not to be ignored, since it is the message that you typed into the input box in the Vanilla example. setMessage is used to store the message that comes back from the copilot and place it in the response as it returns.

This is an asynchronous function, as the response comes back over the WebSocket rather than as a return value. You will also note the check here to see whether the WebSocket has been connected to before we build the input object. Take a look at this bare example of the Pieces.QGPTStreamInput:
const input: Pieces.QGPTStreamInput = {
  question: {
    query,
    relevant: { iterable: [] }
  },
};
Note that if you are planning to use relevant and pass in a list of related snippets, you can add your list there; but if you do not plan on adding any relevant information, you MUST still pass an empty array as iterable: []. Then handleMessages takes the new input object and passes it to the next step.
Before continuing, check out the entire code snippet for the askQGPT method and spot the part we just talked about for starting your conversation with the copilot:
/**
 * This is the entry point for all chat messages into this socket.
 * @param param0 The inputted user query, and the function to update the message
 */
public async askQGPT({
  query,
  setMessage
}: {
  query: string;
  setMessage: (message: string) => void;
}): Promise<void> {
  // need to connect the socket if it's not established.
  if (!this.ws) {
    this.connect();
  }

  const input: Pieces.QGPTStreamInput = {
    question: {
      query,
      relevant: { iterable: [] }
    },
  };

  this.handleMessages({ input, setMessage });
}
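As a quick sanity check, you could call this entry point directly and log the streamed response before wiring up any UI. The query string below is just a made-up example:

// logs the growing response each time a new fragment streams in
CopilotStreamController.getInstance().askQGPT({
  query: 'What is a WebSocket?',
  setMessage: (message) => console.log(message),
});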
Update the UI with handleMessages and get your response
Now that we have sent our input (which is typed as Pieces.QGPTStreamInput), we can move on to the other method here:
  /**
   * Sends the given input over the copilot websocket and registers the callback
   * used to update the UI as the response streams back.
   * @param param0 the input into the websocket, and the function to update the ui.
   */
  private async handleMessages({
    input,
    setMessage,
  }: {
    input: Pieces.QGPTStreamInput;
    setMessage: (message: string) => void;
  }) {
    if (!this.ws) this.connect();
    await this.connectionPromise;

    this.setMessage = setMessage;

    try {
      this.ws!.send(JSON.stringify(input));
    } catch (err) {
      console.error('err', err);
      setMessage?.(JSON.stringify(err, undefined, 2));
    }
  }

  public static getInstance() {
    return (CopilotStreamController.instance ??= new CopilotStreamController());
  }
}
Connecting to Your UI
Now that the logic needed to send and receive a message is in CopilotStreamController.ts, you just need to connect it to a button (or another UI element). In theory, you could use something like this sendMessage() function to send the message itself:
// send a message via askQGPT.
// input is assumed to be your chat input element, and setMessage the function
// that renders the copilot's response in your UI (both defined elsewhere in index.ts)
async function sendMessage() {
  const userInput = input.value;
  CopilotStreamController.getInstance().askQGPT({
    query: userInput,
    setMessage
  });
}
And add it to any arbitrary button; here, the element has an ID of send-chat-btn inside of main() in your index.ts:
async function main() {
  CopilotStreamController.getInstance();

  const sendChatBtn = document.getElementById("send-chat-btn");
  if (!sendChatBtn) throw new Error('expected id send-chat-btn');

  // wire the button up to the sendMessage function above
  sendChatBtn.onclick = sendMessage;
}
// and don't forget your window.onload:
window.onload = main;
We call CopilotStreamController.getInstance() here to create the controller instance; connect() runs as part of getInstance() and establishes the copilot WebSocket connection. Be sure that the value from the input is passed over when the button is pressed as well, so it can be attached as the query on CopilotStreamController.askQGPT().
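For completeness, here is one way the input element and setMessage callback used by sendMessage() above could be defined. The element IDs chat-input and chat-response are placeholders invented for this sketch, not IDs from the example project, so match them to whatever is in your own index.html:

// hypothetical element IDs; swap in the ones from your own markup
const input = document.getElementById('chat-input') as HTMLInputElement;
const responseEl = document.getElementById('chat-response') as HTMLDivElement;

// called repeatedly while the copilot streams its answer back
function setMessage(message: string) {
  responseEl.innerText = message;
}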
Wrapping Up
Now you have all of the tools you need to ask questions of your own personal copilot, you have seen how to quickly communicate back and forth with the copilot itself, and you have some light configuration for your UI if you need it. There are more resources available, an entire Open Source Community, and a number of repos/SDKs with other examples and projects underway.
Here are some resources:
- Vanilla Copilot Example - full usable example of the copilot
- Open Source Resources - other open source projects and resources currently underway
The next article in this series will cover downloading the models and using them locally, and using your code as copilot context. Happy Coding!