How To Implement Text-To-Speech Functionality For BlockNote In Next.js

It's Just Nifty - Sep 16 - Dev Community

I have figured out rocket science! I’m lying, of course. I have figured out how to implement text-to-speech functionality in Next.js for BlockNote. If you’re not aware of BlockNote, go ahead and read the article I wrote about stumbling across a solution for my WYSIWYG editor problems in Next.js, as well as this other one.

So, go ahead and read them, but I’m just gonna continue writing this article despite your lack of knowledge. For the rest of you who are aware of BlockNote, let’s dive into this article. Let’s go!

Unsplash Image by Pankaj Patel

Recap

So, we have learned how to store BlockNote data in Firebase by converting the blocks into HTML and back again (virtual thumbs up if you read that article). Anyway, if you were keeping up, our code should look like this right now:

'use client';
import React, { useState, useEffect, useRef, ChangeEvent, useCallback } from 'react';
import { useSearchParams } from 'next/navigation';
import { firestore } from '../../../../../firebase';
import { getDoc, doc, updateDoc } from 'firebase/firestore';
import "@blocknote/core/fonts/inter.css";
import { useCreateBlockNote } from "@blocknote/react";
import { BlockNoteView } from "@blocknote/mantine";
import "@blocknote/mantine/style.css";
import { Block } from "@blocknote/core";

function Document() {
 const params = useSearchParams();
 const docId = params.get("id");
 const [title, setTitle] = useState('');
 const [value, setValue] = useState('');
 const [blocks, setBlocks] = useState<Block[]>([]);

 useEffect(() => {
   const fetchDocument = async () => {
     if (!docId) return;

     const docRef = doc(firestore, `documents/${docId}`);
     try {
       const docSnap = await getDoc(docRef);
       if (docSnap.exists()) {
         const data = docSnap.data();
         setTitle(data.title || '');
         setValue(data.content || '');
       } else {
         console.log('Document does not exist');
       }
     } catch (error) {
       console.error('Error fetching document: ', error);
     }
   };

   fetchDocument();
 }, [docId]);

 const editor = useCreateBlockNote();

 useEffect(() => {
   async function loadInitialHTML() {
     const blocks = await editor.tryParseHTMLToBlocks(value);
     editor.replaceBlocks(editor.document, blocks);
   }
   loadInitialHTML();
 }, [editor, value]);

 const saveDocument = async () => {
   if (!docId) return;

   const content = await editor.blocksToHTMLLossy(blocks);

   try {
     await updateDoc(doc(firestore, `documents/${docId}`), {
       content: content,
     });

     console.log('Document saved successfully');
   } catch (error) {
     console.error('Error saving document: ', error);
   }
 };

 return (
   <div>
     <h1>{title}</h1>
     <BlockNoteView editor={editor} onChange={() => { setBlocks(editor.document); }} />
     <button onClick={saveDocument}>Save Document</button>
   </div>
 );
}

export default Document;

Now that we have refreshed our memories, let’s add stuff!

How Are We Going To Do This???

Great question! I know because I came up with it. We are going to implement text-to-speech functionality by using the Web Speech API.

The Web Speech API enables you to incorporate voice data into web apps. In this case, Next.js applications.

We will use the Web Speech API’s SpeechSynthesis interface, together with SpeechSynthesisUtterance, to add this kind of functionality.
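Before wiring it into BlockNote, here is a minimal, standalone sketch of how SpeechSynthesis works. The `speakText` helper below is my own wrapper (not part of the API); it returns false when the API is unavailable, which matters in Next.js because components can also run on the server, where there is no browser speech engine:

```typescript
// Hedged sketch: a small wrapper around the Web Speech API that degrades
// gracefully when the API is missing (e.g. during server-side rendering).
function speakText(text: string): boolean {
  const g = globalThis as any;
  if (!g.speechSynthesis || !g.SpeechSynthesisUtterance) {
    return false; // No Web Speech API in this environment (e.g. Node.js)
  }
  const utterance = new g.SpeechSynthesisUtterance(text);
  g.speechSynthesis.speak(utterance);
  return true;
}
```

Calling `speakText("Hello")` in the browser queues the text for speech; on the server it simply returns false instead of throwing.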

Implementation

In our code, we only need to add one function and a button. That’s it. That’s how easy it is. Add this function to your code:

  function speak() {
    const text = blocks
      .map(block => block?.content)
      .filter(content => content !== undefined)
      .flatMap(content => {
        if (Array.isArray(content)) {
          return content
            .filter(contentItem => contentItem.type === 'text')
            .map(contentItem => contentItem.text);
        } else {
          return [];
        }
      })
      .join(' ');
    const utterance = new SpeechSynthesisUtterance(text);
    // getVoices() can return an empty array until the browser fires its
    // "voiceschanged" event, so only pick a voice when one is available;
    // otherwise the browser's default voice is used.
    const voicesArray = speechSynthesis.getVoices();
    if (voicesArray.length > 2) {
      utterance.voice = voicesArray[2];
    }
    speechSynthesis.speak(utterance);
  }

This function gathers every inline content item of type text from the document’s blocks, joins the results into one string, and speaks what’s inside the editor. Simple. Now display a Speak button:

<button onClick={speak}>Speak</button>
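If you want to sanity-check the text-gathering step without opening a browser, it can be pulled out into a pure helper. The interfaces below are simplified stand-ins I made up for this sketch, not BlockNote’s real types:

```typescript
// Simplified, assumed shapes for a block and its inline content
// (BlockNote's actual types are richer than this).
interface InlineItem { type: string; text?: string; }
interface SimpleBlock { content?: InlineItem[]; }

// Same logic as the speak() function: keep only inline items of type
// "text" and join their text with spaces.
function blocksToPlainText(blocks: SimpleBlock[]): string {
  return blocks
    .map(block => block?.content)
    .filter(content => content !== undefined)
    .flatMap(content =>
      Array.isArray(content)
        ? content
            .filter(item => item.type === 'text')
            .map(item => item.text ?? '')
        : []
    )
    .join(' ');
}
```

Because it is pure, you can unit-test it with a handful of fake blocks and then feed its output straight into a SpeechSynthesisUtterance.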

Diving Deeper

What if you want to customize the text-to-speech functionality by changing the pitch, rate, and volume? Well, let’s go over how to do that, after this ad! I’m kidding: this isn’t YouTube.

Changing The Pitch

To change the pitch, add this line:

utterance.pitch = 1.5; // Ranges from 0 to 2 (1 is the default); change it to fit your preferences

Changing The Rate

To change the rate, add this line:

utterance.rate = 5; // Ranges from 0.1 to 10 (1 is the default); change it to fit your preferences

Changing The Volume

Add this code to change the volume:

utterance.volume = 0.4; // Ranges from 0 to 1 (1 is the default); change it to fit your preferences
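Browsers clamp or ignore out-of-range values, so it can be handy to normalize settings yourself before applying them. The helper below is my own addition (not part of the Web Speech API), using the valid ranges from the spec:

```typescript
// Valid ranges per the Web Speech API: pitch 0–2, rate 0.1–10, volume 0–1.
interface VoiceSettings { pitch?: number; rate?: number; volume?: number; }

const clamp = (value: number, min: number, max: number) =>
  Math.min(max, Math.max(min, value));

// Hypothetical helper: fills in defaults and clamps each setting
// into its valid range before it touches the utterance.
function normalizeVoiceSettings(settings: VoiceSettings): Required<VoiceSettings> {
  return {
    pitch: clamp(settings.pitch ?? 1, 0, 2),
    rate: clamp(settings.rate ?? 1, 0.1, 10),
    volume: clamp(settings.volume ?? 1, 0, 1),
  };
}
```

In the browser you could then apply it with something like `Object.assign(utterance, normalizeVoiceSettings({ pitch: 4 }))`, which would quietly clamp that out-of-range pitch down to 2.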

Well, that wraps up this short article on implementing text-to-speech functionality in your code. Follow me on Medium and subscribe to my newsletter.

Happy Coding!
