How to Build a Speech to Text Dictation App in Flutter

In this article, we will explore how to create a Speech to Text dictation app in Flutter. We will cover the prerequisites, creating a Flutter project, installing dependencies, building the UI, and integrating a voice recognition library to power dictation.


Prerequisites

Before we begin, make sure you have the following installed:

- The Flutter SDK
- An IDE with Flutter support, such as Android Studio or VS Code
- A physical device or emulator with a working microphone

Additionally, you should have a basic understanding of the Flutter framework and the Dart programming language.
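
To confirm your setup, you can run flutter doctor in a terminal; it reports the status of your Flutter installation and any connected devices:

flutter doctor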

Creating a Flutter Project

First, we will set up a new Flutter project. Open your preferred IDE, and create a new Flutter project. When prompted, choose a name for your project and a location to save it.
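
If you prefer the command line, you can also create the project with the flutter create tool (the project name speech_to_text_app here is just an example):

flutter create speech_to_text_app
cd speech_to_text_app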

Once the project is created, open the pubspec.yaml file and add the voice recognition dependency. The code in this article uses the speech_recognition package, which provides the SpeechRecognition class we will use below:

dependencies:
  flutter:
    sdk: flutter

  speech_recognition: ^0.3.0

Save the file and run flutter pub get to install the dependencies.

Installing Dependencies

Next, we need to fetch the speech_recognition package we declared in pubspec.yaml. If you have not already done so, open a terminal window in the project directory and run the following command:

flutter pub get

This will install the package in your project.
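
Voice recognition also requires microphone access at the platform level. As a rough sketch (the exact entries can vary by Flutter and plugin version), add the RECORD_AUDIO permission to android/app/src/main/AndroidManifest.xml and the usage description keys to ios/Runner/Info.plist:

<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />

<!-- ios/Runner/Info.plist -->
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for speech to text dictation.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app transcribes your speech into text.</string>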

Building the UI

Now, we will create the UI for our app. Open the main.dart file and replace its contents with the following code:

import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Speech to Text App',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Speech to Text App'),
        ),
        body: Center(
          child: Text('Speech to Text App'),
        ),
      ),
    );
  }
}

This will create the basic UI for our app.

Integrating a Voice Recognition Library

Next, we need to integrate the speech_recognition package into our project. To do this, open the main.dart file and add the following import at the top:

import 'package:speech_recognition/speech_recognition.dart';

This makes the SpeechRecognition class available in our project.

Building Voice Recognition in Flutter

Finally, we need to add the code that performs voice recognition. Update main.dart so that MyApp's body includes a SpeechRecognitionWidget, and add that widget and its state class as shown below:

import 'package:speech_recognition/speech_recognition.dart';

// ...

class MyApp extends StatelessWidget {
  // ...

  @override
  Widget build(BuildContext context) {
    // ...

    return MaterialApp(
      title: 'Speech to Text App',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Speech to Text App'),
        ),
        body: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              Text('Speech to Text App'),
              SpeechRecognitionWidget(),
            ],
          ),
        ),
      ),
    );
  }
}

class SpeechRecognitionWidget extends StatefulWidget {
  @override
  _SpeechRecognitionWidgetState createState() => _SpeechRecognitionWidgetState();
}

class _SpeechRecognitionWidgetState extends State<SpeechRecognitionWidget> {
  SpeechRecognition _speech;

  bool _speechRecognitionAvailable = false;
  bool _isListening = false;

  String _currentLocale = 'en_US';
  String transcription = '';

  @override
  void initState() {
    super.initState();
    activateSpeechRecognizer();
  }

  // Create the recognizer, register handlers for availability, locale,
  // start, result, and completion events, then activate it.
  void activateSpeechRecognizer() {
    _speech = new SpeechRecognition();
    _speech.setAvailabilityHandler(onSpeechAvailability);
    _speech.setCurrentLocaleHandler(onCurrentLocale);
    _speech.setRecognitionStartedHandler(onRecognitionStarted);
    _speech.setRecognitionResultHandler(onRecognitionResult);
    _speech.setRecognitionCompleteHandler(onRecognitionComplete);
    _speech
        .activate()
        .then((res) => setState(() => _speechRecognitionAvailable = res));
  }

  @override
  Widget build(BuildContext context) {
    return Container(
      width: MediaQuery.of(context).size.width * 0.7,
      child: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        crossAxisAlignment: CrossAxisAlignment.center,
        children: <Widget>[
          Row(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              FloatingActionButton(
                child: Icon(Icons.cancel),
                mini: true,
                backgroundColor: Colors.deepOrange,
                onPressed: _speechRecognitionAvailable && _isListening
                    ? () => _speech.cancel()
                    : null,
              ),
              FloatingActionButton(
                child: Icon(Icons.mic),
                onPressed: _speechRecognitionAvailable && !_isListening
                    ? () => start()
                    : null,
                backgroundColor: Colors.pink,
              ),
              FloatingActionButton(
                child: Icon(Icons.stop),
                mini: true,
                backgroundColor: Colors.deepPurple,
                onPressed: _speechRecognitionAvailable && _isListening
                    ? () => stop()
                    : null,
              ),
            ],
          ),
          Container(
            width: MediaQuery.of(context).size.width * 0.7,
            decoration: BoxDecoration(
              color: Colors.cyanAccent[100],
              borderRadius: BorderRadius.circular(6.0),
            ),
            padding: EdgeInsets.symmetric(
              vertical: 8.0,
              horizontal: 12.0,
            ),
            child: Text(
              transcription,
              style: TextStyle(fontSize: 24.0),
            ),
          )
        ],
      ),
    );
  }

  void start() => _speech
      .listen(locale: _currentLocale)
      .then((result) => print('_SpeechRecognitionWidgetState.start => result $result'));

  void stop() => _speech.stop().then((result) {
        setState(() => _isListening = result);
      });

  void onSpeechAvailability(bool result) =>
      setState(() => _speechRecognitionAvailable = result);

  void onCurrentLocale(String locale) {
    print('_SpeechRecognitionWidgetState.onCurrentLocale => $locale');
    setState(() => _currentLocale = locale);
  }

  void onRecognitionStarted() => setState(() => _isListening = true);

  void onRecognitionResult(String text) {
    setState(() => transcription = text);
  }

  void onRecognitionComplete() => setState(() => _isListening = false);
}

This enables voice recognition in our app: the mic button starts listening, the stop button ends the session, the cancel button discards it, and the recognized text appears in the container below the buttons.
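
To try it out, connect a device or start an emulator with microphone access and run the app:

flutter run

Tap the mic button and start speaking; the transcription updates as recognition results arrive.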

Conclusion

In this article, we explored how to create a Speech to Text dictation app in Flutter. We covered the prerequisites, creating a Flutter project, installing dependencies, building the UI, and integrating a voice recognition library to handle dictation. With the code provided here, you should be able to build your own Speech to Text dictation app in Flutter.