Record Videos in the Browser Using the MediaRecorder API
By Ruslan Prytula, October 29, 2015, 10:54 PM

I’ve been waiting for the MediaRecorder API for several months, and now I am very excited to let you know that its first prototype is up and running and available in Google Chrome Canary. Before delving any deeper into the ways to use it, let’s first define what the MediaRecorder API actually is (quoting its specification):

This API attempts to make basic recording very simple, while still allowing for more complex use cases. In the simplest case, the application instantiates the MediaRecorder object, calls record() and then calls stop() or waits for the MediaStream to be ended. The contents of the recording will be made available in the platform’s default encoding via the data available event.

In a nutshell, the MediaStream instance you get by calling getUserMedia carries raw, unencoded media data. That’s fine if you want to play it in a video tag, but it’s not enough to create a video file, simply because nothing has encoded the raw data into a video format such as WebM.
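Note that you cannot pick an arbitrary output format: the browser can only encode to what it ships an encoder for. Current browsers let you probe this with MediaRecorder.isTypeSupported; the helper below guards against environments where MediaRecorder (or that method) is missing, since the early prototype may not expose it:

```javascript
// Probe which container/codec strings this browser's MediaRecorder can encode.
// Guarded so the snippet is safe where MediaRecorder is absent.
function supportedEncodings(types) {
  if (typeof MediaRecorder === 'undefined' || !MediaRecorder.isTypeSupported) {
    return [];
  }
  return types.filter(function(type) {
    return MediaRecorder.isTypeSupported(type);
  });
}

var candidates = ['video/webm', 'video/webm;codecs=vp8', 'video/webm;codecs=vp9'];
var supported = supportedEncodings(candidates);
```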

With the MediaRecorder API we can access encoded blob chunks, which means that once the recording is finished we can construct a real file out of them and then upload or download it. This was not possible to achieve before, at least not without some dirty hacks :) For example, without the MediaRecorder API, you would have to perform the following steps:

  • request getUserMedia and stream the data to a video tag;
  • capture images from the video tag onto a canvas;
  • encode the captured frames (WebP images) into a WebM file using a library like Whammy.js.

If you want to learn more about this method, check out this amazing article.
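For context, the core of that canvas-based workaround can be sketched like this (the helper names are mine, not from any particular library):

```javascript
// Pre-MediaRecorder workaround: grab frames from a playing <video> element
// by drawing them onto a canvas and exporting each one as a WebP image.
// A library like Whammy.js can then assemble the frames into a WebM file.
function captureFrame(video, canvas) {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/webp');
}

// Collect roughly ten frames per second while recording.
function startCapturing(video, canvas, frames) {
  return setInterval(function() {
    frames.push(captureFrame(video, canvas));
  }, 100);
}
```

Each toDataURL call is synchronous and expensive, which is one reason this approach never performed well.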


The entry point is the same as for streaming a webcam to an HTML video tag: we call getUserMedia and receive the video stream in a callback.

navigator.getUserMedia({
  video: true
}, onStreamGetSuccess, function(error) {
  console.error(error);
});

function onStreamGetSuccess(stream) {
  // our cool stuff goes here.
}

onStreamGetSuccess is where we create the MediaRecorder, and initiate the recording process. It looks like this:

var recorder;

function onStreamGetSuccess(stream) {
  recorder = new MediaRecorder(stream);
  // will be called each time we get data from stream.
  recorder.ondataavailable = onDataAvailable;
  recorder.start();
}

Finally, we need to collect our blob chunks and define a function that creates a file we can send to the server. Converting the recorded data to a file can be tricky, so we use two helpers here: bufferToDataUrl, which converts the collected blob chunks into a data URL, and dataUrlToFile, which transforms that URL into a File object. Going through a string is normally the more appropriate method because it also works for Chrome extensions, where you can’t normally transfer objects like Blob or File between contexts. If, for example, you want a background script to hand the recorded video to a web page (so that it can be uploaded) by firing an event, passing the object directly will not work. The only way to solve it is the following:

  • use bufferToDataUrl to convert a blob into a string;
  • use background-script to send a string;
  • receive a string on a web page;
  • use dataUrlToFile to convert a string into a file;
  • upload the resulting file to the server.
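The string round trip at the heart of the first and fourth steps can be sketched in isolation (these helper names are illustrative, not the article’s): bytes are base64-encoded into a data URL string, sent across the extension boundary, and decoded back into bytes on the other side.

```javascript
// Encode raw bytes as a base64 data URL string, which can be passed
// anywhere a plain string can (extension messaging, events, etc.).
function bytesToDataUrl(bytes, mimeType) {
  var binary = '';
  for (var i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return 'data:' + mimeType + ';base64,' + btoa(binary);
}

// Decode a base64 data URL string back into raw bytes.
function dataUrlToBytes(dataUrl) {
  var binary = atob(dataUrl.split(',')[1]);
  var data = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    data[i] = binary.charCodeAt(i);
  }
  return data;
}
```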

Code-wise, it looks like this:

var buffer = [];

function onDataAvailable(e) {
  if (e.data && e.data.size > 0) {
    buffer.push(e.data);
  }
}

function bufferToDataUrl(callback) {
  var blob = new Blob(buffer, {
    type: 'video/webm'
  });

  var reader = new FileReader();
  reader.onload = function() {
    callback(reader.result);
  };
  reader.readAsDataURL(blob);
}

// returns a file that we can send to the server.
function dataUrlToFile(dataUrl) {
  var binary = atob(dataUrl.split(',')[1]),
      data = [];

  for (var i = 0; i < binary.length; i++) {
    data.push(binary.charCodeAt(i));
  }

  return new File([new Uint8Array(data)], 'recorded-video.webm', {
    type: 'video/webm'
  });
}

// triggered by user.
function onStopButtonClick() {
  try {
    recorder.stop();
    recorder.stream.getTracks().forEach(function(track) {
      track.stop();
    });
  } catch (e) {}

  bufferToDataUrl(function(dataUrl) {
    var file = dataUrlToFile(dataUrl);
    // upload file to the server.
  });
}

You can see the full demo here (works only in Google Chrome 48+). Thanks for reading!