Mute audio and video
In the previous lesson we learnt how to join a meeting. In this lesson we will add a toolbar with three buttons:
- Mute audio button: Mutes the microphone so that the remote participants can't hear us.
- Mute video button: Turns off the camera. We will also hide the SelfView and, on the remote side, display an Icon instead of the Video.
- Disconnect button: Disconnects us from the meeting and redirects us to the CreateMeeting component.
You can download the starter code from Step.04-Exercise-Mute-audio-and-video.
Participant component
We will start by making some small modifications to the file ./client/src/Meeting/RemoteParticipants/Participant.tsx. We will hide the video when the remote participant mutes their camera.
First we import the useState hook:
import {useState} from 'react';
Next, we define a state to indicate when the current participant's video is muted:
const [videoMuted, setVideoMuted] = useState(false);
Now we get the VideoTrack and bind functions to the onmute and onunmute events. These functions will update the videoMuted state:
const videoTracks = props.videoStream?.getVideoTracks();
if (videoTracks != null && videoTracks.length > 0) {
  videoTracks[0].onmute = () => {
    setVideoMuted(true);
  };
  videoTracks[0].onunmute = () => {
    setVideoMuted(false);
  };
}
Finally, we add a new condition so that the Video element is only rendered when the video isn't muted:
{(props.videoStream != null && !videoMuted) && (
And we do the opposite for the NoStreamContainer:
{(props.videoStream == null || videoMuted) && (
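The two conditions above are complementary: for every combination of stream presence and mute state, exactly one of the two blocks renders. A small framework-free sketch makes this explicit (the predicate names are ours for illustration, not part of the component; hasStream stands in for props.videoStream != null):

```typescript
// Hypothetical predicates mirroring the two render conditions above.
const showVideo = (hasStream: boolean, videoMuted: boolean): boolean =>
  hasStream && !videoMuted;

const showNoStream = (hasStream: boolean, videoMuted: boolean): boolean =>
  !hasStream || videoMuted;

console.log(showVideo(true, false));   // true  → render the Video
console.log(showNoStream(true, true)); // true  → render the NoStreamContainer
```

By De Morgan's law the second predicate is exactly the negation of the first, which is why the component always renders one of the two branches.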
Toolbar component
Next we will define a new component in the file ./client/src/Meeting/Toolbar/Toolbar.tsx. This component will contain all our buttons and will be displayed at the bottom of the screen.
We will start by importing all the dependencies that we will use:
import {useState} from 'react';
import {useNavigate} from 'react-router-dom';
import {type Vpaas} from '@pexip/mee-sdk';
import {Button, Icon, IconTypes, Tooltip} from '@pexip/components';
The first thing is to define the properties:
interface ToolbarProps {
vpaas: Vpaas;
localStream: MediaStream | undefined;
onLocalStreamChange: (stream: MediaStream | undefined) => void;
}
Next we assign the properties to the functional component:
export const Toolbar = (props: ToolbarProps): JSX.Element => {
In the following steps we will edit the Toolbar function.
Define states and variables
Now, inside the component, we define the mute state for the buttons and other constants:
const navigate = useNavigate();
const [audioMuted, setAudioMuted] = useState(false);
const [videoMuted, setVideoMuted] = useState(false);
We will use navigate to redirect to the CreateMeeting component when we disconnect from the meeting.
Handlers for buttons
In this section we will define the functions that are triggered when the user clicks the toolbar buttons.
Mute audio
The first handler that we will define is the one to mute the audio. The behaviour will depend on the previous state:

Mute audio: If we want to mute the audio, we only have one task: stop the audio tracks. After this, the application will release the microphone and you will see this reflected in the browser tab.

Unmute audio: In case we want to unmute, we need to do a couple more things:
- Request a new MediaStream using getUserMedia. Take into account that we only request the stream for audio (video: false).
- If the video isn't muted, take the video tracks from the previous stream and attach them to the new stream.
- Send the updated media stream to VPaaS.
- Finally, notify the application that the localStream changed through onLocalStreamChange.

The last step is to update the current value of audioMuted.
const handlePressMuteAudio = async (): Promise<void> => {
  if (!audioMuted) {
    props.localStream?.getAudioTracks().forEach(track => {
      track.stop();
    });
  } else {
    const newStream = await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: false,
    });
    if (!videoMuted) {
      const videoTrack = props.localStream?.getVideoTracks()[0];
      if (videoTrack != null) {
        newStream.addTrack(videoTrack);
      }
    }
    props.vpaas.setStream(newStream);
    props.onLocalStreamChange(newStream);
  }
  setAudioMuted(!audioMuted);
};
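To see the track bookkeeping of the unmute branch in isolation, here is a framework-free sketch that models it with plain objects (the Track type and the helper name are hypothetical, not real MediaStreamTracks or part of the SDK):

```typescript
// Hypothetical stand-in for MediaStreamTrack: just a kind and an id.
interface Track { kind: 'audio' | 'video'; id: string }

// Mirrors the unmute-audio branch: the new stream starts with the freshly
// requested audio track, and keeps the old video track when video isn't muted.
const buildUnmuteAudioTracks = (
  newAudioTrack: Track,
  previousTracks: Track[],
  videoMuted: boolean
): Track[] => {
  const tracks: Track[] = [newAudioTrack];
  if (!videoMuted) {
    const videoTrack = previousTracks.find((t) => t.kind === 'video');
    if (videoTrack != null) {
      tracks.push(videoTrack);
    }
  }
  return tracks;
};
```

With videoMuted set to false the result contains the new audio track plus the old video track; with videoMuted set to true it contains only the new audio track.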
Mute video
Now we will define what happens when the user clicks on the video mute button. The tasks are similar to what we did for the audio, but with some small differences:

Mute video: The first thing we do is stop all the video tracks. This frees the camera and its LED will turn off. One difference from audio is that we want to notify the application that the localStream changed, so that it can update the SelfView. To do that, we perform the following tasks:
- Clone the localStream.
- Stop all the audio tracks from the old stream.
- Notify the application that the localStream changed through onLocalStreamChange.
Unmute video: The process to unmute the video is the same that we followed to unmute the audio:
- Request a new MediaStream using getUserMedia. Take into account that we only request the stream for video (audio: false).
- If the audio isn't muted, take the audio tracks from the previous stream and attach them to the new stream.
- Send the updated media stream to VPaaS.
- Finally, notify the application that the localStream changed through onLocalStreamChange.

The last step is to update the current value of videoMuted.
const handlePressMuteVideo = async (): Promise<void> => {
  if (!videoMuted) {
    props.localStream?.getVideoTracks().forEach(track => {
      track.stop();
    });
    const clonedStream = props.localStream?.clone();
    props.localStream?.getAudioTracks().forEach(track => {
      track.stop();
    });
    props.onLocalStreamChange(clonedStream);
  } else {
    const newStream = await navigator.mediaDevices.getUserMedia({
      audio: false,
      video: true,
    });
    if (!audioMuted) {
      const audioTrack = props.localStream?.getAudioTracks()[0];
      if (audioTrack != null) {
        newStream.addTrack(audioTrack);
      }
    }
    props.onLocalStreamChange(newStream);
    props.vpaas.setStream(newStream);
  }
  setVideoMuted(!videoMuted);
};
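The order of operations in the mute branch matters: cloning copies each track's current state, so the clone keeps a live audio track while every track on the original stream ends up stopped. A small model illustrates this (FakeTrack and muteVideoModel are hypothetical stand-ins for illustration, not browser APIs):

```typescript
// Hypothetical stand-in for MediaStreamTrack. In the browser, clone() copies
// a track's current readyState, and stop() only affects that instance.
interface FakeTrack { kind: 'audio' | 'video'; readyState: 'live' | 'ended' }

const muteVideoModel = (stream: FakeTrack[]): FakeTrack[] => {
  // 1. Stop the video tracks on the original stream (frees the camera).
  stream.filter((t) => t.kind === 'video').forEach((t) => { t.readyState = 'ended'; });
  // 2. Clone the stream: each cloned track copies the current readyState.
  const cloned = stream.map((t) => ({ ...t }));
  // 3. Stop the old audio tracks; the cloned audio tracks stay live.
  stream.filter((t) => t.kind === 'audio').forEach((t) => { t.readyState = 'ended'; });
  return cloned;
};
```

After this runs, the cloned stream has an ended video track (so the SelfView can be hidden) and a live audio track, while the original stream has been fully stopped.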
Disconnect
Finally, we will implement the disconnect button. This feature leaves the current meeting and returns to the component for creating meetings. The tasks are:
- Stop all the audio and video tracks. This will free the use of the microphone and camera.
- Disconnect from VPaaS.
- Redirect to the component that we can use to create a new meeting.
const handlePressDisconnect = (): void => {
  props.localStream?.getTracks().forEach(track => {
    track.stop();
  });
  props.vpaas.disconnect();
  navigate('/meetings');
};
Render the component
Now that we have all the logic in place, we will start building the visual interface. We will create one button for each function. Each button will have a tooltip that will display a label when the user's mouse hovers over the button.
return (
  <div className='Toolbar'>
    <Tooltip text={`${audioMuted ? 'Unmute' : 'Mute'} audio`}>
      <Button
        variant='translucent'
        modifier='square'
        onClick={() => { handlePressMuteAudio().catch((e) => { console.error(e) }) }}
        isActive={audioMuted}
      >
        <Icon source={audioMuted ? IconTypes.IconMicrophoneOff : IconTypes.IconMicrophoneOn} />
      </Button>
    </Tooltip>
    <Tooltip text={`${videoMuted ? 'Unmute' : 'Mute'} video`}>
      <Button
        variant='translucent'
        modifier='square'
        onClick={() => { handlePressMuteVideo().catch((e) => { console.error(e) }) }}
        isActive={videoMuted}
      >
        <Icon source={videoMuted ? IconTypes.IconVideoOff : IconTypes.IconVideoOn} />
      </Button>
    </Tooltip>
    <Tooltip text='Disconnect'>
      <Button variant='danger' modifier='square' onClick={handlePressDisconnect}>
        <Icon source={IconTypes.IconLeave} />
      </Button>
    </Tooltip>
  </div>
)
Meeting component
Now we will jump to the file ./client/src/Meeting/Meeting.tsx and make some changes to improve the user experience and display the toolbar.
We start by importing useMemo and Toolbar:
import {useEffect, useMemo, useState} from 'react';
import {Toolbar} from './Toolbar/Toolbar';
We will implement a new function to detect whether a video stream is active. This way we can change the layout and display an image when we aren't receiving a stream, which could happen if another participant muted their video.
const isStreamActive = (stream: MediaStream | undefined): boolean => {
  return (
    stream?.getVideoTracks().some(track => track.readyState === 'live') ?? false
  );
};
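The same check can be exercised outside the browser with a minimal model of the stream (TrackLike, StreamLike and isStreamActiveModel are hypothetical names for this sketch; the real function takes a MediaStream):

```typescript
// Hypothetical minimal model of a MediaStream for illustration.
interface TrackLike { readyState: 'live' | 'ended' }
interface StreamLike { getVideoTracks: () => TrackLike[] }

const isStreamActiveModel = (stream: StreamLike | undefined): boolean => {
  return (
    stream?.getVideoTracks().some((track) => track.readyState === 'live') ?? false
  );
};

const liveStream: StreamLike = {
  getVideoTracks: () => [{ readyState: 'live' }],
};
const endedStream: StreamLike = {
  getVideoTracks: () => [{ readyState: 'ended' }],
};

console.log(isStreamActiveModel(liveStream));  // true
console.log(isStreamActiveModel(endedStream)); // false
console.log(isStreamActiveModel(undefined));   // false
```

Note that a stream whose video tracks were all stopped (as in the mute-video handler) reports 'ended' tracks, so the check returns false for it.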
Next we will use the useMemo hook in the Selfview. With this modification it will only be re-rendered when the videoTrackId changes. Without it, the Selfview would be re-rendered each time the user mutes or unmutes audio, because we receive a new localStream.
const videoTracks = localStream?.getVideoTracks()
const videoTrackId =
  videoTracks != null && videoTracks.length !== 0 ? videoTracks[0].id : ''

// Only re-render the selfie if the videoTrack id changes
const selfie = useMemo(
  (): JSX.Element => (
    <Selfview
      className="SelfView"
      isVideoInputMuted={false}
      shouldShowUserAvatar={false}
      username="User"
      localMediaStream={localStream}
    />
  ),
  [videoTrackId]
)
Now we will add the condition isStreamActive(localStream) to render the Selfview that we have modified:
<div className="PipContainer">
  {isStreamActive(localStream) && selfie}
</div>
And finally, we render the toolbar:
{vpaas != null && (
  <Toolbar
    vpaas={vpaas}
    localStream={localStream}
    onLocalStreamChange={setLocalStream}
  />
)}
Run the app
If you want to run the application, you will need to launch the server and the client at the same time. We start by launching the server:
$ cd server
$ npm start
And in another terminal we launch the client:
$ cd client
$ npm start
The browser will open automatically at the URL https://localhost:4000.
Once you have joined a meeting, you will see the following interface with the toolbar:
You can compare your code with the solution in Step.04-Solution-Mute-audio-and-video. You can also check the differences with the previous lesson in the git diff.