
Receive presentation

In this lesson we will learn how to receive a presentation from any other participant. The user that is presenting will be able to share their whole screen, a window, a tab or even a file, such as a PDF.

During this lesson, we will perform the following tasks:

  • Show the presentation in the main video region.

  • Move the remote video to a second thumbnail.

  • Swap between presentation and remote video if we click on the second thumbnail.

You can download the initial code from Step.05-Exercise-Receive-presentation.

Modify the conference ViewModel

As always, we will define some new LiveData that will convey the presentation state to the fragment and layout.

// Presentation VideoTrack
private val _presentationVideoTrack = MutableLiveData<VideoTrack?>()
val presentationVideoTrack: LiveData<VideoTrack?>
    get() = _presentationVideoTrack

Now, inside startWebRTCConnection(), we register a callback on the media connection that will receive the presentation videoTrack.

private fun startWebRTCConnection(
    conference: InfinityConference,
    localAudioTrack: LocalAudioTrack,
    localVideoTrack: CameraVideoTrack
) {
    ...
    // Define a callback method for when the presentation is received
    val presentationVideoTrackListener = MediaConnection.RemoteVideoTrackListener { videoTrack ->
        _presentationVideoTrack.postValue(videoTrack)
    }
    // Attach the callback to the media connection
    mediaConnection.registerPresentationRemoteVideoTrackListener(presentationVideoTrackListener)

    // Start the media connection
    mediaConnection.start()
}

Now we have to modify the configureConferenceListeners method and listen for two new events: PresentationStartConferenceEvent and PresentationStopConferenceEvent.

private fun configureConferenceListeners(conference: InfinityConference) {
    conference.registerConferenceEventListener(ConferenceEventListener { event ->
        when (event) {
            is DisconnectConferenceEvent -> {
                _isConnected.postValue(false)
            }
            is PresentationStartConferenceEvent -> {
                mediaConnection.startPresentationReceive()
                _isPresentationInMain.postValue(true)
            }
            is PresentationStopConferenceEvent -> {
                mediaConnection.stopPresentationReceive()
                _isPresentationInMain.postValue(false)
                _presentationVideoTrack.postValue(null)
            }
            else -> {
                Log.d("ConferenceViewModel", event.toString())
            }
        }
    })
}

Next we will add a new piece of functionality. When a presentation is active, users will want to choose what to see: either the presentation in fullscreen with the other participants in a small thumbnail, or the other way around.

We will need to add a Boolean LiveData value that indicates whether the presentation should be shown in the main SurfaceViewRenderer or in the small thumbnail.

// Presentation in main region
private val _isPresentationInMain = MutableLiveData<Boolean>()
val isPresentationInMain: LiveData<Boolean>
    get() = _isPresentationInMain

Now we will define a public method that will change the LiveData state.

fun onSwapMainSecondaryVideos() {
    _isPresentationInMain.value = _isPresentationInMain.value != true
}
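Note that _isPresentationInMain.value is nullable, so the != true comparison doubles as a null-safe toggle: an uninitialized (null) value behaves like false. The mapping can be checked in plain Kotlin (a sketch independent of any SDK class):

// Null-safe toggle used by onSwapMainSecondaryVideos():
// `current != true` maps null -> true, false -> true, true -> false,
// so a LiveData that was never set toggles as if it held false.
fun toggle(current: Boolean?): Boolean = current != true

fun main() {
    println(toggle(null))   // true
    println(toggle(false))  // true
    println(toggle(true))   // false
}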

Modify the conference layout

Now it's time to add a new SurfaceViewRenderer to the conference layout. This view will only be visible while a presentation is active. Also, if you check the properties, it has a click listener: clicking it will swap the videoTrack shown on this surface with the one attached to the main surface.

<androidx.cardview.widget.CardView
    android:id="@+id/secondary_video_card"
    android:layout_width="150dp"
    android:layout_height="150dp"
    android:layout_marginStart="16dp"
    android:layout_marginTop="16dp"
    android:onClick="@{() -> viewModel.onSwapMainSecondaryVideos()}"
    app:cardCornerRadius="0dp"
    app:goneUnless="@{viewModel.presentationVideoTrack != null}"
    app:layout_constraintStart_toEndOf="@id/local_video_card"
    app:layout_constraintTop_toTopOf="parent">

    <com.pexip.sdk.media.webrtc.SurfaceViewRenderer
        android:id="@+id/secondary_video_surface"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center" />

</androidx.cardview.widget.CardView>
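The app:goneUnless attribute is not a standard Android attribute; it relies on a custom @BindingAdapter defined elsewhere in this project that toggles a view between VISIBLE and GONE. Its logic can be modeled without Android classes roughly as follows (a sketch; the real adapter sets view.visibility on an android.view.View):

// Model of the goneUnless binding-adapter logic, using the numeric
// values of android.view.View.VISIBLE (0) and View.GONE (8).
const val VISIBLE = 0
const val GONE = 8

fun goneUnless(visible: Boolean): Int = if (visible) VISIBLE else GONE

fun main() {
    println(goneUnless(true))   // 0 -> view is shown
    println(goneUnless(false))  // 8 -> view is hidden and takes no space
}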

Modify the conference fragment

We need to add a call to a new method in onCreateView(). This method, setPresentationObservers(), watches for changes in the LiveData related to the presentation.

override fun onCreateView(
    inflater: LayoutInflater, container: ViewGroup?,
    savedInstanceState: Bundle?
): View? {
    ...
    setPresentationObservers()
    ...
}

In this method we observe two LiveData variables. The first reacts when a new presentation videoTrack is detected; the second manages what happens when the user swaps what is shown in the main video. Depending on the value of presentationInMain, we display either the remote participants or the presentation in the main surface.

private fun setPresentationObservers() {
    viewModel.presentationVideoTrack.observe(viewLifecycleOwner, Observer { videoTrack ->
        if (videoTrack != null) {
            cleanMainSurface()
            cleanSecondarySurface()
            viewModel.remoteVideoTrack.value?.addRenderer(binding.secondaryVideoSurface)
            videoTrack.addRenderer(binding.mainVideoSurface)
        }
    })
    viewModel.isPresentationInMain.observe(viewLifecycleOwner, Observer { presentationInMain ->
        cleanMainSurface()
        cleanSecondarySurface()
        if (presentationInMain) {
            viewModel.remoteVideoTrack.value?.addRenderer(binding.secondaryVideoSurface)
            viewModel.presentationVideoTrack.value?.addRenderer(binding.mainVideoSurface)
        } else {
            viewModel.remoteVideoTrack.value?.addRenderer(binding.mainVideoSurface)
            viewModel.presentationVideoTrack.value?.addRenderer(binding.secondaryVideoSurface)
        }
    })
}
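The renderer assignment above follows a simple rule: whichever track is "in main", the other goes to the thumbnail. It can be modeled as a pure function for clarity (the type and names here are illustrative, not SDK API):

// Pure model of which track each surface shows after the observers run.
data class Assignment(val mainSurface: String, val secondarySurface: String)

fun assignSurfaces(presentationInMain: Boolean): Assignment =
    if (presentationInMain)
        Assignment(mainSurface = "presentation", secondarySurface = "remote")
    else
        Assignment(mainSurface = "remote", secondarySurface = "presentation")

fun main() {
    println(assignSurfaces(true))
    println(assignSurfaces(false))
}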

Now we will define a function that detaches the mainVideoSurface from all the available videoTracks and clears the image it is displaying.

private fun cleanMainSurface() {
    viewModel.remoteVideoTrack.value?.removeRenderer(binding.mainVideoSurface)
    viewModel.presentationVideoTrack.value?.removeRenderer(binding.mainVideoSurface)
    binding.mainVideoSurface.clearImage()
}

We will do the same for the secondaryVideoSurface.

private fun cleanSecondarySurface() {
    viewModel.remoteVideoTrack.value?.removeRenderer(binding.secondaryVideoSurface)
    viewModel.presentationVideoTrack.value?.removeRenderer(binding.secondaryVideoSurface)
    binding.secondaryVideoSurface.clearImage()
}

We need to initialize the secondaryVideoSurface inside the initializeVideoSurfaces method.

private fun initializeVideoSurfaces() {
    ...
    binding.secondaryVideoSurface.init(viewModel.eglBase.eglBaseContext, null)
}

The final step is to release the secondaryVideoSurface when the fragment is destroyed.

override fun onDestroyView() {
    ...
    viewModel.presentationVideoTrack.value?.removeRenderer(binding.secondaryVideoSurface)
    binding.secondaryVideoSurface.release()
}

Run the app

At this point we are ready to receive a presentation. To test it, join a conference using your Android app, then use your computer with Web App 3 to join the same conference. Now you only have to activate "share screen" in Web App 3 and you will see the shared content on your phone.

You can compare your code with the solution in Step.05-Solution-Receive-presentation. You can also check the differences with the previous tutorial in the git diff.
