Comparing Zoom Video JS SDK to 100ms
December 15, 2023 · 5 min read
Zoom is the go-to video conferencing tool for many people. It covers a wide range of use cases with a rich feature set for all sorts of virtual meetings, and its widespread adoption reflects its reliability in facilitating seamless virtual communication, making it a preferred choice for individuals and businesses alike.
However, even with its extensive capabilities, Zoom is not without limitations. The custom code needed to make its SDK work adds a layer of complexity to the integration process. Furthermore, Zoom's reliance on a proprietary implementation of the H.264 video codec and its lack of native support for WebRTC can pose hurdles for teams seeking more standardized, open solutions.
In this article, we compare some of the core areas where developers may encounter challenges when integrating the Zoom SDK, and briefly touch upon how the 100ms SDK solves these challenges so developers spend less time writing low-level handling code and more time building high-quality applications.
Feature | Zoom SDK | 100ms SDK
---|---|---
SDK size | 75 MB | 4 MB uncompressed (more details)
Max tiles with audio/video on in a single page | 9 tiles | 49 tiles
Automatic bandwidth management | Custom code required to handle audio and video track degradation | Automatic bandwidth management, with simulcast control from the 100ms dashboard
Max quality | 720p | 1080p
Live streaming | Not available | One SDK for both conferencing and livestreaming
Cloud recording | Limited customisation available | Flexible recording APIs to compose tracks
State management | Lacks dynamic state management | Real-time, component-specific state management
Webhooks | Limited to cloud recording scenarios | Real-time notification updates sent to your server
Analytics | Aggregated analytics limited to a session | REST APIs to build insights at peer and session level
Zoom's SDK asks a lot of intricate code handling from developers: complex video rendering with manually specified coordinates, extra code to unlock certain features, a 720p ceiling on video rendering, audio quirks on Firefox and Safari, and manual state management all add to the developer's workload.
Zoom allows rendering video streams on a canvas based on changes in the peer video state. Key challenges that developers come across:
- Since Zoom renders video onto a canvas rather than standard `<audio>` and `<video>` elements, developers lose the power of plain HTML and CSS that makes dynamic tile layouts easy to build.
- With Zoom's SDK, developers must handle a lot of intricacy on the application side: canvas rendering, managing participants (who joined, who left, who the active speaker is, and where to place them; here's the math for rendering multiple videos on a canvas with Zoom), and determining the placement of each element.
Unlike Zoom, the 100ms SDK automatically attaches the media source to a regular video element wherever it is present in the UI, and the element can be customised with CSS, as the sketch below shows.
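Here is a minimal sketch of that flow with the 100ms web SDK. It assumes `hmsStore` and `hmsActions` are exported from a setup module like the one shown in the state-management section below, and the selector and method names should be double-checked against the 100ms docs.

```ts
import { selectPeers } from '@100mslive/hms-video-store';
import { hmsStore, hmsActions } from './hms'; // hypothetical module with the store setup shown later

// Render one plain <video> element per peer and let the SDK attach the track.
function renderPeers(peers: ReturnType<typeof selectPeers>) {
  const container = document.querySelector('#peers-container')!;
  container.innerHTML = '';
  peers.forEach((peer) => {
    const video = document.createElement('video');
    video.autoplay = true;
    video.muted = true;
    video.playsInline = true;
    video.classList.add('peer-tile'); // style the tile with ordinary CSS
    container.appendChild(video);
    if (peer.videoTrack) {
      // attachVideo wires the peer's track to the element; no canvas math needed
      hmsActions.attachVideo(peer.videoTrack, video);
    }
  });
}

// Re-render whenever the peer list changes.
hmsStore.subscribe(renderPeers, selectPeers);
```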
With the Zoom SDK, you have to use SharedArrayBuffer, which adds another layer of complexity. Without SharedArrayBuffer, features like Gallery View, Virtual Background, and 720p video are unavailable. See one developer struggling with it here: https://devforum.zoom.us/t/struggling-with-sharedbufferarray-and-cors/87881
Setting up SharedArrayBuffer for rendering multiple videos requires the page to be cross-origin isolated, which in practice means serving it with the appropriate Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy headers (a sketch follows below).
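As a rough illustration, assuming a Node/Express server (the server framework is our choice for the example, not something Zoom requires), the cross-origin isolation headers that make SharedArrayBuffer available look like this:

```ts
import express from 'express';

const app = express();

// Cross-origin isolation: both headers are required for SharedArrayBuffer
// to be available in modern browsers.
app.use((_req, res, next) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  next();
});

app.use(express.static('public')); // serve the app that loads the Zoom Video SDK

app.listen(3000, () => console.log('listening on http://localhost:3000'));
```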
The Zoom SDK allows rendering only one 720p video on a webpage at a time, making higher-quality grid-style rendering challenging (Zoom Video SDK documentation).
Getting audio started on desktop Firefox and Safari requires additional logic, as outlined in the Zoom Video SDK documentation. This extra step may pose challenges for developers seeking seamless audio functionality across browsers; a rough sketch of the usual workaround follows.
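A minimal sketch that gates `startAudio()` behind a user gesture to satisfy browser autoplay policies. The Zoom Video SDK calls used here (`ZoomVideo.createClient`, `client.init`, `client.join`, `client.getMediaStream`, `stream.startAudio`) reflect our reading of the docs and should be verified against the SDK version you use.

```ts
import ZoomVideo from '@zoom/videosdk';

const client = ZoomVideo.createClient();

async function joinAndStartAudio(topic: string, token: string, userName: string) {
  await client.init('en-US', 'Global');
  await client.join(topic, token, userName);

  const stream = client.getMediaStream();

  // Desktop Safari and Firefox block audio that is not triggered by a user
  // gesture, so start audio from a click handler instead of right after join.
  const button = document.querySelector<HTMLButtonElement>('#start-audio')!;
  button.addEventListener('click', async () => {
    await stream.startAudio();
    button.disabled = true;
  });
}
```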
Managing state in a web app can be tricky, but it's crucial for things like keeping track of who's in a room and recording user actions so they can be used across your app.
With Zoom, you have to maintain this yourself: who is present in the room, which user has performed which action, and so on. A rough sketch of what that hand-rolled tracking looks like follows.
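For illustration, a hedged sketch of hand-rolled participant tracking on top of the Zoom Video SDK's events. The event names ('user-added', 'user-removed') match our understanding of the SDK; the payload shape is simplified and should be checked against the docs.

```ts
import ZoomVideo from '@zoom/videosdk';

// Minimal shape we care about; the real payload carries more fields.
interface TrackedUser {
  userId: number;
  displayName?: string;
}

const client = ZoomVideo.createClient();

// Hand-rolled room state that every component has to reach into directly.
const usersInRoom = new Map<number, TrackedUser>();

client.on('user-added', (payload: any) => {
  (payload as TrackedUser[]).forEach((user) => usersInRoom.set(user.userId, user));
});

client.on('user-removed', (payload: any) => {
  (payload as TrackedUser[]).forEach((user) => usersInRoom.delete(user.userId));
});

// There is no built-in subscription mechanism, so any UI that needs the
// roster has to poll or be wired to re-read this map itself.
export function getRoster(): TrackedUser[] {
  return Array.from(usersInRoom.values());
}
```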
The sample code Zoom provides for maintaining state in an application is rather basic and has certain limitations:
- Passing `zoomClient`, `mediaStream`, and audio/video tracks around to wherever they are needed introduces complexity, potentially hindering the maintenance of an efficient codebase.

100ms uses `HMSReactiveStore` for managing reactive state in a video application. It sets up the store to notify subscribers about changes immediately upon subscription, and it exports the store and its actions so they are available to other parts of the application; a sketch of this setup follows. This store aims to solve the limitations of Zoom's state management.
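A minimal sketch of that setup with the 100ms web SDK; the package and method names below (`HMSReactiveStore`, `triggerOnSubscribe`, `getStore`, `getActions`) follow the 100ms docs as we understand them.

```ts
// hms.ts – single place where the 100ms store is created and shared.
import {
  HMSReactiveStore,
  selectPeers,
  selectIsConnectedToRoom,
} from '@100mslive/hms-video-store';

const hms = new HMSReactiveStore();

// Notify subscribers with the current state immediately when they subscribe,
// so components don't have to wait for the next change to render.
hms.triggerOnSubscribe();

// Export the reactive store and the actions so any part of the app can
// read state or perform actions without passing clients and tracks around.
export const hmsStore = hms.getStore();
export const hmsActions = hms.getActions();

// Example: any module can now import hmsStore and subscribe to slices of state.
hmsStore.subscribe((peers) => console.log('peers in room:', peers.length), selectPeers);
hmsStore.subscribe((connected) => console.log('connected:', connected), selectIsConnectedToRoom);
```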
While Zoom’s client is great for video conferencing, in our opinion the Zoom Video SDK leaves a lot of the heavy lifting to developers to achieve an ideal user experience. In practice, this adds several cycles of error handling just to get close to production quality. Their non-standard WebRTC implementation also puts a ceiling on the user experience that can be achieved in a browser.
Custom video SDKs like 100ms come baked in with sensible defaults (state management, video layouts using HTML, dynamic quality changes) that allow developers to go live with fewer errors and higher-quality user experiences.
We believe in empowering developers to create rich, real-time applications with simplicity and compatibility, abstracting away low-level code handling; the 100ms Web SDK is built with these goals in mind.
For more details, check out the Web SDK mental model. Want to start building? Head to the JS quickstart guide.