This is the official iOS SDK for StreamVideo, a platform for building apps with video and audio calling support. The repository includes a low-level SDK and a set of reusable UI components, available in both UIKit and SwiftUI.
Stream allows developers to rapidly deploy scalable feeds, chat messaging and video with an industry-leading 99.999% uptime SLA guarantee.
With Stream's video components, you can use our SDK to build in-app video calling, audio rooms, audio calls, or live streaming. The best place to get started is with our tutorials:
Stream provides UI components and state handling that make it easy to build video calling for your app. All calls run on Stream's network of edge servers around the world, ensuring optimal latency and reliability.
Stream is free for most side and hobby projects. To qualify, your project/company needs to have < 5 team members and < $10k in monthly revenue. Makers get $100 in monthly credit for video for free.
Here are some of the features we support:
- Developer experience: Great SDKs, docs, tutorials and support so you can build quickly
- Edge network: Servers around the world ensure optimal latency and reliability
- Chat: Stored chat, reactions, threads, typing indicators, URL previews, etc.
- Security & Privacy: Based in the USA and EU, SOC 2 certified, GDPR compliant
- Dynascale: Automatically switch resolutions, fps, bitrate, codecs and paginate video on large calls
- Screen sharing
- Picture in picture support
- Active speaker
- Custom events
- Geofencing
- Notifications and ringing calls
- Opus DTX & RED for reliable audio
- Webhooks & SQS
- Backstage mode
- Flexible permissions system
- Joining calls by ID, link or invite
- Enabling and disabling audio and video when in calls
- Flipping, enabling and disabling the camera in calls
- Enabling and disabling speakerphone in calls
- Push notification providers support
- Call recording
- Broadcasting to HLS
- Noise cancellation
Check our docs to get more details about the supported features and integration guides.
This repository contains the following parts:
- low-level client for calling (can be used standalone if you want to build your own UI)
- SwiftUI SDK (UI components developed in SwiftUI)
- UIKit SDK (wrappers for easier usage in UIKit apps)
- Progressive disclosure: The SDK can be used easily with very minimal knowledge of it. As you become more familiar with it, you can dig deeper and start customizing it on all levels.
- Swift native API: Uses Swift's powerful language features to make the SDK usage easy and type-safe.
- Familiar behavior: The UI elements are good platform citizens and behave like native elements; they respect tintColor, padding, light/dark mode, dynamic font sizes, etc.
- Fully open-source implementation: You have access to the complete source code of the SDK on GitHub.
The low-level client is used for establishing audio and video calls. It integrates with Stream's backend infrastructure, and implements the WebRTC protocol.
Here are the most important components that the low-level client provides:
- StreamVideo - the main SDK object.
- Call - an object that provides info about the call state, as well as methods for updating it.
This is the main object for interfacing with the low-level client. It needs to be initialized with an API key and a user/token, before the SDK can be used.
let streamVideo = StreamVideo(
    apiKey: "key1",
    user: user.userInfo,
    token: user.token,
    videoConfig: VideoConfig(),
    tokenProvider: { result in
        yourNetworkService.loadToken(completion: result)
    }
)
The Call class provides all the information about the call, such as its participants, whether the call is being recorded, etc. It also provides methods to perform standard actions available during a call, such as muting/unmuting users, sending reactions, changing the camera input, granting permissions, recording, etc.
You can create a new Call via StreamVideo's func call(callType: String, callId: String, members: [Member]) method.
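As a rough sketch of that flow (the "default" call type and the identifiers below are placeholders, and the async join(create:) call is an assumption about the Call API):

```swift
// Create a Call object from the StreamVideo client (IDs are illustrative).
let call = streamVideo.call(
    callType: "default",
    callId: "my-first-call",
    members: []
)

// Joining is an async operation; `create: true` is assumed to create the
// call on the backend if it does not exist yet.
Task {
    do {
        try await call.join(create: true)
    } catch {
        print("Failed to join call: \(error)")
    }
}
```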
The SwiftUI SDK provides out of the box UI components, ready to be used in your app.
The simplest way to add calling support to your hosting view is to attach the CallModifier:
struct CallView: View {

    @StateObject var viewModel: CallViewModel

    init() {
        _viewModel = StateObject(wrappedValue: CallViewModel())
    }

    var body: some View {
        HomeView(viewModel: viewModel)
            .modifier(CallModifier(viewModel: viewModel))
    }
}
You can customize the look and feel of the screens presented in the calling flow by implementing the corresponding methods in our ViewFactory.
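For example, a custom factory might override one screen while inheriting the defaults for the rest (a minimal sketch; the makeCallControlsView method name and the CustomCallControlsView view are assumptions, not confirmed API):

```swift
import SwiftUI

// Hypothetical custom factory; `makeCallControlsView` is assumed to be one of
// the customization points exposed by the SDK's ViewFactory protocol.
class CustomViewFactory: ViewFactory {

    func makeCallControlsView(viewModel: CallViewModel) -> some View {
        // Return your own controls in place of the default ones;
        // CustomCallControlsView is a placeholder for your own SwiftUI view.
        CustomCallControlsView(viewModel: viewModel)
    }
}
```

Methods you do not implement fall back to the SDK's default components, so you only need to override the screens you want to change.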
Most of our components are public, so you can use them as building blocks if you want to build your custom UI.
All the texts, images, fonts and sounds used in the SDK are configurable via our Appearance class, to help you brand the views so they are in line with your hosting app.
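A sketch of theming via Appearance (the Colors type and its tintColor property are assumptions about the Appearance API; check the docs for the exact customization points):

```swift
import SwiftUI

// Assumed shape: Appearance groups colors, fonts, images and sounds.
var colors = Colors()
colors.tintColor = Color(.systemPink)

// Pass the customized appearance when setting up the SDK's UI layer.
let appearance = Appearance(colors: colors)
```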
The UIKit SDK provides UIKit wrappers around the SwiftUI views. Its main integration point is the CallViewController, which you can easily push onto your navigation stack or present as a modal screen.
private func didTapStartButton() {
    let next = CallViewController.make(with: callViewModel)
    next.modalPresentationStyle = .fullScreen
    next.startCall(
        callType: "default",
        callId: callId,
        members: members
    )
    self.navigationController?.present(next, animated: true)
}
The CallViewController is created with a CallViewModel - the same one used in our SwiftUI SDK.
At the moment, all the customizations in the UIKit SDK need to be done in SwiftUI.
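Because the same CallViewModel backs both SDKs, UIKit code can still react to call state with Combine (a sketch; it assumes CallViewModel is an ObservableObject exposing a published callingState property, which is not confirmed here):

```swift
import Combine

// Assumed: CallViewModel publishes its calling state.
var cancellables = Set<AnyCancellable>()

callViewModel.$callingState
    .receive(on: DispatchQueue.main)
    .sink { state in
        // Update your UIKit UI in response to call state changes.
        print("Calling state changed: \(state)")
    }
    .store(in: &cancellables)
```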
The video roadmap and changelog are available here.
- Toast views
- Test coverage
- Lobby updates (show participants and events)
- Support settings.audio.default_device
- Report SDK version number on all API calls
- Fix AppClips
- Pagination on query channels
- Remote pinning of users
- Add chat to the AppStore app (and app clip if possible)
- Stability
- API integration tests
- CPU usage improvements
- Audio Room UIKit tutorial
- Improved chat integration
- Screensharing from mobile
- Button to switch speakerphone/earpiece
- Audio filters
- Picture-in-picture support
- Enable SFU switching
- Address testing feedback
- Call Analytics / Stats
- Thermal state improvements
- Test with many participants
- Testing on more devices
- Tap to focus
- Complete reconnection flows
- Camera controls (zooming, tap to focus)
- Picture-in-picture improvements
- Blur & AI video filters
- Analytics and stats for calls
- Standardization across SDKs
- Livestream, default video player UI for all SDKs
- Improved CallKit integration
- Benchmarks for audio rooms and livestreams
- Improve noise reduction/cancellation
- Improved support for teams & multi-tenant
- Session timers
- RTMP out
- Reconnection V2
- PiP improvements
- Missed calls support
- Joining calls ahead of time
- Manual quality selection (currently it's always automatic)
- Improve performance on lower end devices
- AV1 & VP9 support
- Closed Captions and multi language support for transcriptions
- Codec negotiation
- Waiting rooms
- Audio only call tutorial for each SDK
- Query call session endpoint + better missed calls support
- SIP
- Breakout rooms
- Transcription Summaries
- Ingress for SRT, RTSP, SDI, NDI, MTS/MPEG-2 TS, RIST and Zixi
- Whiteboards
- RTSP input (via egress, same as RTMP input)
- WHEP