GSTREAMER 1.16 RELEASE NOTES
GStreamer 1.16.0 was originally released on 19 April 2019.
See https://gstreamer.freedesktop.org/releases/1.16/ for the latest
version of this document.
Last updated: Friday 19 April 2019, 00:00 UTC
Introduction
The GStreamer team is proud to announce a new major feature release in
the stable 1.x API series of your favourite cross-platform multimedia
framework!
As always, this release is again packed with many new features, bug
fixes and other improvements.
Highlights
- GStreamer WebRTC stack gained support for data channels for
peer-to-peer communication based on SCTP, BUNDLE support, as well as
support for multiple TURN servers.
- AV1 video codec support for Matroska and QuickTime/MP4 containers
and more configuration options and supported input formats for the
AOMedia AV1 encoder
- Support for Closed Captions and other Ancillary Data in video
- Support for planar (non-interleaved) raw audio
- GstVideoAggregator, compositor and OpenGL mixer elements are now in
-base
- New alternate fields interlace mode where each buffer carries a
single field
- WebM and Matroska ContentEncryption support in the Matroska demuxer
- new WebKit WPE-based web browser source element
- Video4Linux: HEVC encoding and decoding, JPEG encoding, and improved
dmabuf import/export
- Hardware-accelerated Nvidia video decoder gained support for VP8/VP9
decoding, whilst the encoder gained support for H.265/HEVC encoding.
- Many improvements to the Intel Media SDK based hardware-accelerated
video decoder and encoder plugin (msdk): dmabuf import/export for
zero-copy integration with other components; VP9 decoding; 10-bit
HEVC encoding; video post-processing (vpp) support including
deinterlacing; and the video decoder now handles dynamic resolution
changes.
- The ASS/SSA subtitle overlay renderer can now handle multiple
subtitles that overlap in time and will show them on screen
simultaneously
- The Meson build is now feature-complete (*) and it is now the
recommended build system on all platforms. The Autotools build is
scheduled to be removed in the next cycle.
- The GStreamer Rust bindings and Rust plugins module are now
officially part of upstream GStreamer.
- The GStreamer Editing Services gained a gesdemux element that allows
directly playing back serialized edit lists with playbin or
(uri)decodebin
- Many performance improvements
Major new features and changes
Noteworthy new API
- GstAggregator has a new "min-upstream-latency" property that forces
a minimum aggregate latency for the input branches of an aggregator.
This is useful for dynamic pipelines where branches with a higher
latency might be added later after the pipeline is already up and
running and where a change in the latency would be disruptive. This
only applies to the case where at least one of the input branches is
live, though; it won’t force the aggregator into live mode in the
absence of any live inputs.
- GstBaseSink gained a "processing-deadline" property and
setter/getter API to configure a processing deadline for live
pipelines. The processing deadline is the acceptable amount of time
to process the media in a live pipeline before it reaches the sink.
This is on top of the systemic latency that is normally reported by
the latency query. This defaults to 20ms and should make pipelines
such as v4l2src ! xvimagesink not claim that all frames are late in
the QoS events. Ideally, this should replace the "max-lateness"
property for most applications (see the sketch after this list).
- RTCP Extended Reports (XR) parsing according to RFC 3611:
Loss/Duplicate RLE, Packet Receipt Times, Receiver Reference Time,
Delay since the Last Receiver Report (DLRR), Statistics Summary, and VoIP
Metrics reports. This only provides the ability to parse such
packets, generation of XR packets is not supported yet and XR
packets are not automatically parsed by rtpbin / rtpsession but must
be actively handled by the application.
- a new mode for interlaced video was added where each buffer carries
a single field of interlaced video, with buffer flags indicating
whether the field is the top field or bottom field. Top and bottom
fields are expected to alternate in this mode. Caps for this
interlace mode must also carry a format:Interlaced caps feature to
ensure backwards compatibility.
- The video library has gained support for three new raw pixel
formats:
- Y410: packed 4:4:4 YUV, 10 bits per channel
- Y210: packed 4:2:2 YUV, 10 bits per channel
- NV12_10LE40: fully-packed 10-bit variant of NV12_10LE32,
i.e. without the padding bits
- GstRTPSourceMeta is a new meta that can be used to transport
information about the origin of depayloaded or decoded RTP buffers,
e.g. when mixing audio from multiple sources into a single stream. A
new "source-info" property on the RTP depayloader base class
determines whether depayloaders should put this meta on outgoing
buffers. Similarly, the same property on RTP payloaders determines
whether they should use the information from this meta to construct
the CSRCs list on outgoing RTP buffers.
- gst_sdp_message_from_text() is a convenience constructor to parse
SDPs from a string which is particularly useful for language
bindings.
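As a quick illustration of the new "min-upstream-latency" and
"processing-deadline" properties above, here is a minimal sketch of
setting them from application code (the pipeline string and the
chosen values are hypothetical, for illustration only):

    /* Minimal sketch: configure the new 1.16 latency-related properties.
     * The pipeline and the values used here are purely illustrative. */
    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline, *sink, *mix;

      gst_init (&argc, &argv);

      pipeline = gst_parse_launch (
          "videotestsrc is-live=true ! compositor name=mix ! "
          "xvimagesink name=sink", NULL);
      sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
      mix = gst_bin_get_by_name (GST_BIN (pipeline), "mix");

      /* acceptable processing time before the data reaches the sink
       * (GstBaseSink property, defaults to 20ms) */
      g_object_set (sink, "processing-deadline", 20 * GST_MSECOND, NULL);

      /* force a minimum aggregate latency so branches with a higher
       * latency can be added later without disrupting the running
       * pipeline (GstAggregator property) */
      g_object_set (mix, "min-upstream-latency", 100 * GST_MSECOND, NULL);

      gst_object_unref (sink);
      gst_object_unref (mix);
      gst_object_unref (pipeline);
      return 0;
    }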
Support for Planar (Non-Interleaved) Raw Audio
Raw audio samples are usually passed around in interleaved form in
GStreamer, which means that if there are multiple audio channels the
samples for each channel are interleaved in memory, e.g.
|LEFT|RIGHT|LEFT|RIGHT|LEFT|RIGHT| for stereo audio. A non-interleaved
or planar arrangement in memory would look like
|LEFT|LEFT|LEFT|RIGHT|RIGHT|RIGHT| instead, possibly with
|LEFT|LEFT|LEFT| and |RIGHT|RIGHT|RIGHT| residing in separate memory
chunks or separated by some padding.
GStreamer has had signalling for non-interleaved audio since version
1.0, but it was never actually properly implemented in any elements:
audioconvert would advertise support for it, but wasn’t actually able
to handle it correctly.
With this release we now have full support for non-interleaved audio as
well, which means more efficient integration with external APIs that
handle audio this way, but also more efficient processing of certain
operations like interleaving multiple 1-channel streams into a
multi-channel stream which can be done without memory copies now.
New API to support this has been added to the GStreamer Audio support
library: There is now a new GstAudioMeta which describes how data is
laid out inside the buffer, and buffers with non-interleaved audio must
always carry this meta. To access the non-interleaved audio samples you
must map such buffers with gst_audio_buffer_map() which works much like
gst_buffer_map() or gst_video_frame_map() in that it will populate a
little GstAudioBuffer helper structure passed to it with the number of
samples, the number of planes and pointers to the start of each plane in
memory. This function can also be used to map interleaved audio buffers
in which case there will be only one plane of interleaved samples.
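A minimal sketch of mapping a (possibly planar) audio buffer with the
new API; the dump_planes helper is hypothetical and the GstAudioInfo
is assumed to have been parsed from the negotiated caps:

    #include <gst/audio/audio.h>

    static void
    dump_planes (GstBuffer * buf, const GstAudioInfo * info)
    {
      GstAudioBuffer abuf;
      gint p;

      if (!gst_audio_buffer_map (&abuf, info, buf, GST_MAP_READ))
        return;

      /* interleaved audio has a single plane; planar (non-interleaved)
       * audio has one plane per channel */
      for (p = 0; p < abuf.n_planes; p++)
        g_print ("plane %d: %p, %" G_GSIZE_FORMAT " samples\n",
            p, abuf.planes[p], abuf.n_samples);

      gst_audio_buffer_unmap (&abuf);
    }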
Of course support for this has also been implemented in the various
audio helper and conversion APIs, base classes, and in elements such as
audioconvert, audioresample, audiotestsrc, audiorate.
Support for Closed Captions and Other Ancillary Data in Video
The video support library has gained support for detecting and
extracting Ancillary Data from videos as per the SMPTE S291M
specification, including:
- a VBI (Vertical Blanking Interval) parser that can detect and
extract Ancillary Data from Vertical Blanking Interval lines of
component signals. This is currently supported for videos in v210
and UYVY format.
- a new GstMeta for closed captions: GstVideoCaptionMeta. This
supports the two types of closed captions, CEA-608 and CEA-708,
along with the four different ways they can be transported (other
systems are a superset of those). See the sketch after this list.
- a VBI (Vertical Blanking Interval) encoder for writing ancillary
data to the Vertical Blanking Interval lines of component signals.
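As a quick illustration of the new GstVideoCaptionMeta mentioned
above, closed-caption data could be attached to a video buffer as
sketched below (the attach_captions helper and the CDP payload are
hypothetical):

    #include <gst/video/video.h>

    static void
    attach_captions (GstBuffer * video_buf, const guint8 * cdp, gsize cdp_len)
    {
      /* the caption type must match how the data is packed;
       * CEA-708 CDP packets are used here as an example */
      gst_buffer_add_video_caption_meta (video_buf,
          GST_VIDEO_CAPTION_TYPE_CEA708_CDP, cdp, cdp_len);
    }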
The new closedcaption plugin in gst-plugins-bad then makes use of all
this new infrastructure and provides the following elements:
- cccombiner: a closed caption combiner that takes a closed captions
stream and another stream and adds the closed captions as
GstVideoCaptionMeta to the buffers of the other stream.
- ccextractor: a closed caption extractor which will take
GstVideoCaptionMeta from input buffers and output them as a separate
closed captions stream.
- ccconverter: a closed caption converter that can convert between
different formats
- line21encoder, line21decoder: inject/extract line21 closed captions
to/from SD video streams
- cc708overlay: decodes CEA 608/708 captions and overlays them on
video
Additionally, the following elements have also gained Closed Caption
support:
- qtdemux and qtmux support CEA 608/708 Closed Caption tracks
- mpegvideoparse, h264parse extracts Closed Captions from MPEG-2/H.264
video streams
- avviddec, avvidenc, x264enc got support for extracting/injecting
Closed Captions
- decklinkvideosink can output closed captions and decklinkvideosrc
can extract closed captions
- playbin and playbin3 learned how to autoplug CEA 608/708 CC overlay
elements
- the externally maintained ajavideosrc element for AJA capture cards
has support for extracting closed captions
The rsclosedcaption plugin in the Rust plugins collection includes a
MacCaption (MCC) file parser and encoder.
New Elements
- overlaycomposition: New element that allows applications to draw
GstVideoOverlayCompositions on a stream. The element will emit the
"draw" signal for each video buffer, and the application then
generates an overlay for that frame (or not). This is much more
performant than e.g. cairooverlay for many use cases, e.g. because
pixel format conversions can be avoided or the blitting of the
overlay can be delegated to downstream elements (such as
gloverlaycompositor). It’s particularly useful for cases where only
a small section of the video frame should be drawn on. A sketch of
hooking up the "draw" signal follows this list.
- gloverlaycompositor: New OpenGL-based compositor element that
flattens any overlays from GstVideoOverlayCompositionMetas into the
video stream. This element is also always part of glimagesink.
- glalpha: New element that adds an alpha channel to a video stream.
The values of the alpha channel can either be set to a constant or
can be dynamically calculated via chroma keying. It is similar to
the existing alpha element but based on OpenGL. Calculations are
done in floating point so results may not be identical to the output
of the existing alpha element.
- rtpfunnel funnels together RTP streams into a single session. Use
cases include multiplexing and bundle. webrtcbin uses it to
implement BUNDLE support.
- testsrcbin is a source element that provides an audio and/or video
stream and also announces them using the recently-introduced
GstStream API. This is useful for testing elements such as playbin3
or uridecodebin3 etc.
- New closed caption elements: cccombiner, ccextractor, ccconverter,
line21encoder, line21decoder and cc708overlay (see above)
- wpesrc: new source element acting as a Web Browser based on WebKit
WPE
- Two new OpenCV-based elements: cameracalibrate and cameraundistort
that can communicate to figure out distortion correction parameters
for a camera and correct for the distortion.
- New sctp plugin based on usrsctp with sctpenc and sctpdec elements.
These elements are used inside webrtcbin for implementing data
channels.
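As mentioned in the overlaycomposition item above, here is a rough
sketch of hooking up its "draw" signal. It assumes the callback
receives a GstSample for each frame and returns a
GstVideoOverlayComposition (or NULL to draw nothing); the 64x64 ARGB
pixel buffer, which must carry a GstVideoMeta, is purely hypothetical,
and the exact callback signature should be checked against the
element documentation:

    #include <gst/video/video.h>

    /* user_data is assumed to be a 64x64 ARGB GstBuffer carrying a
     * GstVideoMeta that describes its layout */
    static GstVideoOverlayComposition *
    on_draw (GstElement * overlay, GstSample * sample, gpointer user_data)
    {
      GstBuffer *pixels = GST_BUFFER (user_data);
      GstVideoOverlayRectangle *rect;
      GstVideoOverlayComposition *comp;

      /* render the overlay at (16, 16) without scaling */
      rect = gst_video_overlay_rectangle_new_raw (pixels, 16, 16, 64, 64,
          GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
      comp = gst_video_overlay_composition_new (rect);
      gst_video_overlay_rectangle_unref (rect);

      /* ownership of the composition is assumed to pass to the element */
      return comp;
    }

    /* somewhere in the application setup:
     *   g_signal_connect (overlaycomposition, "draw",
     *       G_CALLBACK (on_draw), pixels);
     */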
New element features and additions
- playbin3, playbin and playsink have gained a new "text-offset"
property to adjust the positioning of the selected subtitle stream
vis-a-vis the audio and video streams. This uses subtitleoverlay’s
new "subtitle-ts-offset" property. GstPlayer has gained matching API
for this, namely gst_player_get_text_video_offset().
- playbin3 buffering improvements: in network playback scenarios there
may be multiple inputs to decodebin3, and buffering will be done
before decodebin3 using queue2 or downloadbuffer elements inside
urisourcebin. Since this is before any parsers or demuxers there may
not be any bitrate information available for the various streams, so
it was difficult to configure the buffering there smartly within
global constraints. This was improved now: The queue2 elements
inside urisourcebin will now use the new bitrate query to figure out
a bitrate estimate for the stream if no bitrate was provided by
upstream, and urisourcebin will use the bitrates of the individual
queues to distribute the globally-set "buffer-size" budget in bytes
to the various queues. urisourcebin also gained "low-watermark" and
"high-watermark" properties which will be proxied to the internal
queues, as well as a read-only "statistics" property which allows
querying of the minimum/maximum/average byte and time levels of the
queues inside the urisourcebin in question.
- splitmuxsink has gained a couple of new features:
- new "async-finalize" mode: This mode is useful for muxers or
outputs that can take a long time to finalize a file. Instead of
blocking the whole upstream pipeline while the muxer is doing
its stuff, we can unlink it and spawn a new muxer + sink
combination to continue running normally. This requires us to
receive the muxer and sink (if needed) as factories via the new
"muxer-factory" and "sink-factory" properties, optionally
accompanied by their respective properties structures (set via
the new "muxer-properties" and "sink-properties" properties).
There are also new "muxer-added" and "sink-added" signals in
case custom code has to be run to configure them.
- "split-at-running-time" action signal: When called by the user,
this action signal ends the current file (and starts a new one)
as soon as the given running time is reached. If called multiple
times, running times are queued up and processed in the order
they were given.
- "split-after" action signal to finish outputting the current GOP
to the current file and then start a new file as soon as the GOP
is finished and a new GOP is opened (unlike the existing
"split-now" which immediately finishes the current file and
writes the current GOP into the next newly-started file).
- "reset-muxer" property: when unset, the muxer is reset using
flush events instead of setting its state to NULL and back. This
means the muxer can keep state across resets, e.g. mpegtsmux
will keep the continuity counter continuous across segments as
required by hlssink2.
- qtdemux gained PIFF track encryption box support in addition to the
already-existing PIFF sample encryption support, and also allows
applications to select which encryption system to use via a
"drm-preferred-decryption-system-id" context in case there are
multiple options.
- qtmux: the "start-gap-threshold" property now determines whether an
edit list will be created to account for small gaps or offsets at
the beginning of a stream in case the start timestamps of tracks
don’t line up perfectly. Previously the threshold was hard-coded to
1% of the (video) frame duration; now it is 0 by default (so an edit
list will be created even for small differences), but fully
configurable.
- rtpjitterbuffer has improved end-of-stream handling
- rtpmp4vpay will be preferred over rtpmp4gpay for MPEG-4 video in
autoplugging scenarios now
- rtspsrc now allows applications to send RTSP SET_PARAMETER and
GET_PARAMETER requests using action signals.
- rtspsrc has a small (100ms) configurable teardown delay by default
to try and make sure an RTSP TEARDOWN request gets sent out when the
source element shuts down. This will block the downward PAUSED to
READY state change for a short time, but can be disabled where it’s
a problem. Some servers only allow a limited number of concurrent
clients, so if no proper TEARDOWN is sent new clients may have
problems connecting to the server for a while.
- souphttpsrc behaves better with low bitrate streams now. Before it
would increase the read block size too quickly which could lead to
it not reading any data from the socket for a very long time with
low bitrate streams that are output live downstream. This could lead
to servers kicking off the client.
- filesink: do internal buffering to avoid performance regression with
small writes since we bypass libc buffering by using writev()
instead of fwrite()
- identity: add "eos-after" property and fix "error-after" property
when the element is reused
- input-selector: lets context queries pass through, so that
e.g. upstream OpenGL elements can use contexts and displays
advertised by downstream elements
- queue2: avoid ping-pong between 0% and 100% buffering messages if
upstream is pushing buffers larger than one of its limits, plus
performance optimisations
- opusdec: new "phase-inversion" property to control phase inversion.
When enabled, this will slightly increase stereo quality, but
produces a stream that when downmixed to mono will suffer audio
distortions.
- The x265enc HEVC encoder also exposes a "key-int-max" property to
configure the maximum allowed GOP size now.
- decklinkvideosink has seen stability improvements for long-running
pipelines (potential crash due to overflow of leaked clock refcount)
and clock-slaving improvements when performing flushing seeks
(causing stalls in the output timeline), pausing and/or buffering.
- srtpdec, srtpenc: add support for MKIs which allow multiple keys to
be used with a single SRTP stream
- srtpdec, srtpenc: add support for AES-GCM and also add support for
it in gst-rtsp-server and rtspsrc.
- The srt Secure Reliable Transport plugin has integrated server and
client elements srt{client,server}{src,sink} into one (srtsrc and
srtsink), since SRT connection mode can be changed by uri
parameters.
- h264parse and h265parse will handle SEI recovery point messages and
mark recovery points as keyframes as well (in addition to IDR
frames)
- webrtcbin: "add-turn-server" action signal to pass multiple ICE
relays (TURN servers); see the sketch after this list.
- The removesilence element has received various new features and
properties, such as a "threshold" property, detecting silence only
after minimum silence time/buffers, a "silent" property to control
bus message notifications as well as a "squash" property.
- AOMedia AV1 decoder gained support for 10/12bit decoding whilst the
AV1 encoder supports more image formats and subsamplings now and
acquired support for rate control and profile related configuration.
- The Fraunhofer fdkaac plugin can now be built against the 2.0.0
version API and has improved multichannel support
- kmssink now supports unpadded 24-bit RGB and can configure mode
setting from video info, which enables display of multi-planar
formats such as I420 or NV12 with modesetting. It has also gained a
number of new properties: The "restore-crtc" property does what it
says on the tin and is enabled by default. "plane-properties" and
"connector-properties" can be used to pass custom properties to the
DRM.
- waylandsink has a "fullscreen" property now and supports the
XDG-Shell protocol.
- decklinkvideosink, decklinkvideosrc support selecting between
half/full duplex
- The vulkan plugin gained support for macOS and iOS via MoltenVK in
addition to the existing support for X11 and Wayland
- imagefreeze has a new num-buffers property to limit the number of
buffers that are produced and to send an EOS event afterwards
- webrtcbin has a new, introspectable get-transceiver signal in
addition to the old get-transceivers signal that couldn’t be used
from bindings
- Support for per-element latency information was added to the latency
tracer
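As mentioned in the webrtcbin item above, a minimal sketch of
registering an extra TURN relay via the "add-turn-server" action
signal (the URI is hypothetical, and the signal is assumed to take a
URI string and return a boolean):

    #include <gst/gst.h>

    static void
    add_turn_server (GstElement * webrtcbin)
    {
      gboolean ret = FALSE;

      /* can be called multiple times, once per TURN relay */
      g_signal_emit_by_name (webrtcbin, "add-turn-server",
          "turn://user:pass@turn.example.com:3478", &ret);
      if (!ret)
        g_warning ("failed to add TURN server");
    }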
Plugin and library moves
- The stereo element was moved from -bad into the existing audiofx
plugin in -good. If you get duplicate type registration warnings
when upgrading, check that you don’t have a stale stereo plugin lying
about somewhere.
GstVideoAggregator, compositor, and OpenGL mixer elements moved from -bad to -base
GstVideoAggregator is a new base class for raw video mixers and muxers
and is based on GstAggregator. It provides defined-latency mixing of raw
video inputs and ensures that the pipeline won’t stall even if one of
the input streams stops producing data.
As part of the move to stabilise the API there were some last-minute API
changes and clean-ups, but those should mostly affect internal elements.
Most notably, the "ignore-eos" pad property was renamed to
"repeat-after-eos" and the conversion code was moved to a
GstVideoAggregatorConvertPad subclass to avoid code duplication, make
things less awkward for subclasses like the OpenGL-based video mixer,
and make the API more consistent with the audio aggregator API.
It is used by the compositor element, a replacement for ‘videomixer’,
which did not handle live inputs very well. compositor should behave
much better in that respect and generally behave as one would expect
in most scenarios.
The compositor element has gained support for per-pad blending mode
operators (SOURCE, OVER, ADD) which determines what operator to use for
blending this pad over the previous ones. This can be used to implement
crossfading and the available operators can be extended in the future as
needed.
A number of OpenGL-based video mixer elements (glvideomixer, glmixerbin,
glvideomixerelement, glstereomix, glmosaic) which are built on top of
GstVideoAggregator have also been moved from -bad to -base now. These
elements have been merged into the existing OpenGL plugin, so if you get
duplicate type registration warnings when upgrading, check that you
don’t have a stale openglmixers plugin lying about somewhere.
Plugin removals
The following plugins have been removed from gst-plugins-bad:
- The experimental daala plugin has been removed, since it’s not so
useful now that all effort is focused on AV1 instead, and it had to
be enabled explicitly with --enable-experimental anyway.
- The spc plugin has been removed. It has been replaced by the gme
plugin.
- The acmmp3dec and acmenc plugins for Windows have been removed. ACM
is an ancient legacy API and there was no point in keeping the
plugins around for a licensed MP3 decoder now that the MP3 patents
have expired and we have a decoder in -good. We also didn’t ship
these in our cerbero-built Windows packages, so it’s unlikely that
they’ll be missed.
Miscellaneous API additions
- GstBitwriter: new generic bit writer API to complement the existing
bit reader
- gst_buffer_new_wrapped_bytes() creates a buffer wrapping a GBytes
- gst_caps_set_features_simple() sets a caps feature on all the
structures of a GstCaps
- New GST_QUERY_BITRATE query: This allows determining from downstream
what the expected bitrate of a stream may be, which is useful in
queue2 for setting time-based limits when upstream does not provide
timing information. tsdemux, qtdemux and matroskademux have basic
support for this query on their sink pads. See the sketch after this
list.
- elements: there is a new “Hardware” class specifier. Elements
interacting with hardware devices should specify this classifier in
their element factory class metadata. This is useful to advertise
because, for example, one might need to put such elements into the
READY state to test whether the hardware is present in the system.
- protection: Add a new definition for unspecified system protection,
GST_PROTECTION_UNSPECIFIED_SYSTEM_ID
- take functions for various mini objects that didn’t have them yet:
gst_query_take(), gst_message_take(), gst_tag_list_take(),
gst_buffer_list_take(). Unlike the various _replace() functions
_take() does not increase the reference count but takes ownership of
the mini object passed.
- clear functions for various mini object types and GstObject which
unref the object or mini object (if non-NULL) and set the variable
pointed to to NULL: gst_clear_structure(), gst_clear_tag_list(),
gst_clear_query(), gst_clear_message(), gst_clear_event(),
gst_clear_caps(), gst_clear_buffer_list(), gst_clear_buffer(),
gst_clear_mini_object(), gst_clear_object()
- miniobject: new API gst_mini_object_add_parent() and
gst_mini_object_remove_parent() to set parent pointers on mini
objects to ensure correct writability: Every container of
miniobjects now needs to store itself as parent in the child object,
and remove itself again later. A mini object is then only writable
if there is at most one parent, that parent is writable itself, and
the reference count of the mini object is 1. GstBuffer (for
memories), GstBufferList (for buffers), GstSample (for caps, buffer,
bufferlist), and GstVideoOverlayComposition were updated
accordingly. Without this it was possible to have e.g. a buffer list
with a refcount of 2 used in two places at once that both modify the
same buffer with refcount 1 at the same time wrongly thinking it is
writable even though it’s really not.
- poll: add API to watch for POLLPRI and stop treating POLLPRI as a
read. This is useful to wait for video4linux events which are
signalled via POLLPRI.
- sample: new API to update the contents of a GstSample and make it
writable: gst_sample_set_buffer(), gst_sample_set_caps(),
gst_sample_set_segment(), gst_sample_set_info(), plus
gst_sample_is_writable() and gst_sample_make_writable(). This makes
it possible to reuse a sample object and avoid unnecessary memory
allocations, for example in appsink.
- ClockIDs now keep a weak reference to the underlying clock to avoid
crashes in basesink in corner cases where a clock goes away while
the ClockID is still in use, plus some new API
(gst_clock_id_get_clock(), gst_clock_id_uses_clock()) to check the
clock a ClockID is linked to.
- The GstCheck unit test library gained a
fail_unless_equals_clocktime() convenience macro as well as some new
GstHarness API for proposing meta APIs from the allocation
query: gst_harness_add_propose_allocation_meta(). ASSERT_CRITICAL()
checks in unit tests are now skipped if GStreamer was compiled with
GST_DISABLE_GLIB_CHECKS.
- gst_audio_buffer_truncate() convenience function to truncate a raw
audio buffer
- GstDiscoverer has support for caching the results of discovery in
the default cache directory. This can be enabled with the use-cache
property and is disabled by default.
- GstMeta that are attached to GstBuffers are now always stored in the
order in which they were added.
- Additional support for signalling ONVIF-specific features was
added: the SEEK event can store a trickmode-interval now and support
for the Rate-Control and Frames RTSP headers was added to the RTSP
library.
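As mentioned in the GST_QUERY_BITRATE item above, a minimal sketch of
how an element could ask downstream for a bitrate estimate (the
query_downstream_bitrate helper is hypothetical):

    #include <gst/gst.h>

    static guint
    query_downstream_bitrate (GstPad * srcpad)
    {
      GstQuery *query = gst_query_new_bitrate ();
      guint bitrate = 0;

      /* returns FALSE if no downstream element answered the query */
      if (gst_pad_peer_query (srcpad, query))
        gst_query_parse_bitrate (query, &bitrate);

      gst_query_unref (query);
      return bitrate;   /* nominal bitrate in bits per second, 0 if unknown */
    }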
Miscellaneous performance and memory optimisations
As always there have been many performance and memory usage improvements
across all components and modules. Some of them (such as dmabuf
import/export) have already been mentioned elsewhere so won’t be
repeated here.
The following list is only a small snapshot of some of the more
interesting optimisations that haven’t been mentioned in other contexts
yet:
- The GstVideoEncoder and GstVideoDecoder base classes now release the
STREAM_LOCK when pushing out buffers, which means (multi-threaded)
encoders and decoders can now receive and continue to process input
buffers whilst waiting for downstream elements in the pipeline to
process the buffer that was pushed out. This increases throughput
and reduces processing latency, especially for
hardware-accelerated encoder/decoder elements.
- GstQueueArray has seen a few API additions
(gst_queue_array_peek_nth(), gst_queue_array_set_clear_func(),
gst_queue_array_clear()) so that it can be used in other places like
GstAdapter instead of a GList, which reduces allocations and
improves performance.
- appsink now reuses the sample object in pull_sample() if possible
- rtpsession only starts the RTCP thread when it’s actually needed now
- udpsrc uses a buffer pool now and the GstUdpSrc object structure was
optimised for better cache performance
GstPlayer
- API was added to fine-tune the synchronisation offset between
subtitles and video
Miscellaneous changes
- As a result of moving to newer FFmpeg APIs, encoder and decoder
elements exposed by the GStreamer FFmpeg wrapper plugin (gst-libav)
may have seen possibly incompatible changes to property names and/or
types, and not all properties exposed might be functional. We are
still reviewing the new properties and aim to minimise breaking
changes at least for the most commonly-used properties, so please
report any issues you run into!
OpenGL integration
- The OpenGL mixer elements have been moved from -bad to
gst-plugins-base (see above)
- The Mesa GBM backend now supports headless mode
- gloverlaycompositor: New OpenGL-based compositor element that
flattens any overlays from GstVideoOverlayCompositionMetas into the
video stream.
- glalpha: New element that adds an alpha channel to a video stream.
The values of the alpha channel can either be set to a constant or
can be dynamically calculated via chroma keying. It is similar to
the existing alpha element but based on OpenGL. Calculations are
done in floating point so results may not be identical to the output
of the existing alpha element.
- glupload: Implement direct dmabuf uploader, the idea being that some
GPUs (like the Vivante series) can actually perform the YUV->RGB
conversion internally, so no custom conversion shaders are needed.
To make use of this feature, we need an additional uploader that can
import DMABUF FDs and also directly pass the pixel format, relying
on the GPU to do the conversion.
- The OpenGL library no longer restores the OpenGL viewport. This is a
performance optimization to not require performing multiple
expensive glGet*() function calls per frame. This affects any
application or plugin use of the following functions and objects:
- glcolorconvert library object (not the element)
- glviewconvert library object (not the element)
- gst_gl_framebuffer_draw_to_texture()
- custom GstGLWindow implementations
Tracing framework and debugging improvements
- There is now a GDB PRETTY PRINTER FOR VARIOUS GSTREAMER TYPES: For
GstObject pointers the type and name are added, e.g.
0x5555557e4110 [GstDecodeBin|decodebin0]. For GstMiniObject pointers
the object type is added, e.g. 0x7fffe001fc50 [GstBuffer]. For
GstClockTime and GstClockTimeDiff the time is also printed in human
readable form, e.g. 150116219955 [+0:02:30.116219955].
- GDB EXTENSION WITH TWO CUSTOM GDB COMMANDS gst-dot AND gst-print:
- gst-dot creates dot files that are very close to what
GST_DEBUG_BIN_TO_DOT_FILE() produces, but object properties and
buffer contents such as codec-data in caps are not available.
- gst-print produces high-level information about a GStreamer
object. This is currently limited to pads for GstElements and
events for the pads.
- gst_structure_to_string() now serialises the actual value of
pointers when serialising GstStructures instead of claiming they’re
NULL. This makes debug logging in various places less confusing,
because it’s clear now that structure fields actually hold valid
objects. Such object pointer values will never be deserialised
however.
Tools
- gst-inspect-1.0 has coloured output now and will automatically use a
pager if the output does not fit on a page. This only works in a
UNIX environment and if the output is not piped, and on Windows 10
build 16257 or newer. If you don’t like the colours you can disable
them by setting the GST_INSPECT_NO_COLORS=1 environment variable or
passing the --no-color command line option.
GStreamer RTSP server
- Improved backlog handling when using TCP interleaved for data
transport. Before there was a fixed maximum size for backlog
messages, which was prone to deadlocks and made it difficult to
control memory usage with the watch backlog. The RTSP server now
limits queued TCP data messages to one per stream, moving queuing of
the data into the pipeline and leaving the RTSP connection
responsive to RTSP messages in both directions, preventing all those
problems.
- Initial ULP Forward Error Correction support in rtspclientsink and
for RECORD mode in the server.
- API to explicitly enable retransmission requests (RTX)
- Lots of multicast-related fixes
- rtsp-auth: Add support for parsing .htdigest files
GStreamer VAAPI
- Support for sharing the application’s Wayland display through the
context mechanism, so the application can pass its own wl_display to
be used for VAAPI display creation.
- A lot of work to support new Intel hardware using media-driver as VA
backend.
- On non-x86 devices, the VAAPI display can be instantiated through
DRM with no PCI bus. This enables the usage of the
libva-v4l2-request driver.
- Added support for the XDG-shell protocol as a replacement for
wl_shell, which is currently deprecated. This change adds the
wayland-protocols package as a dependency.
- GstVaapiFilter, GstVaapiWindow, and GstVaapiDecoder classes now
inherit from GstObject, gaining all of GStreamer’s instrumentation
support.
- The metadata now specifies the plugin as Hardware class.
- H264 decoder is more stable with problematic streams.
- The H265 decoder gained support for the main-422-10 (P010_10LE),
main-444 (AYUV) and main-444-10 (Y410) profiles
- JPEG decoder handles dynamic resolution changes.
- More specification adherence in H264 and H265 encoders.
GStreamer OMX
- Added support for the NV16 format on video encoder inputs.
- Video decoders now handle the ALLOCATION query to tell upstream
about the number of buffers they require. Video encoders will also
use this query to adjust their number of allocated buffers
preventing starvation when using dynamic buffer mode.
- The OMX_PERFORMANCE debug category has been renamed to OMX_API_TRACE
and can now be used to track a wider variety of interactions
between OMX and GStreamer.
- Video encoders will now detect frame-rate-only changes and will
inform OMX about it rather than doing a full format reset.
- Various Zynq UltraScale+ specific improvements:
- Video encoders are now able to import dmabuf from upstream.
- Support for HEVC range extension profiles and more AVC profiles.
- We can now request video encoders to generate an IDR using the
force key unit event.
GStreamer Editing Services and NLE
- Added a gesdemux element: an auto-pluggable element that allows
decoding edit-list-like files supported by GES
- Added gessrc which wraps a GESTimeline as a standard source element
(implementing the ges protocol handler)
- Added basic support for the videorate::rate property, potentially
allowing changing the playback speed
- Layer priorities are now fully automatic and layers should be moved
with the new ges_timeline_move_layer method; ges_layer_set_priority
is now deprecated.
- Added ges_timeline_element_get_layer_priority so all information
about a GESTimelineElement’s position in the timeline can easily be
retrieved
- GESVideoSource now automatically orients images if an orientation is
defined in a meta (this can be overridden).
- Added some PyGObject overrides to make the API more pythonic
- The threading model has been made more explicit, with safeguards to
make sure non-thread-safe APIs are not used from the wrong threads.
It is also now possible to properly control from which thread the
API should be used.
- Optimized GESClip and GESTrackElement creation
- Added a way to compile out the old, unused and deprecated
GESPitiviFormatter
- Reimplemented the timeline editing API, making it faster and the
code much more maintainable
- Simplified usage of nlecomposition outside GES by removing quirks in
its API usage and removing the need to treat it specially from an
application perspective.
- ges-launch-1.0:
- Added support for adding titles to the timeline
- Enhanced the help, auto-generating it from the code
- Deprecated ges_timeline_load_from_uri, as loading a timeline should
now be done through a project
- MANY leaks have been plugged and the unit testsuite is now “leak
free”
GStreamer validate
- Added an action type to verify the checksum of the sink last-sample
- Added an include keyword to validate scenarios
- Added the notion of variables in scenarios, with the set-vars keyword
- Started adding support for “performance”-style tests by allowing the
number of dropped buffers or the minimum buffer frequency on a
specific pad to be defined
- Added a validateflow plugin which allows defining the data flow to
be seen on a particular pad and verifying that following runs match
the expectations
- Added support for appsrc based test definition so we can instrument
the data pushed into the pipeline from scenarios
- Added a mockdecryptor element, allowing tests on encrypted files;
the element can potentially be instrumented with a validate
scenario
- gst-validate-launcher:
- Cleaned up output
- Changed the default to “mute” tests, as users don’t expect
hundreds of windows to show up when running the testsuite
- Fixed the outputted xunit files to be compatible with GitLab
- Added support to run tests on media files in push mode (using
pushfile://)
- Added support for running inside gst-build
- Added support for running ssim tests on rendered files
- Added a way to easily define tests on pipelines through a simple
.json file
- Added a Python app to easily run Python testsuites, reusing all
the launcher features
- Added flatpak knowledge so we can print backtraces even when
running from within flatpak
- Added a way to automatically generate “known issues”
suppression lines
- Added a way to rerun tests to check if they are flaky and added
a way to tolerate tests known to be flaky
- Added a way to output HTML log files
GStreamer Python Bindings
- add binding for gst_pad_set_caps()
- pygobject dependency requirement was bumped to >= 3.8
- new audiotestsrc, audioplot, and mixer plugin examples, and a
dynamic pipeline example
GStreamer C# Bindings
- bindings for the GstWebRTC library
GStreamer Rust Bindings
The GStreamer Rust bindings are now officially part of the GStreamer
project and are also maintained in the GStreamer GitLab.
The releases will generally not be synchronized with the releases of
other GStreamer parts due to dependencies on other projects.
Also unlike the other GStreamer libraries, the bindings will not commit
to full API stability but instead will follow the approach that is
generally taken by Rust projects, e.g.:
1) 0.12.X will be completely API compatible with all other 0.12.Y
versions.
2) 0.12.X+1 will contain bugfixes and compatible new feature additions.
3) 0.13.0 will _not_ be backwards compatible with 0.12.X but projects
will be able to stay at 0.12.X without any problems as long as they
don’t need newer features.
The current stable release is 0.12.2 and the next release series will be
0.13, probably around March 2019.
At this point the bindings cover most of GStreamer core (except for most
notably GstAllocator and GstMemory), and most parts of the app, audio,
base, check, editing-services, gl, net, pbutils, player, rtsp,
rtsp-server, sdp, video and webrtc libraries.
Also included is support for creating subclasses of the following types
and writing GStreamer plugins:
- gst::Element
- gst::Bin and gst::Pipeline
- gst::URIHandler and gst::ChildProxy
- gst::Pad, gst::GhostPad
- gst_base::Aggregator and gst_base::AggregatorPad
- gst_base::BaseSrc and gst_base::BaseSink
- gst_base::BaseTransform
Changes to 0.12.X since 0.12.0
Fixed
- PTP clock constructor actually creates a PTP clock instead of an NTP clock
Added
- Bindings for GStreamer Editing Services
- Bindings for GStreamer Check testing library
- Bindings for the encoding profile API (encodebin)
- VideoFrame, VideoInfo, AudioInfo, StructureRef implement Send and
Sync now
- VideoFrame has a function to get the raw FFI pointer
- From impls for converting the Error/Success enums to the combined
enums like FlowReturn
- Bin-to-dot file functions were added to the Bin trait
- gst_base::Adapter implements SendUnique now
- More complete bindings for the gst_video::VideoOverlay interface,
especially
gst_video::is_video_overlay_prepare_window_handle_message()
Changed
- All references were updated from GitHub to freedesktop.org GitLab
- Fix various links in the README.md
- Link to the correct location for the documentation
- Remove GitLab badge as that only works with gitlab.com currently