File wf-recorder-0.4.0+git0.obscpio of Package wf-recorder
--- wf-recorder-0.4.0+git0/.github/workflows/build.yaml ---

name: Build
on: [push, pull_request]
jobs:
  linux:
    runs-on: ubuntu-latest
    container: registry.fedoraproject.org/fedora:latest
    steps:
      - name: Set up DNF download cache
        id: dnf-cache
        uses: actions/cache@v3
        with:
          path: /var/cache/dnf
          key: ${{ runner.os }}-dnfcache
      - name: Install pre-requisites
        run: dnf --assumeyes --setopt=install_weak_deps=False install
          gcc-c++ meson /usr/bin/git /usr/bin/wayland-scanner
          'pkgconfig(wayland-client)' 'pkgconfig(wayland-protocols)' 'pkgconfig(libpulse-simple)'
          'pkgconfig(libavutil)' 'pkgconfig(libavcodec)' 'pkgconfig(libavformat)'
          'pkgconfig(libavdevice)' 'pkgconfig(libavfilter)' 'pkgconfig(libswresample)'
          'pkgconfig(gbm)'
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0 # Fetch the full history, needed for git rev-parse below
      - run: git config --global --add safe.directory '*' # Needed for git rev-parse
      - name: meson configure
        run: meson ./Build
      - name: compile with ninja
        run: ninja -C ./Build
--- wf-recorder-0.4.0+git0/LICENSE ---

The MIT License (MIT)
Copyright (c) 2019 Ilia Bozhinov
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--- wf-recorder-0.4.0+git0/README.md ---

# wf-recorder
wf-recorder is a utility program for screen recording of `wlroots`-based compositors (more specifically, those that support `wlr-screencopy-v1` and `xdg-output`). Its dependencies are `ffmpeg`, `wayland-client` and `wayland-protocols`.
# Installation
## Arch Linux
Arch users can install wf-recorder from the Community repo.
```
pacman -S wf-recorder
```
## Artix Linux
Artix users can install wf-recorder from the official repositories.
```
pacman -S wf-recorder
```
## Gentoo Linux
Gentoo users can install wf-recorder from the official (`::gentoo`) repository.
## Void Linux
Void users can install wf-recorder from the official repositories.
```
xbps-install -S wf-recorder
```
## Fedora Linux
Fedora users can install wf-recorder from the official repositories.
```
sudo dnf install wf-recorder
```
## Debian GNU/Linux
Debian users can install wf-recorder from the official repositories.
```
apt install wf-recorder
```
## From Source
### Install Dependencies
#### Ubuntu
```
sudo apt install g++ meson libwayland-dev wayland-protocols libavutil-dev libavcodec-dev libavformat-dev libavfilter-dev libswresample-dev libswscale-dev libgbm-dev libpulse-dev
```
#### Fedora
```
sudo dnf install gcc-c++ meson wayland-devel wayland-protocols-devel mesa-libgbm-devel ffmpeg-free-devel pulseaudio-libs-devel
```
### Download & Build
```
git clone https://github.com/ammen99/wf-recorder.git && cd wf-recorder
meson build --prefix=/usr --buildtype=release
ninja -C build
```
Optionally configure with `-Ddefault_codec='codec'`. The default is libx264. Now you can just run `./build/wf-recorder` or install it with `sudo ninja -C build install`.
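For example, to make `libvpx` the default instead (purely illustrative; any encoder name listed by `ffmpeg -encoders` should work):
```
meson configure build -Ddefault_codec='libvpx'
```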
The man page can be read with `man ./manpage/wf-recorder.1`.
# Usage
In its simplest form, run `wf-recorder` to start recording and use Ctrl+C to stop. This will create a file called `recording.mp4` in the current working directory using the default codec.
Use `-f <filename>` to specify the output file. In case of multiple outputs, you'll first be prompted to select the output you want to record. If you know the output name beforehand, you can use the `-o <output name>` option.
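For example (here `DP-1` is a placeholder output name; use one reported by your compositor):
```
wf-recorder -f recording.mkv -o DP-1
```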
To select a specific part of the screen you can either use `-g <geometry>`, or use [slurp](https://github.com/emersion/slurp) for interactive selection of the screen area that will be recorded:
```
wf-recorder -g "$(slurp)"
```
You can record screen and sound simultaneously with
```
wf-recorder --audio --file=recording_with_audio.mp4
```
To specify an audio device, use the `-a<device>` or `--audio=<device>` options.
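Available PulseAudio sources can be listed with:
```
pactl list sources | grep Name
```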
To specify a video codec, use the `-c <codec>` option. To modify codec parameters, use `-p <option_name>=<option_value>`.
You can also specify an audio codec, using `-C <codec>`. Alternatively, the long form `--audio-codec` can be used.
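For example, to record with x264 using a slower preset and an explicit quality level, plus Opus audio (the parameters shown are standard libx264 options, used here only for illustration):
```
wf-recorder -c libx264 -p preset=slow -p crf=18 -C libopus -f recording.mkv
```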
You can use the following command to check all available video codecs
```
ffmpeg -hide_banner -encoders | grep -E '^ V' | grep -F '(codec' | cut -c 8- | sort
```
and the following for audio codecs
```
ffmpeg -hide_banner -encoders | grep -E '^ A' | grep -F '(codec' | cut -c 8- | sort
```
Use `ffmpeg -h encoder=<name>`, `ffmpeg -h filter=<name>` or `ffmpeg -h muxer=<name>` to get details about a specific encoder, filter or muxer.
To set a specific output format, use the `--muxer` option. For example, to output to a video4linux2 loopback you might use:
```
wf-recorder --muxer=v4l2 --codec=rawvideo --file=/dev/video2
```
To use GPU encoding, use a VAAPI codec (for example `h264_vaapi`) and specify a GPU device to use with the `-d` option:
```
wf-recorder -f test-vaapi.mkv -c h264_vaapi -d /dev/dri/renderD128
```
Some drivers report support for rgb0 data for vaapi input but really only support yuv planar formats. In this case, use the `-x yuv420p` or `--pixel-format yuv420p` option in addition to the vaapi options to convert the data to yuv planar data before sending it to the GPU.
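Putting the above together, a GPU-encoded recording with the software pixel format conversion enabled might look like this (the render node path varies between systems):
```
wf-recorder -f test-vaapi.mkv -c h264_vaapi -d /dev/dri/renderD128 -x yuv420p
```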
--- wf-recorder-0.4.0+git0/config.h.in ---

#pragma once
#define DEFAULT_CODEC "@default_codec@"
#define DEFAULT_AUDIO_CODEC "@default_audio_codec@"
#define DEFAULT_AUDIO_SAMPLE_RATE @default_audio_sample_rate@
#define DEFAULT_CONTAINER_FORMAT "@default_container_format@"
#define FALLBACK_AUDIO_SAMPLE_FMT "@fallback_audio_sample_fmt@"
#mesondefine HAVE_PULSE
#mesondefine HAVE_OPENCL
#mesondefine HAVE_LIBAVDEVICE
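/* With the defaults from meson_options.txt, the generated config.h looks
 * approximately like this (illustrative; the HAVE_* lines depend on which
 * optional dependencies meson finds):
 *
 *   #define DEFAULT_CODEC "libx264"
 *   #define DEFAULT_AUDIO_CODEC "aac"
 *   #define DEFAULT_AUDIO_SAMPLE_RATE 48000
 *   #define DEFAULT_CONTAINER_FORMAT "mkv"
 *   #define FALLBACK_AUDIO_SAMPLE_FMT "s16"
 *   #define HAVE_PULSE
 */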
--- wf-recorder-0.4.0+git0/manpage/wf-recorder.1 ---

.Dd $Mdocdate: July 30 2022 $
.Dt WF-RECORDER 1
.Os
.Sh NAME
.Nm wf-recorder
.Nd simple screen recording program for wlroots-based compositors
.Sh SYNOPSIS
.Nm wf-recorder
.Op Fl abBcCdDfFghlmopPrRvxX
.Op Fl a , -audio Op Ar =DEVICE
.Op Fl b , -bframes Ar max_b_frames
.Op Fl B , -buffrate Ar buffrate
.Op Fl c , -codec Ar output_codec
.Op Fl r , -framerate Ar framerate
.Op Fl d , -device Ar encoding_device
.Op Fl -no-dmabuf
.Op Fl D , -no-damage
.Op Fl f Ar filename.ext
.Op Fl F , -filter Ar filter_string
.Op Fl g , -geometry Ar geometry
.Op Fl h , -help
.Op Fl l , -log
.Op Fl m , -muxer Ar muxer
.Op Fl o , -output Ar output
.Op Fl p , -codec-param Op Ar option_param=option_value
.Op Fl v , -version
.Op Fl x , -pixel-format Ar pixel_format
.Op Fl C , -audio-codec Ar output_audio_codec
.Op Fl P , -audio-codec-param Op Ar option_param=option_value
.Op Fl R , -sample-rate Ar sample_rate
.Op Fl X , -sample-format Ar sample_format
.Sh DESCRIPTION
.Nm
is a tool built to record your screen on Wayland compositors.
It makes use of
.Sy wlr-screencopy
for capturing video and
.Xr ffmpeg 1
for encoding it.
.Pp
In its simplest form, run
.Nm
to start recording and use
.Ql Ctrl+C
to stop.
This will create a file called
.Ql recording.mp4
in the current working directory using the default
.Ar codec .
.Pp
The options are as follows:
.Pp
.Bl -tag -width Ds -compact
.It Fl a , -audio Op Ar =DEVICE
Starts recording the screen with audio.
.Pp
.Ar DEVICE
argument is optional.
In case you want to specify the PulseAudio device which will capture the audio,
you can run this command with the name of that device.
You can find your device by running
.D1 $ pactl list sources | grep Name
.Pp
.It Fl b , -bframes Ar max_b_frames
Sets the maximum number of B-Frames to use.
.It Fl B , -buffrate Ar buffrate
Tells the encoder a prediction of what framerate to expect.
This preserves VFR and solves the FPS limit issue of some encoders (like svt-av1).
Should be set to the same framerate as the display.
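For example, on a 60 Hz display (an illustrative invocation):
.Dl $ wf-recorder -B 60 -f recording.mkv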
.Pp
.It Fl c , -codec Ar output_codec
Specifies the codec of the video. Supports GIF output as well.
.Pp
To modify codec parameters, use
.Fl p Ar option_name=option_value
.Pp
.It Fl r , -framerate Ar framerate
Sets a hard constant framerate. Frames will be duplicated to reach it.
This makes the resulting video CFR and solves the FPS limit issue of some encoders.
.Pp
.It Fl d , -device Ar encoding_device
Selects the device to use when encoding the video.
.Pp
Some drivers report support for
.Ql rgb0
data for vaapi input but really only support yuv.
Use the
.Fl x Ar yuv420p
option in addition to the vaapi options to convert the
data in software, before sending it to the GPU.
.Pp
.It Fl -no-dmabuf
By default, wf-recorder will try to use only GPU buffers and copies if using a GPU encoder.
However, this can cause issues on some systems.
In such cases, this option will disable the GPU copy and force a CPU one.
.Pp
.It Fl D , -no-damage
By default, wf-recorder will request a new frame from the compositor
only when the screen updates. This results in a much smaller output
file, which however has a variable refresh rate. When this option is
on, wf-recorder does not use this optimization and continuously
records new frames, even if there are no updates on the screen.
.Pp
.It Fl f Ar filename.ext
By using the
.Fl f
option, the output file will have the name
.Ar filename.ext
and the file format will be determined by the provided extension.
If the extension is not recognized by your
.Xr ffmpeg 1
muxers, the command will fail.
.Pp
You can check the muxers that your
.Xr ffmpeg 1
installation supports by running
.Dl $ ffmpeg -muxers
.Pp
.It Fl F , -filter Ar filter_string
Set the ffmpeg filter to use.
VAAPI requires
.Ql scale_vaapi=format=nv12:out_range=full
to work.
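For example, a VAAPI recording with the required filter might look like this
(illustrative; adjust the codec and device to your system):
.Dl $ wf-recorder -c h264_vaapi -d /dev/dri/renderD128 -F scale_vaapi=format=nv12:out_range=full -f out.mkv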
.Pp
.It Fl g , -geometry Ar screen_geometry
Selects a specific part of the screen. The format is "x,y WxH".
.Pp
.It Fl h , -help
Prints the help screen.
.Pp
.It Fl l , -log
Generates a log on the current terminal. For debug purposes.
.Pp
.It Fl m , -muxer Ar muxer
Set the output format to a specific muxer instead of detecting it from the filename.
.Pp
.It Fl o , -output Ar output
Specify the output where the video is to be recorded.
.Pp
.It Fl p , -codec-param Op Ar option_name=option_value
Change the codec parameters.
.Pp
.It Fl v , -version
Print the version of wf-recorder.
.Pp
.It Fl x , -pixel-format Ar pixel_format
Set the output pixel format.
.Pp
List available formats using
.Dl $ ffmpeg -pix_fmts
.Pp
.It Fl C , -audio-codec Ar output_audio_codec
Specifies the codec of the audio.
.Pp
.It Fl P , -audio-codec-param Op Ar option_name=option_value
Change the audio codec parameters.
.Pp
.It Fl R , -sample-rate Ar sample_rate
Changes the audio sample rate, in Hz. The default value is 48000.
.Pp
.It Fl X , -sample-format Ar sample_format
Set the output audio sample format.
.Pp
List available formats using
.Dl $ ffmpeg -sample_fmts
.El
.Sh EXAMPLES
To select a specific part of the screen you can either use
.Fl g , -geometry Ar geometry
or
use https://github.com/emersion/slurp for interactive selection of the
screen area that will be recorded:
.Dl $ wf-recorder -g "$(slurp)"
.Pp
You can record screen and sound simultaneously with
.Dl $ wf-recorder --audio --file=recording_with_audio.mp4
.Pp
To specify an audio device, use the
.Fl a Ns Ar DEVICE
or
.Fl -audio Ns = Ns Ar DEVICE
options.
.Pp
To specify a
.Ar codec
use the
.Fl c Ar codec
option. To modify codec parameters, use
.Fl p
.Ar option_name=option_value .
.Pp
To set a specific output format, use the
.Fl m , -muxer
option. For example, to
output to a
.Sy video4linux2
loopback you might use:
.Dl $ wf-recorder --muxer=v4l2 --codec=rawvideo --file=/dev/video2
.Pp
To use GPU encoding, use a VAAPI codec, for example
.Ql h264_vaapi ,
and specify a GPU
device to use with the
.Fl d
option:
.Dl $ wf-recorder -f test-vaapi.mkv -c h264_vaapi -d /dev/dri/renderD128
.Pp
Some drivers report support for
.Ql rgb0
data for
.Ql vaapi
input but really only support yuv planar formats.
In this case, use the
.Fl x Ar yuv420p
option in addition to the
.Ql vaapi
options to convert the data to yuv planar data before sending it to the GPU.
.Sh SEE ALSO
.Xr ffmpeg 1 ,
.Xr pactl 1
--- wf-recorder-0.4.0+git0/meson.build ---

project(
    'wf-recorder',
    'c',
    'cpp',
    version: '0.4.0',
    license: 'MIT',
    meson_version: '>=0.54.0',
    default_options: [
        'cpp_std=c++11',
        'c_std=c11',
        'warning_level=2',
        'werror=false',
    ],
)
conf_data = configuration_data()
conf_data.set('default_codec', get_option('default_codec'))
conf_data.set('default_audio_codec', get_option('default_audio_codec'))
conf_data.set('default_audio_sample_rate', get_option('default_audio_sample_rate'))
conf_data.set('default_container_format', get_option('default_container_format'))
conf_data.set('fallback_audio_sample_fmt', get_option('fallback_audio_sample_fmt'))
version = '"@0@"'.format(meson.project_version())
git = find_program('git', native: true, required: false)
if git.found()
    # check: false, so that building from a tarball without git metadata
    # does not abort configuration; the return codes are inspected below.
    git_commit = run_command([git, 'rev-parse', '--short', 'HEAD'], check: false)
    git_branch = run_command([git, 'rev-parse', '--abbrev-ref', 'HEAD'], check: false)
    if git_commit.returncode() == 0 and git_branch.returncode() == 0
        # The adjacent C string literals (including __DATE__) are concatenated
        # by the compiler into a single version string.
        version = '"@0@-@1@ (" __DATE__ ", branch \'@2@\')"'.format(
            meson.project_version(),
            git_commit.stdout().strip(),
            git_branch.stdout().strip(),
        )
    endif
endif
add_project_arguments('-DWFRECORDER_VERSION=@0@'.format(version), language: 'cpp')
include_directories(['.'])
add_project_arguments(['-Wno-deprecated-declarations'], language: 'cpp')
project_sources = ['src/frame-writer.cpp', 'src/main.cpp', 'src/averr.c']
wayland_client = dependency('wayland-client', version: '>=1.20')
wayland_protos = dependency('wayland-protocols', version: '>=1.14')
pulse = dependency('libpulse-simple', required: get_option('pulse'))
if pulse.found()
    conf_data.set('HAVE_PULSE', true)
    project_sources += 'src/pulse.cpp'
endif
libavutil = dependency('libavutil')
libavcodec = dependency('libavcodec')
libavformat = dependency('libavformat')
libavdevice = dependency('libavdevice', required: false)
libavfilter = dependency('libavfilter')
swr = dependency('libswresample')
threads = dependency('threads')
gbm = dependency('gbm')
conf_data.set('HAVE_LIBAVDEVICE', libavdevice.found())
configure_file(input: 'config.h.in',
    output: 'config.h',
    configuration: conf_data)
install_data('manpage/wf-recorder.1',
    install_dir: join_paths(get_option('prefix'), get_option('mandir'), 'man1'))
subdir('proto')
dependencies = [
    wayland_client, wayland_protos,
    libavutil, libavcodec, libavformat, libavdevice, libavfilter,
    wf_protos, threads, pulse, swr, gbm,
]
executable('wf-recorder', project_sources,
    dependencies: dependencies,
    install: true)
--- wf-recorder-0.4.0+git0/meson_options.txt ---

option('default_codec', type: 'string', value: 'libx264', description: 'Codec that will be used by default')
option('default_audio_codec', type: 'string', value: 'aac', description: 'Audio codec that will be used by default')
option('default_audio_sample_rate', type: 'integer', value: 48000, description: 'Audio sample rate that will be used by default')
option('default_container_format', type: 'string', value: 'mkv', description: 'Container file format that will be used by default')
option('fallback_audio_sample_fmt', type: 'string', value: 's16', description: 'Fallback audio sample format that will be used if wf-recorder cannot determine the sample formats supported by a codec')
option('pulse', type: 'feature', value: 'auto', description: 'Enable Pulseaudio')
--- wf-recorder-0.4.0+git0/proto/meson.build ---

wl_protocol_dir = wayland_protos.get_variable(pkgconfig: 'pkgdatadir', internal: 'pkgdatadir')
wayland_scanner = find_program('wayland-scanner')
wayland_scanner_code = generator(
    wayland_scanner,
    output: '@BASENAME@-protocol.c',
    arguments: ['private-code', '@INPUT@', '@OUTPUT@'],
)
wayland_scanner_client = generator(
    wayland_scanner,
    output: '@BASENAME@-client-protocol.h',
    arguments: ['client-header', '@INPUT@', '@OUTPUT@'],
)
client_protocols = [
    [wl_protocol_dir, 'unstable/xdg-output/xdg-output-unstable-v1.xml'],
    [wl_protocol_dir, 'unstable/linux-dmabuf/linux-dmabuf-unstable-v1.xml'],
    'wlr-screencopy-unstable-v1.xml',
    'wl-drm.xml',
]
wl_protos_client_src = []
wl_protos_headers = []
foreach p : client_protocols
    xml = join_paths(p)
    wl_protos_client_src += wayland_scanner_code.process(xml)
    wl_protos_headers += wayland_scanner_client.process(xml)
endforeach
lib_wl_protos = static_library('wl_protos', wl_protos_client_src + wl_protos_headers,
    dependencies: [wayland_client]) # for the include directory
wf_protos = declare_dependency(
    link_with: lib_wl_protos,
    sources: wl_protos_headers,
)
--- wf-recorder-0.4.0+git0/proto/wl-drm.xml ---

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="drm">
<copyright>
Copyright © 2008-2011 Kristian Høgsberg
Copyright © 2010-2011 Intel Corporation
Permission to use, copy, modify, distribute, and sell this
software and its documentation for any purpose is hereby granted
without fee, provided that the above copyright notice appear in
all copies and that both that copyright notice and this permission
notice appear in supporting documentation, and that the name of
the copyright holders not be used in advertising or publicity
pertaining to distribution of the software without specific,
written prior permission. The copyright holders make no
representations about the suitability of this software for any
purpose. It is provided "as is" without express or implied
warranty.
THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
THIS SOFTWARE.
</copyright>
<!-- drm support. This object is created by the server and published
using the display's global event. -->
<interface name="wl_drm" version="2">
<enum name="error">
<entry name="authenticate_fail" value="0"/>
<entry name="invalid_format" value="1"/>
<entry name="invalid_name" value="2"/>
</enum>
<enum name="format">
<!-- The drm format codes match the #defines in drm_fourcc.h.
The formats actually supported by the compositor will be
reported by the format event. New codes must not be added,
unless directly taken from drm_fourcc.h. -->
<entry name="c8" value="0x20203843"/>
<entry name="rgb332" value="0x38424752"/>
<entry name="bgr233" value="0x38524742"/>
<entry name="xrgb4444" value="0x32315258"/>
<entry name="xbgr4444" value="0x32314258"/>
<entry name="rgbx4444" value="0x32315852"/>
<entry name="bgrx4444" value="0x32315842"/>
<entry name="argb4444" value="0x32315241"/>
<entry name="abgr4444" value="0x32314241"/>
<entry name="rgba4444" value="0x32314152"/>
<entry name="bgra4444" value="0x32314142"/>
<entry name="xrgb1555" value="0x35315258"/>
<entry name="xbgr1555" value="0x35314258"/>
<entry name="rgbx5551" value="0x35315852"/>
<entry name="bgrx5551" value="0x35315842"/>
<entry name="argb1555" value="0x35315241"/>
<entry name="abgr1555" value="0x35314241"/>
<entry name="rgba5551" value="0x35314152"/>
<entry name="bgra5551" value="0x35314142"/>
<entry name="rgb565" value="0x36314752"/>
<entry name="bgr565" value="0x36314742"/>
<entry name="rgb888" value="0x34324752"/>
<entry name="bgr888" value="0x34324742"/>
<entry name="xrgb8888" value="0x34325258"/>
<entry name="xbgr8888" value="0x34324258"/>
<entry name="rgbx8888" value="0x34325852"/>
<entry name="bgrx8888" value="0x34325842"/>
<entry name="argb8888" value="0x34325241"/>
<entry name="abgr8888" value="0x34324241"/>
<entry name="rgba8888" value="0x34324152"/>
<entry name="bgra8888" value="0x34324142"/>
<entry name="xrgb2101010" value="0x30335258"/>
<entry name="xbgr2101010" value="0x30334258"/>
<entry name="rgbx1010102" value="0x30335852"/>
<entry name="bgrx1010102" value="0x30335842"/>
<entry name="argb2101010" value="0x30335241"/>
<entry name="abgr2101010" value="0x30334241"/>
<entry name="rgba1010102" value="0x30334152"/>
<entry name="bgra1010102" value="0x30334142"/>
<entry name="yuyv" value="0x56595559"/>
<entry name="yvyu" value="0x55595659"/>
<entry name="uyvy" value="0x59565955"/>
<entry name="vyuy" value="0x59555956"/>
<entry name="ayuv" value="0x56555941"/>
<entry name="xyuv8888" value="0x56555958"/>
<entry name="nv12" value="0x3231564e"/>
<entry name="nv21" value="0x3132564e"/>
<entry name="nv16" value="0x3631564e"/>
<entry name="nv61" value="0x3136564e"/>
<entry name="yuv410" value="0x39565559"/>
<entry name="yvu410" value="0x39555659"/>
<entry name="yuv411" value="0x31315559"/>
<entry name="yvu411" value="0x31315659"/>
<entry name="yuv420" value="0x32315559"/>
<entry name="yvu420" value="0x32315659"/>
<entry name="yuv422" value="0x36315559"/>
<entry name="yvu422" value="0x36315659"/>
<entry name="yuv444" value="0x34325559"/>
<entry name="yvu444" value="0x34325659"/>
<entry name="abgr16f" value="0x48344241"/>
<entry name="xbgr16f" value="0x48344258"/>
</enum>
<!-- Call this request with the magic received from drmGetMagic().
It will be passed on to the drmAuthMagic() or
DRIAuthConnection() call. This authentication must be
completed before create_buffer could be used. -->
<request name="authenticate">
<arg name="id" type="uint"/>
</request>
<!-- Create a wayland buffer for the named DRM buffer. The DRM
surface must have a name using the flink ioctl -->
<request name="create_buffer">
<arg name="id" type="new_id" interface="wl_buffer"/>
<arg name="name" type="uint"/>
<arg name="width" type="int"/>
<arg name="height" type="int"/>
<arg name="stride" type="uint"/>
<arg name="format" type="uint"/>
</request>
<!-- Create a wayland buffer for the named DRM buffer. The DRM
surface must have a name using the flink ioctl -->
<request name="create_planar_buffer">
<arg name="id" type="new_id" interface="wl_buffer"/>
<arg name="name" type="uint"/>
<arg name="width" type="int"/>
<arg name="height" type="int"/>
<arg name="format" type="uint"/>
<arg name="offset0" type="int"/>
<arg name="stride0" type="int"/>
<arg name="offset1" type="int"/>
<arg name="stride1" type="int"/>
<arg name="offset2" type="int"/>
<arg name="stride2" type="int"/>
</request>
<!-- Notification of the path of the drm device which is used by
the server. The client should use this device for creating
local buffers. Only buffers created from this device should
be passed to the server using this drm object's
create_buffer request. -->
<event name="device">
<arg name="name" type="string"/>
</event>
<event name="format">
<arg name="format" type="uint"/>
</event>
<!-- Raised if the authenticate request succeeded -->
<event name="authenticated"/>
<enum name="capability" since="2">
<description summary="wl_drm capability bitmask">
Bitmask of capabilities.
</description>
<entry name="prime" value="1" summary="wl_drm prime available"/>
</enum>
<event name="capabilities">
<arg name="value" type="uint"/>
</event>
<!-- Version 2 additions -->
<!-- Create a wayland buffer for the prime fd. Use for regular and planar
buffers. Pass 0 for offset and stride for unused planes. -->
<request name="create_prime_buffer" since="2">
<arg name="id" type="new_id" interface="wl_buffer"/>
<arg name="name" type="fd"/>
<arg name="width" type="int"/>
<arg name="height" type="int"/>
<arg name="format" type="uint"/>
<arg name="offset0" type="int"/>
<arg name="stride0" type="int"/>
<arg name="offset1" type="int"/>
<arg name="stride1" type="int"/>
<arg name="offset2" type="int"/>
<arg name="stride2" type="int"/>
</request>
</interface>
</protocol>
--- wf-recorder-0.4.0+git0/proto/wlr-screencopy-unstable-v1.xml ---

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="wlr_screencopy_unstable_v1">
<copyright>
Copyright © 2018 Simon Ser
Copyright © 2019 Andri Yngvason
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
</copyright>
<description summary="screen content capturing on client buffers">
This protocol allows clients to ask the compositor to copy part of the
screen content to a client buffer.
Warning! The protocol described in this file is experimental and
backward incompatible changes may be made. Backward compatible changes
may be added together with the corresponding interface version bump.
Backward incompatible changes are done by bumping the version number in
the protocol and interface names and resetting the interface version.
Once the protocol is to be declared stable, the 'z' prefix and the
version number in the protocol and interface names are removed and the
interface version number is reset.
</description>
<interface name="zwlr_screencopy_manager_v1" version="3">
<description summary="manager to inform clients and begin capturing">
This object is a manager which offers requests to start capturing from a
source.
</description>
<request name="capture_output">
<description summary="capture an output">
Capture the next frame of an entire output.
</description>
<arg name="frame" type="new_id" interface="zwlr_screencopy_frame_v1"/>
<arg name="overlay_cursor" type="int"
summary="composite cursor onto the frame"/>
<arg name="output" type="object" interface="wl_output"/>
</request>
<request name="capture_output_region">
<description summary="capture an output's region">
Capture the next frame of an output's region.
The region is given in output logical coordinates, see
xdg_output.logical_size. The region will be clipped to the output's
extents.
</description>
<arg name="frame" type="new_id" interface="zwlr_screencopy_frame_v1"/>
<arg name="overlay_cursor" type="int"
summary="composite cursor onto the frame"/>
<arg name="output" type="object" interface="wl_output"/>
<arg name="x" type="int"/>
<arg name="y" type="int"/>
<arg name="width" type="int"/>
<arg name="height" type="int"/>
</request>
<request name="destroy" type="destructor">
<description summary="destroy the manager">
All objects created by the manager will still remain valid, until their
appropriate destroy request has been called.
</description>
</request>
</interface>
<interface name="zwlr_screencopy_frame_v1" version="3">
<description summary="a frame ready for copy">
This object represents a single frame.
When created, a series of buffer events will be sent, each representing a
supported buffer type. The "buffer_done" event is sent afterwards to
indicate that all supported buffer types have been enumerated. The client
will then be able to send a "copy" request. If the capture is successful,
the compositor will send a "flags" followed by a "ready" event.
For objects of version 2 or lower, wl_shm buffers are always supported, i.e.
the "buffer" event is guaranteed to be sent.
If the capture failed, the "failed" event is sent. This can happen anytime
before the "ready" event.
Once either a "ready" or a "failed" event is received, the client should
destroy the frame.
</description>
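<!-- Informative summary of the flow described above, not part of the
protocol itself: create a frame with capture_output (or
capture_output_region), collect the buffer/linux_dmabuf events until
buffer_done (version 3), allocate a matching wl_buffer, send copy or
copy_with_damage, then wait for flags + ready on success (or failed on
error), and finally destroy the frame. -->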
<event name="buffer">
<description summary="wl_shm buffer information">
Provides information about wl_shm buffer parameters that need to be
used for this frame. This event is sent once after the frame is created
if wl_shm buffers are supported.
</description>
<arg name="format" type="uint" enum="wl_shm.format" summary="buffer format"/>
<arg name="width" type="uint" summary="buffer width"/>
<arg name="height" type="uint" summary="buffer height"/>
<arg name="stride" type="uint" summary="buffer stride"/>
</event>
<request name="copy">
<description summary="copy the frame">
Copy the frame to the supplied buffer. The buffer must have the
correct size, see zwlr_screencopy_frame_v1.buffer and
zwlr_screencopy_frame_v1.linux_dmabuf. The buffer needs to have a
supported format.
If the frame is successfully copied, a "flags" and a "ready" events are
sent. Otherwise, a "failed" event is sent.
</description>
<arg name="buffer" type="object" interface="wl_buffer"/>
</request>
<enum name="error">
<entry name="already_used" value="0"
summary="the object has already been used to copy a wl_buffer"/>
<entry name="invalid_buffer" value="1"
summary="buffer attributes are invalid"/>
</enum>
<enum name="flags" bitfield="true">
<entry name="y_invert" value="1" summary="contents are y-inverted"/>
</enum>
<event name="flags">
<description summary="frame flags">
Provides flags about the frame. This event is sent once before the
"ready" event.
</description>
<arg name="flags" type="uint" enum="flags" summary="frame flags"/>
</event>
<event name="ready">
<description summary="indicates frame is available for reading">
Called as soon as the frame is copied, indicating it is available
for reading. This event includes the time at which the presentation
happened.
The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples,
each component being an unsigned 32-bit value. Whole seconds are in
tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo,
and the additional fractional part in tv_nsec as nanoseconds. Hence,
for valid timestamps tv_nsec must be in [0, 999999999]. The seconds part
may have an arbitrary offset at start.
After receiving this event, the client should destroy the object.
</description>
<arg name="tv_sec_hi" type="uint"
summary="high 32 bits of the seconds part of the timestamp"/>
<arg name="tv_sec_lo" type="uint"
summary="low 32 bits of the seconds part of the timestamp"/>
<arg name="tv_nsec" type="uint"
summary="nanoseconds part of the timestamp"/>
</event>
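<!-- Informative: the full 64-bit seconds value can be reconstructed as
tv_sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo -->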
<event name="failed">
<description summary="frame copy failed">
This event indicates that the attempted frame copy has failed.
After receiving this event, the client should destroy the object.
</description>
</event>
<request name="destroy" type="destructor">
<description summary="delete this object, used or not">
Destroys the frame. This request can be sent at any time by the client.
</description>
</request>
<!-- Version 2 additions -->
<request name="copy_with_damage" since="2">
<description summary="copy the frame when it's damaged">
Same as copy, except it waits until there is damage to copy.
</description>
<arg name="buffer" type="object" interface="wl_buffer"/>
</request>
<event name="damage" since="2">
<description summary="carries the coordinates of the damaged region">
This event is sent right before the ready event when copy_with_damage is
requested. It may be generated multiple times for each copy_with_damage
request.
The arguments describe a box around an area that has changed since the
last copy request that was derived from the current screencopy manager
instance.
The union of all regions received between the call to copy_with_damage
and a ready event is the total damage since the prior ready event.
</description>
<arg name="x" type="uint" summary="damaged x coordinates"/>
<arg name="y" type="uint" summary="damaged y coordinates"/>
<arg name="width" type="uint" summary="current width"/>
<arg name="height" type="uint" summary="current height"/>
</event>
<!-- Version 3 additions -->
<event name="linux_dmabuf" since="3">
<description summary="linux-dmabuf buffer information">
Provides information about linux-dmabuf buffer parameters that need to
be used for this frame. This event is sent once after the frame is
created if linux-dmabuf buffers are supported.
</description>
<arg name="format" type="uint" summary="fourcc pixel format"/>
<arg name="width" type="uint" summary="buffer width"/>
<arg name="height" type="uint" summary="buffer height"/>
</event>
<event name="buffer_done" since="3">
<description summary="all buffer types reported">
This event is sent once after all buffer events have been sent.
The client should proceed to create a buffer of one of the supported
types, and send a "copy" request.
</description>
</event>
</interface>
</protocol>
--- wf-recorder-0.4.0+git0/src/averr.c ---

#include "averr.h"
const char* averr(int err)
{
    /* Note: this returns a pointer to a static buffer, so the result is
     * only valid until the next call and the function is not thread-safe. */
    static char buf[AV_ERROR_MAX_STRING_SIZE];
    av_make_error_string(buf, sizeof(buf), err);
    return buf;
}
--- wf-recorder-0.4.0+git0/src/averr.h ---

#pragma once

#include <libavutil/error.h>
/* the macro av_err2str doesn't work in C++, so we have a wrapper for it here */
#ifdef __cplusplus
extern "C"
{
#endif
const char* averr(int err);
#ifdef __cplusplus
}
#endif
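/* Usage sketch (illustrative):
 *
 *     int ret = avcodec_open2(ctx, codec, NULL);
 *     if (ret < 0)
 *         fprintf(stderr, "avcodec_open2: %s\n", averr(ret));
 */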
--- wf-recorder-0.4.0+git0/src/buffer-pool.hpp ---

#pragma once

#include <array>
#include <mutex>
#include <atomic>
#include <type_traits>

class buffer_pool_buf
{
public:
    bool ready_capture() const
    {
        return released;
    }

    bool ready_encode() const
    {
        return available;
    }

    std::atomic<bool> released{true}; // if the buffer can be used to store new pending frames
    std::atomic<bool> available{false}; // if the buffer can be used to feed the encoder
};

template <class T, int N>
class buffer_pool
{
public:
    static_assert(std::is_base_of<buffer_pool_buf, T>::value, "T must be subclass of buffer_pool_buf");

    buffer_pool()
    {
        for (size_t i = 0; i < bufs_size; ++i) {
            bufs[i] = new T;
        }
    }

    ~buffer_pool()
    {
        for (size_t i = 0; i < N; ++i) {
            delete bufs[i];
        }
    }

    size_t size() const
    {
        return N;
    }

    const T* at(size_t i) const
    {
        return bufs[i];
    }

    T& capture()
    {
        std::lock_guard<std::mutex> lock(mutex);
        return *bufs[capture_idx];
    }

    T& encode()
    {
        std::lock_guard<std::mutex> lock(mutex);
        return *bufs[encode_idx];
    }

    // Signal that the current capture buffer has been successfully obtained
    // from the compositor and select the next buffer to capture in.
    T& next_capture()
    {
        std::lock_guard<std::mutex> lock(mutex);
        bufs[capture_idx]->released = false;
        bufs[capture_idx]->available = true;
        size_t next = (capture_idx + 1) % bufs_size;
        // Grow the pool (up to N buffers) if the next buffer is still in use.
        if (!bufs[next]->ready_capture() && bufs_size < N) {
            bufs_size++;
            next = (capture_idx + 1) % bufs_size;
            for (size_t i = N - 1; i > next; --i) {
                bufs[i] = bufs[i - 1];
                if (encode_idx == i - 1) {
                    encode_idx = i;
                }
            }
            bufs[next] = new T;
        }
        capture_idx = next;
        return *bufs[capture_idx];
    }

    // Signal that the encode buffer has been submitted for encoding
    // and select the next buffer for encoding.
    T& next_encode()
    {
        std::lock_guard<std::mutex> lock(mutex);
        bufs[encode_idx]->available = false;
        bufs[encode_idx]->released = true;
        encode_idx = (encode_idx + 1) % bufs_size;
        return *bufs[encode_idx];
    }

private:
    std::mutex mutex;
    // Value-initialized so unused slots hold nullptr; the destructor deletes
    // all N slots, and deleting a null pointer is a safe no-op.
    std::array<T*, N> bufs{};
    size_t bufs_size = 2;
    size_t capture_idx = 0;
    size_t encode_idx = 0;
};
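/* Usage sketch (illustrative; wf-recorder wires this up between its capture
 * and encoder threads, and the names below are made up):
 *
 *     struct frame_buf : buffer_pool_buf { std::vector<uint8_t> pixels; };
 *     buffer_pool<frame_buf, 16> pool;
 *
 *     // capture thread:
 *     if (pool.capture().ready_capture()) {
 *         fill_from_compositor(pool.capture());
 *         pool.next_capture();
 *     }
 *
 *     // encoder thread:
 *     if (pool.encode().ready_encode()) {
 *         submit_to_encoder(pool.encode());
 *         pool.next_encode();
 *     }
 */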
--- wf-recorder-0.4.0+git0/src/frame-writer.cpp ---

// Adapted from https://stackoverflow.com/questions/34511312/how-to-encode-a-video-from-several-images-generated-in-a-c-program-without-wri
// (Later) adapted from https://github.com/apc-llc/moviemaker-cpp
//
// Audio encoding - thanks to wlstream, a lot of the code/ideas are taken from there
#include <iostream>
#include "frame-writer.hpp"
#include <vector>
#include <queue>
#include <cstring>
#include <sstream>
#include "averr.h"
#include <gbm.h>
static const AVRational US_RATIONAL{1, 1000000};
// av_register_all was deprecated in 58.9.100, removed in 59.0.100
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(59, 0, 100)
class FFmpegInitialize
{
public :
FFmpegInitialize()
{
// Loads the whole database of available codecs and formats.
av_register_all();
}
};
static FFmpegInitialize ffmpegInitialize;
#endif
void FrameWriter::init_hw_accel()
{
int ret = av_hwdevice_ctx_create(&this->hw_device_context,
av_hwdevice_find_type_by_name("vaapi"), params.hw_device.c_str(), NULL, 0);
if (ret != 0)
{
std::cerr << "Failed to create hw encoding device " << params.hw_device << ": " << averr(ret) << std::endl;
std::exit(-1);
}
}
void FrameWriter::load_codec_options(AVDictionary **dict)
{
using CodecOptions = std::map<std::string, std::string>;
static const CodecOptions default_x264_options = {
{"tune", "zerolatency"},
{"preset", "ultrafast"},
{"crf", "20"},
};
static const CodecOptions default_libvpx_options = {
{"cpu-used", "5"},
{"deadline", "realtime"},
};
static const std::map<std::string, const CodecOptions&> default_codec_options = {
{"libx264", default_x264_options},
{"libx265", default_x264_options},
{"libvpx", default_libvpx_options},
};
for (const auto& opts : default_codec_options)
{
if (params.codec.find(opts.first) != std::string::npos)
{
for (const auto& param : opts.second)
{
if (!params.codec_options.count(param.first))
params.codec_options[param.first] = param.second;
}
break;
}
}
for (auto& opt : params.codec_options)
{
std::cout << "Setting codec option: " << opt.first << "=" << opt.second << std::endl;
av_dict_set(dict, opt.first.c_str(), opt.second.c_str(), 0);
}
}
void FrameWriter::load_audio_codec_options(AVDictionary **dict)
{
for (auto& opt : params.audio_codec_options)
{
std::cout << "Setting codec option: " << opt.first << "=" << opt.second << std::endl;
av_dict_set(dict, opt.first.c_str(), opt.second.c_str(), 0);
}
}
bool is_fmt_supported(AVPixelFormat fmt, const AVPixelFormat *supported)
{
for (int i = 0; supported[i] != AV_PIX_FMT_NONE; i++)
{
if (supported[i] == fmt)
return true;
}
return false;
}
AVPixelFormat FrameWriter::get_input_format()
{
switch (params.format) {
case INPUT_FORMAT_BGR0:
return AV_PIX_FMT_BGR0;
case INPUT_FORMAT_RGB0:
return AV_PIX_FMT_RGB0;
case INPUT_FORMAT_BGR8:
return AV_PIX_FMT_RGB24;
case INPUT_FORMAT_RGB565:
return AV_PIX_FMT_RGB565LE;
case INPUT_FORMAT_BGR565:
return AV_PIX_FMT_BGR565LE;
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(56, 55, 100)
case INPUT_FORMAT_X2RGB10:
return AV_PIX_FMT_X2RGB10LE;
#endif
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 7, 100)
case INPUT_FORMAT_X2BGR10:
return AV_PIX_FMT_X2BGR10LE;
#endif
case INPUT_FORMAT_RGBX64:
return AV_PIX_FMT_RGBA64LE;
case INPUT_FORMAT_BGRX64:
return AV_PIX_FMT_BGRA64LE;
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 33, 101)
case INPUT_FORMAT_RGBX64F:
return AV_PIX_FMT_RGBAF16LE;
#endif
case INPUT_FORMAT_DMABUF:
return AV_PIX_FMT_VAAPI;
default:
std::cerr << "Unknown format: " << params.format << std::endl;
std::exit(-1);
}
}
static const struct {
int drm;
AVPixelFormat av;
} drm_av_format_table [] = {
{ GBM_FORMAT_ARGB8888, AV_PIX_FMT_BGRA },
{ GBM_FORMAT_XRGB8888, AV_PIX_FMT_BGR0 },
{ GBM_FORMAT_ABGR8888, AV_PIX_FMT_RGBA },
{ GBM_FORMAT_XBGR8888, AV_PIX_FMT_RGB0 },
{ GBM_FORMAT_RGBA8888, AV_PIX_FMT_ABGR },
{ GBM_FORMAT_RGBX8888, AV_PIX_FMT_0BGR },
{ GBM_FORMAT_BGRA8888, AV_PIX_FMT_ARGB },
{ GBM_FORMAT_BGRX8888, AV_PIX_FMT_0RGB },
{ GBM_FORMAT_XRGB2101010, AV_PIX_FMT_X2RGB10 },
};
static AVPixelFormat get_drm_av_format(int fmt)
{
for (size_t i = 0; i < sizeof(drm_av_format_table) / sizeof(drm_av_format_table[0]); ++i) {
if (drm_av_format_table[i].drm == fmt) {
return drm_av_format_table[i].av;
}
}
std::cerr << "Failed to find AV format for" << fmt;
return AV_PIX_FMT_RGBA;
}
AVPixelFormat FrameWriter::lookup_pixel_format(std::string pix_fmt)
{
AVPixelFormat fmt = av_get_pix_fmt(pix_fmt.c_str());
if (fmt != AV_PIX_FMT_NONE)
return fmt;
std::cerr << "Failed to find the pixel format: " << pix_fmt << std::endl;
std::exit(-1);
}
AVPixelFormat FrameWriter::handle_buffersink_pix_fmt(const AVCodec *codec)
{
// Return with user chosen format
if (!params.pix_fmt.empty())
return lookup_pixel_format(params.pix_fmt);
auto in_fmt = get_input_format();
/* For codecs such as rawvideo no supported formats are listed */
if (!codec->pix_fmts)
return in_fmt;
/* If the codec supports getting the appropriate RGB format
* directly, we want to use it since we don't have to convert data */
if (is_fmt_supported(in_fmt, codec->pix_fmts))
return in_fmt;
/* Choose the format supported by the codec which best approximates the
* input fmt. */
AVPixelFormat best_format = AV_PIX_FMT_NONE;
for (int i = 0; codec->pix_fmts[i] != AV_PIX_FMT_NONE; i++) {
int loss = 0;
best_format = av_find_best_pix_fmt_of_2(best_format,
codec->pix_fmts[i], in_fmt, false, &loss);
}
return best_format;
}
void FrameWriter::init_video_filters(const AVCodec *codec)
{
if (params.framerate != 0){
if (params.video_filter != "null" && params.video_filter.find("fps") == std::string::npos) {
params.video_filter += ",fps=" + std::to_string(params.framerate);
}
else if (params.video_filter == "null"){
params.video_filter = "fps=" + std::to_string(params.framerate);
}
}
this->videoFilterGraph = avfilter_graph_alloc();
av_opt_set(videoFilterGraph, "scale_sws_opts", "flags=fast_bilinear:src_range=1:dst_range=1", 0);
const AVFilter* source = avfilter_get_by_name("buffer");
const AVFilter* sink = avfilter_get_by_name("buffersink");
if (!source || !sink) {
std::cerr << "filtering source or sink element not found\n";
exit(-1);
}
if (this->hw_device_context) {
this->hw_frame_context = av_hwframe_ctx_alloc(this->hw_device_context);
AVHWFramesContext *hwfc = reinterpret_cast<AVHWFramesContext*>(this->hw_frame_context->data);
hwfc->format = AV_PIX_FMT_VAAPI;
hwfc->sw_format = AV_PIX_FMT_NV12;
hwfc->width = params.width;
hwfc->height = params.height;
int err = av_hwframe_ctx_init(this->hw_frame_context);
if (err < 0) {
std::cerr << "Cannot create hw frames context: " << averr(err) << std::endl;
exit(-1);
}
this->hw_frame_context_in = av_hwframe_ctx_alloc(this->hw_device_context);
hwfc = reinterpret_cast<AVHWFramesContext*>(this->hw_frame_context_in->data);
hwfc->format = AV_PIX_FMT_VAAPI;
hwfc->sw_format = get_drm_av_format(params.drm_format);
hwfc->width = params.width;
hwfc->height = params.height;
err = av_hwframe_ctx_init(this->hw_frame_context_in);
if (err < 0) {
std::cerr << "Cannot create hw frames context: " << averr(err) << std::endl;
exit(-1);
}
}
// Build the configuration of the 'buffer' filter.
// See: ffmpeg -h filter=buffer
// See: https://ffmpeg.org/ffmpeg-filters.html#buffer
std::stringstream buffer_filter_config;
buffer_filter_config << "video_size=" << params.width << "x" << params.height;
buffer_filter_config << ":pix_fmt=" << (int)this->get_input_format();
buffer_filter_config << ":time_base=" << US_RATIONAL.num << "/" << US_RATIONAL.den;
if (params.buffrate != 0) {
buffer_filter_config << ":frame_rate=" << params.buffrate;
}
buffer_filter_config << ":pixel_aspect=1/1";
int err = avfilter_graph_create_filter(&this->videoFilterSourceCtx, source,
"Source", buffer_filter_config.str().c_str(), NULL, this->videoFilterGraph);
if (err < 0) {
std::cerr << "Cannot create video filter in: " << averr(err) << std::endl;;
exit(-1);
}
AVBufferSrcParameters *p = av_buffersrc_parameters_alloc();
memset(p, 0, sizeof(*p));
p->format = AV_PIX_FMT_NONE;
p->hw_frames_ctx = this->hw_frame_context_in;
err = av_buffersrc_parameters_set(this->videoFilterSourceCtx, p);
av_free(p);
if (err < 0) {
std::cerr << "Cannot set hwcontext filter in: " << averr(err) << std::endl;;
exit(-1);
}
err = avfilter_graph_create_filter(&this->videoFilterSinkCtx, sink, "Sink",
NULL, NULL, this->videoFilterGraph);
if (err < 0) {
std::cerr << "Cannot create video filter out: " << averr(err) << std::endl;;
exit(-1);
}
// We also need to tell the sink which pixel formats are supported
// by the video encoder, i.e. indicate to our sink which pixel formats
// are accepted by our codec.
const AVPixelFormat picked_pix_fmt[] =
{
handle_buffersink_pix_fmt(codec),
AV_PIX_FMT_NONE
};
err = av_opt_set_int_list(this->videoFilterSinkCtx, "pix_fmts",
picked_pix_fmt, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
if (err < 0) {
std::cerr << "Failed to set pix_fmts: " << averr(err) << std::endl;;
exit(-1);
}
// Create the connections to the filter graph
//
// The in/out swap is not a mistake:
//
// ---------- ----------------------------- --------
// | Source | ----> | in -> filter_graph -> out | ---> | Sink |
// ---------- ----------------------------- --------
//
// The 'in' of filter_graph is the output of the Source buffer
// The 'out' of filter_graph is the input of the Sink buffer
//
AVFilterInOut *outputs = avfilter_inout_alloc();
outputs->name = av_strdup("in");
outputs->filter_ctx = this->videoFilterSourceCtx;
outputs->pad_idx = 0;
outputs->next = NULL;
AVFilterInOut *inputs = avfilter_inout_alloc();
inputs->name = av_strdup("out");
inputs->filter_ctx = this->videoFilterSinkCtx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
std::cerr << "Failed to parse allocate inout filter links" << std::endl;
exit(-1);
}
std::cout << "Using video filter: " << params.video_filter << std::endl;
err = avfilter_graph_parse_ptr(this->videoFilterGraph,
params.video_filter.c_str(), &inputs, &outputs, NULL);
if (err < 0) {
std::cerr << "Failed to parse graph filter: " << averr(err) << std::endl;;
exit(-1) ;
}
// Filters that create HW frames ('hwupload', 'hwmap', ...) need
// AVBufferRef in their hw_device_ctx. Unfortunately, there is no
// simple API to do that for filters created by avfilter_graph_parse_ptr().
// The code below is inspired from ffmpeg_filter.c
if (this->hw_device_context) {
for (unsigned i=0; i< this->videoFilterGraph->nb_filters; i++) {
this->videoFilterGraph->filters[i]->hw_device_ctx =
av_buffer_ref(this->hw_device_context);
}
}
err = avfilter_graph_config(this->videoFilterGraph, NULL);
if (err<0) {
std::cerr << "Failed to configure graph filter: " << averr(err) << std::endl;;
exit(-1) ;
}
if (params.enable_ffmpeg_debug_output) {
std::cout << std::string(80, '#') << std::endl;
std::cout << avfilter_graph_dump(this->videoFilterGraph, 0) << "\n";
std::cout << std::string(80, '#') << std::endl;
}
// The (input of the) sink is the output of the whole filter.
AVFilterLink *filter_output = this->videoFilterSinkCtx->inputs[0];
this->videoCodecCtx->width = filter_output->w;
this->videoCodecCtx->height = filter_output->h;
this->videoCodecCtx->pix_fmt = (AVPixelFormat)filter_output->format;
this->videoCodecCtx->time_base = filter_output->time_base;
this->videoCodecCtx->framerate = filter_output->frame_rate; // can be 1/0 if unknown
this->videoCodecCtx->sample_aspect_ratio = filter_output->sample_aspect_ratio;
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
}
void FrameWriter::init_video_stream()
{
AVDictionary *options = NULL;
load_codec_options(&options);
const AVCodec* codec = avcodec_find_encoder_by_name(params.codec.c_str());
if (!codec)
{
std::cerr << "Failed to find the given codec: " << params.codec << std::endl;
std::exit(-1);
}
videoStream = avformat_new_stream(fmtCtx, codec);
if (!videoStream)
{
std::cerr << "Failed to open stream" << std::endl;
std::exit(-1);
}
videoCodecCtx = avcodec_alloc_context3(codec);
videoCodecCtx->width = params.width;
videoCodecCtx->height = params.height;
videoCodecCtx->time_base = US_RATIONAL;
videoCodecCtx->color_range = AVCOL_RANGE_JPEG;
if (params.framerate) {
std::cout << "Framerate: " << params.framerate << std::endl;
}
if (params.bframes != -1)
videoCodecCtx->max_b_frames = params.bframes;
if (!params.hw_device.empty()) {
init_hw_accel();
}
// The filters need to be initialized after we have initialized
// videoCodecCtx.
//
// After loading the filters, we should update the hw frames ctx.
init_video_filters(codec);
if (this->hw_frame_context) {
videoCodecCtx->hw_frames_ctx = av_buffer_ref(this->hw_frame_context);
}
if (fmtCtx->oformat->flags & AVFMT_GLOBALHEADER) {
videoCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
int ret;
char err[256];
if ((ret = avcodec_open2(videoCodecCtx, codec, &options)) < 0)
{
av_strerror(ret, err, 256);
std::cerr << "avcodec_open2 failed: " << err << std::endl;
std::exit(-1);
}
av_dict_free(&options);
if ((ret = avcodec_parameters_from_context(videoStream->codecpar, videoCodecCtx)) < 0) {
av_strerror(ret, err, 256);
std::cerr << "avcodec_parameters_from_context failed: " << err << std::endl;
std::exit(-1);
}
}
#ifdef HAVE_PULSE
static uint64_t get_codec_channel_layout(const AVCodec *codec)
{
int i = 0;
if (!codec->channel_layouts)
return AV_CH_LAYOUT_STEREO;
while (1) {
if (!codec->channel_layouts[i])
break;
if (codec->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
return codec->channel_layouts[i];
i++;
}
return codec->channel_layouts[0];
}
static enum AVSampleFormat get_codec_auto_sample_fmt(const AVCodec *codec)
{
int i = 0;
if (!codec->sample_fmts)
return av_get_sample_fmt(FALLBACK_AUDIO_SAMPLE_FMT);
while (1) {
if (codec->sample_fmts[i] == -1)
break;
if (av_get_bytes_per_sample(codec->sample_fmts[i]) >= 2)
return codec->sample_fmts[i];
i++;
}
return codec->sample_fmts[0];
}
bool check_fmt_available(const AVCodec *codec, AVSampleFormat fmt)
{
for (const enum AVSampleFormat *sample_ptr = codec->sample_fmts; *sample_ptr != -1; sample_ptr++)
{
if (*sample_ptr == fmt)
{
return true;
}
}
return false;
}
static enum AVSampleFormat convert_codec_sample_fmt(const AVCodec *codec, std::string requested_fmt)
{
enum AVSampleFormat converted_fmt = av_get_sample_fmt(requested_fmt.c_str()); // not static: must be recomputed on every call
if (converted_fmt == AV_SAMPLE_FMT_NONE)
{
std::cout << "Failed to find the given sample format: " << requested_fmt << std::endl;
std::exit(-1);
} else if (!codec->sample_fmts || check_fmt_available(codec, converted_fmt))
{
std::cout << "Using sample format " << av_get_sample_fmt_name(converted_fmt) << " for audio codec " << codec->name << std::endl;
return converted_fmt;
} else
{
std::cout << "Codec " << codec->name << " does not support sample format " << av_get_sample_fmt_name(converted_fmt) << std::endl;
std::exit(-1);
}
}
void FrameWriter::init_audio_stream()
{
AVDictionary *options = NULL;
load_codec_options(&options);
const AVCodec* codec = avcodec_find_encoder_by_name(params.audio_codec.c_str());
if (!codec)
{
std::cerr << "Failed to find the given audio codec: " << params.audio_codec << std::endl;
std::exit(-1);
}
audioStream = avformat_new_stream(fmtCtx, codec);
if (!audioStream)
{
std::cerr << "Failed to open audio stream" << std::endl;
std::exit(-1);
}
audioCodecCtx = avcodec_alloc_context3(codec);
if (params.sample_fmt.size() == 0)
{
audioCodecCtx->sample_fmt = get_codec_auto_sample_fmt(codec);
std::cout << "Choosing sample format " << av_get_sample_fmt_name(audioCodecCtx->sample_fmt) << " for audio codec " << codec->name << std::endl;
} else
{
audioCodecCtx->sample_fmt = convert_codec_sample_fmt(codec, params.sample_fmt);
}
audioCodecCtx->channel_layout = get_codec_channel_layout(codec);
audioCodecCtx->sample_rate = params.sample_rate;
audioCodecCtx->time_base = (AVRational) { 1, 1000 };
audioCodecCtx->channels = av_get_channel_layout_nb_channels(audioCodecCtx->channel_layout);
if (fmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
audioCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
int err;
if ((err = avcodec_open2(audioCodecCtx, codec, NULL)) < 0)
{
std::cerr << "(audio) avcodec_open2 failed " << err << std::endl;
std::exit(-1);
}
swrCtx = swr_alloc();
if (!swrCtx)
{
std::cerr << "Failed to allocate swr context" << std::endl;
std::exit(-1);
}
av_opt_set_int(swrCtx, "in_sample_rate", params.sample_rate, 0);
av_opt_set_int(swrCtx, "out_sample_rate", audioCodecCtx->sample_rate, 0);
av_opt_set_sample_fmt(swrCtx, "in_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
av_opt_set_sample_fmt(swrCtx, "out_sample_fmt", audioCodecCtx->sample_fmt, 0);
av_opt_set_channel_layout(swrCtx, "in_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_channel_layout(swrCtx, "out_channel_layout", audioCodecCtx->channel_layout, 0);
if (swr_init(swrCtx))
{
std::cerr << "Failed to initialize swr" << std::endl;
std::exit(-1);
}
int ret;
if ((ret = avcodec_parameters_from_context(audioStream->codecpar, audioCodecCtx)) < 0) {
char errmsg[256];
av_strerror(ret, errmsg, sizeof(errmsg));
std::cerr << "avcodec_parameters_from_context failed: " << err << std::endl;
std::exit(-1);
}
}
#endif
void FrameWriter::init_codecs()
{
init_video_stream();
#ifdef HAVE_PULSE
if (params.enable_audio)
init_audio_stream();
#endif
av_dump_format(fmtCtx, 0, params.file.c_str(), 1);
if (avio_open(&fmtCtx->pb, params.file.c_str(), AVIO_FLAG_WRITE))
{
std::cerr << "avio_open failed" << std::endl;
std::exit(-1);
}
AVDictionary *dummy = NULL;
char err[256];
int ret;
if ((ret = avformat_write_header(fmtCtx, &dummy)) != 0)
{
std::cerr << "Failed to write file header" << std::endl;
av_strerror(ret, err, 256);
std::cerr << err << std::endl;
std::exit(-1);
}
av_dict_free(&dummy);
}
static const char* determine_output_format(const FrameWriterParams& params)
{
if (!params.muxer.empty())
return params.muxer.c_str();
if (params.file.find("rtmp") == 0)
return "flv";
if (params.file.find("udp") == 0)
return "mpegts";
return NULL;
}
FrameWriter::FrameWriter(const FrameWriterParams& _params) :
params(_params)
{
if (params.enable_ffmpeg_debug_output)
av_log_set_level(AV_LOG_DEBUG);
#ifdef HAVE_LIBAVDEVICE
avdevice_register_all();
#endif
// Preparing the data concerning the format and codec,
// in order to write properly the header, frame data and end of file.
this->outputFmt = av_guess_format(NULL, params.file.c_str(), NULL);
auto streamFormat = determine_output_format(params);
auto context_ret = avformat_alloc_output_context2(&this->fmtCtx, NULL,
streamFormat, params.file.c_str());
if (context_ret < 0)
{
std::cerr << "Failed to allocate output context" << std::endl;
std::exit(-1);
}
init_codecs();
}
void FrameWriter::encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt)
{
/* send the frame to the encoder */
int ret = avcodec_send_frame(enc_ctx, frame);
if (ret < 0)
{
fprintf(stderr, "error sending a frame for encoding\n");
return;
}
while (ret >= 0)
{
ret = avcodec_receive_packet(enc_ctx, pkt);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
{
return;
}
if (ret < 0)
{
fprintf(stderr, "error during encoding\n");
return;
}
finish_frame(enc_ctx, *pkt);
}
}
bool FrameWriter::push_frame(AVFrame *frame, int64_t usec)
{
frame->pts = usec; // We use time_base = US_RATIONAL = 1/1000000
// Push the RGB frame into the filtergraph
int err = av_buffersrc_add_frame_flags(videoFilterSourceCtx, frame, 0);
if (err < 0) {
std::cerr << "Error while feeding the filtergraph!" << std::endl;
av_frame_free(&frame);
return false;
}
// Pull filtered frames from the filtergraph
while (true) {
AVFrame *filtered_frame = av_frame_alloc();
if (!filtered_frame) {
std::cerr << "Error av_frame_alloc" << std::endl;
return false;
}
err = av_buffersink_get_frame(videoFilterSinkCtx, filtered_frame);
if (err == AVERROR(EAGAIN)) {
// Not an error. No frame available.
// Try again later.
av_frame_free(&filtered_frame);
break;
} else if (err == AVERROR_EOF) {
// There will be no more output frames on this sink.
// That could happen if a filter like 'trim' is used to
// stop after a given time.
av_frame_free(&filtered_frame);
return false;
} else if (err < 0) {
av_frame_free(&filtered_frame);
return false;
}
filtered_frame->pict_type = AV_PICTURE_TYPE_NONE;
// So we have a frame. Encode it!
AVPacket *pkt = av_packet_alloc();
if (!pkt) {
av_frame_free(&filtered_frame);
return false;
}
encode(videoCodecCtx, filtered_frame, pkt);
av_frame_free(&filtered_frame);
av_packet_free(&pkt);
}
av_frame_free(&frame);
return true;
}
bool FrameWriter::add_frame(const uint8_t* pixels, int64_t usec, bool y_invert)
{
/* Calculate data after y-inversion */
int stride[] = {int(params.stride)};
const uint8_t *formatted_pixels = pixels;
if (y_invert)
{
formatted_pixels += stride[0] * (params.height - 1);
stride[0] *= -1;
}
auto frame = av_frame_alloc();
if (!frame) {
std::cerr << "Failed to allocate frame!" << std::endl;
return false;
}
frame->data[0] = (uint8_t*)formatted_pixels;
frame->linesize[0] = stride[0];
frame->format = get_input_format();
frame->width = params.width;
frame->height = params.height;
return push_frame(frame, usec);
}
bool FrameWriter::add_frame(struct gbm_bo *bo, int64_t usec, bool y_invert)
{
if (y_invert)
{
std::cerr << "Y_INVERT not supported with dmabuf" << std::endl;
return false;
}
auto frame = av_frame_alloc();
if (!frame)
{
std::cerr << "Failed to allocate frame!" << std::endl;
return false;
}
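/* Each gbm_bo is wrapped in an AVDRMFrameDescriptor and mapped to a VAAPI
 * surface only once; the mapping is cached in mapped_frames, so subsequent
 * captures of the same buffer just take a new reference to it. */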
if (mapped_frames.find(bo) == mapped_frames.end()) {
auto vaapi_frame = av_frame_alloc();
if (!vaapi_frame) {
std::cerr << "Failed to allocate frame!" << std::endl;
return false;
}
AVDRMFrameDescriptor *desc = (AVDRMFrameDescriptor*) av_mallocz(sizeof(AVDRMFrameDescriptor));
desc->nb_layers = 1;
desc->nb_objects = 1;
desc->objects[0].fd = gbm_bo_get_fd(bo);
desc->objects[0].format_modifier = gbm_bo_get_modifier(bo);
desc->objects[0].size = gbm_bo_get_stride(bo) * gbm_bo_get_height(bo);
desc->layers[0].format = gbm_bo_get_format(bo);
desc->layers[0].nb_planes = gbm_bo_get_plane_count(bo);
for (int i = 0; i < gbm_bo_get_plane_count(bo); ++i) {
desc->layers[0].planes[i].object_index = 0;
desc->layers[0].planes[i].pitch = gbm_bo_get_stride_for_plane(bo, i);
desc->layers[0].planes[i].offset = gbm_bo_get_offset(bo, i);
}
frame->width = gbm_bo_get_width(bo);
frame->height = gbm_bo_get_height(bo);
frame->format = AV_PIX_FMT_DRM_PRIME;
frame->data[0] = reinterpret_cast<uint8_t*>(desc);
frame->buf[0] = av_buffer_create(frame->data[0], sizeof(*desc),
[](void *, uint8_t *data) {
av_free(data);
}, frame, 0);
vaapi_frame->format = AV_PIX_FMT_VAAPI;
vaapi_frame->hw_frames_ctx = av_buffer_ref(this->hw_frame_context_in);
int ret = av_hwframe_map(vaapi_frame, frame, AV_HWFRAME_MAP_READ);
av_frame_unref(frame);
if (ret < 0)
{
std::cerr << "Failed to map vaapi frame " << averr(ret) << std::endl;
av_frame_free(&vaapi_frame);
av_frame_free(&frame);
return false;
}
mapped_frames[bo] = vaapi_frame;
}
av_frame_ref(frame, mapped_frames[bo]);
return push_frame(frame, usec);
}
#ifdef HAVE_PULSE
#define SRC_RATE 1e6
#define DST_RATE 1e3
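/* Incoming PTS values are in microseconds (SRC_RATE = 1e6), while audio
 * packets are written in a 1/1000 time base (DST_RATE = 1e3, matching the
 * rescale in finish_frame). swr_next_pts() compensates for resampler delay. */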
static int64_t conv_audio_pts(SwrContext *ctx, int64_t in, int sample_rate)
{
int64_t d = (int64_t) sample_rate * sample_rate;
/* Convert from audio_src_tb to 1/(src_samplerate * dst_samplerate) */
in = av_rescale_rnd(in, d, SRC_RATE, AV_ROUND_NEAR_INF);
/* In units of 1/(src_samplerate * dst_samplerate) */
in = swr_next_pts(ctx, in);
/* Convert from 1/(src_samplerate * dst_samplerate) to audio_dst_tb */
return av_rescale_rnd(in, DST_RATE, d, AV_ROUND_NEAR_INF);
}
void FrameWriter::send_audio_pkt(AVFrame *frame)
{
AVPacket *pkt = av_packet_alloc();
if (!pkt)
return;
encode(audioCodecCtx, frame, pkt);
av_packet_free(&pkt);
}
size_t FrameWriter::get_audio_buffer_size()
{
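// frame_size samples * 2 channels * 4 bytes per float sample = frame_size << 3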
return audioCodecCtx->frame_size << 3;
}
void FrameWriter::add_audio(const void* buffer)
{
AVFrame *inputf = av_frame_alloc();
inputf->sample_rate = params.sample_rate;
inputf->format = AV_SAMPLE_FMT_FLT;
inputf->channel_layout = AV_CH_LAYOUT_STEREO;
inputf->nb_samples = audioCodecCtx->frame_size;
av_frame_get_buffer(inputf, 0);
memcpy(inputf->data[0], buffer, get_audio_buffer_size());
AVFrame *outputf = av_frame_alloc();
outputf->format = audioCodecCtx->sample_fmt;
outputf->sample_rate = audioCodecCtx->sample_rate;
outputf->channel_layout = audioCodecCtx->channel_layout;
outputf->nb_samples = audioCodecCtx->frame_size;
av_frame_get_buffer(outputf, 0);
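/* Passing INT64_MIN makes swr_next_pts() return the resampler's internally
 * tracked timestamp, keeping the audio PTS monotonic across frames. */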
outputf->pts = conv_audio_pts(swrCtx, INT64_MIN, params.sample_rate);
swr_convert_frame(swrCtx, outputf, inputf);
send_audio_pkt(outputf);
av_frame_free(&inputf);
av_frame_free(&outputf);
}
#endif
void FrameWriter::finish_frame(AVCodecContext *enc_ctx, AVPacket& pkt)
{
static std::mutex fmt_mutex, pending_mutex;
if (enc_ctx == videoCodecCtx)
{
av_packet_rescale_ts(&pkt, videoCodecCtx->time_base, videoStream->time_base);
pkt.stream_index = videoStream->index;
}
#ifdef HAVE_PULSE
else
{
av_packet_rescale_ts(&pkt, (AVRational){ 1, 1000 }, audioStream->time_base);
pkt.stream_index = audioStream->index;
}
/* We use two locks to ensure a fair handoff: if, say, the audio thread is
* waiting for the video one, then when the video thread finishes, the audio
* thread is guaranteed to be the next one to obtain the lock */
if (params.enable_audio)
{
pending_mutex.lock();
fmt_mutex.lock();
pending_mutex.unlock();
}
#endif
if (av_interleaved_write_frame(fmtCtx, &pkt) != 0) {
params.write_aborted_flag = true;
}
av_packet_unref(&pkt);
#ifdef HAVE_PULSE
if (params.enable_audio)
fmt_mutex.unlock();
#endif
}
FrameWriter::~FrameWriter()
{
// Writing the delayed frames:
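// Sending a NULL frame puts each encoder into draining mode, flushing any
// packets it has buffered (e.g. due to B-frames or lookahead).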
AVPacket *pkt = av_packet_alloc();
encode(videoCodecCtx, NULL, pkt);
#ifdef HAVE_PULSE
if (params.enable_audio)
{
encode(audioCodecCtx, NULL, pkt);
}
#endif
// Writing the end of the file.
av_write_trailer(fmtCtx);
// Closing the file.
if (outputFmt && (!(outputFmt->flags & AVFMT_NOFILE)))
avio_closep(&fmtCtx->pb);
// Freeing all the allocated memory:
avcodec_free_context(&videoCodecCtx);
#ifdef HAVE_PULSE
if (params.enable_audio)
avcodec_free_context(&audioCodecCtx);
#endif
av_packet_free(&pkt);
// TODO: free all the hw accel
avformat_free_context(fmtCtx);
}
07070100000013000081A400000000000000000000000164E5C8BE00000F41000000000000000000000000000000000000002C00000000wf-recorder-0.4.0+git0/src/frame-writer.hpp
// Adapted from https://stackoverflow.com/questions/34511312/how-to-encode-a-video-from-several-images-generated-in-a-c-program-without-wri
// (Later) adapted from https://github.com/apc-llc/moviemaker-cpp
#ifndef FRAME_WRITER
#define FRAME_WRITER
#include <stdint.h>
#include <string>
#include <vector>
#include <map>
#include <atomic>
#include "config.h"
extern "C"
{
#include <libswresample/swresample.h>
#include <libavcodec/avcodec.h>
#ifdef HAVE_LIBAVDEVICE
#include <libavdevice/avdevice.h>
#endif
#include <libavutil/mathematics.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/pixdesc.h>
#include <libavutil/hwcontext.h>
#include <libavutil/opt.h>
#include <libavutil/hwcontext_drm.h>
}
enum InputFormat
{
INPUT_FORMAT_BGR0,
INPUT_FORMAT_RGB0,
INPUT_FORMAT_BGR8,
INPUT_FORMAT_RGB565,
INPUT_FORMAT_BGR565,
INPUT_FORMAT_X2RGB10,
INPUT_FORMAT_X2BGR10,
INPUT_FORMAT_RGBX64,
INPUT_FORMAT_BGRX64,
INPUT_FORMAT_RGBX64F,
INPUT_FORMAT_DMABUF,
};
struct FrameWriterParams
{
std::string file;
int width;
int height;
int stride;
InputFormat format;
int drm_format;
std::string video_filter = "null"; // dummy filter
std::string codec;
std::string audio_codec;
std::string muxer;
std::string pix_fmt;
std::string sample_fmt;
std::string hw_device; // used only if codec contains vaapi
std::map<std::string, std::string> codec_options;
std::map<std::string, std::string> audio_codec_options;
int framerate = 0;
int sample_rate;
int buffrate = 0;
int64_t audio_sync_offset;
bool enable_audio;
bool enable_ffmpeg_debug_output;
int bframes;
std::atomic<bool>& write_aborted_flag;
FrameWriterParams(std::atomic<bool>& flag): write_aborted_flag(flag) {}
};
class FrameWriter
{
FrameWriterParams params;
void load_codec_options(AVDictionary **dict);
void load_audio_codec_options(AVDictionary **dict);
const AVOutputFormat* outputFmt;
AVStream* videoStream;
AVCodecContext* videoCodecCtx;
AVFormatContext* fmtCtx;
AVFilterContext* videoFilterSourceCtx = NULL;
AVFilterContext* videoFilterSinkCtx = NULL;
AVFilterGraph* videoFilterGraph = NULL;
AVBufferRef *hw_device_context = NULL;
AVBufferRef *hw_frame_context = NULL;
AVBufferRef *hw_frame_context_in = NULL;
std::map<struct gbm_bo*, AVFrame*> mapped_frames;
AVPixelFormat lookup_pixel_format(std::string pix_fmt);
AVPixelFormat handle_buffersink_pix_fmt(const AVCodec *codec);
AVPixelFormat get_input_format();
void init_hw_accel();
void init_codecs();
void init_video_filters(const AVCodec *codec);
void init_video_stream();
void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt);
#ifdef HAVE_PULSE
SwrContext *swrCtx;
AVStream *audioStream;
AVCodecContext *audioCodecCtx;
void init_swr();
void init_audio_stream();
void send_audio_pkt(AVFrame *frame);
#endif
void finish_frame(AVCodecContext *enc_ctx, AVPacket& pkt);
bool push_frame(AVFrame *frame, int64_t usec);
public:
FrameWriter(const FrameWriterParams& params);
bool add_frame(const uint8_t* pixels, int64_t usec, bool y_invert);
bool add_frame(struct gbm_bo *bo, int64_t usec, bool y_invert);
#ifdef HAVE_PULSE
/* Buffer must have size get_audio_buffer_size() */
void add_audio(const void* buffer);
size_t get_audio_buffer_size();
#endif
~FrameWriter();
};
#include <memory>
#include <mutex>
#include <atomic>
extern std::mutex frame_writer_mutex, frame_writer_pending_mutex;
extern std::unique_ptr<FrameWriter> frame_writer;
extern std::atomic<bool> exit_main_loop;
#endif // FRAME_WRITER
07070100000014000081A400000000000000000000000164E5C8BE000093AA000000000000000000000000000000000000002400000000wf-recorder-0.4.0+git0/src/main.cpp
#define _XOPEN_SOURCE 700
#define _POSIX_C_SOURCE 199309L
#include <iostream>
#include <list>
#include <string>
#include <thread>
#include <mutex>
#include <atomic>
#include <getopt.h>
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <signal.h>
#include <unistd.h>
#include <wayland-client-protocol.h>
#include <gbm.h>
#include <fcntl.h>
#include "frame-writer.hpp"
#include "buffer-pool.hpp"
#include "wlr-screencopy-unstable-v1-client-protocol.h"
#include "xdg-output-unstable-v1-client-protocol.h"
#include "linux-dmabuf-unstable-v1-client-protocol.h"
#include "wl-drm-client-protocol.h"
#include "config.h"
#ifdef HAVE_PULSE
#include "pulse.hpp"
PulseReaderParams pulseParams;
#endif
#define MAX_FRAME_FAILURES 16
static const int GRACEFUL_TERMINATION_SIGNALS[] = { SIGTERM, SIGINT, SIGHUP };
std::mutex frame_writer_mutex, frame_writer_pending_mutex;
std::unique_ptr<FrameWriter> frame_writer;
static int drm_fd = -1;
static struct gbm_device *gbm_device = NULL;
static std::string drm_device_name;
static struct wl_shm *shm = NULL;
static struct zxdg_output_manager_v1 *xdg_output_manager = NULL;
static struct zwlr_screencopy_manager_v1 *screencopy_manager = NULL;
static struct zwp_linux_dmabuf_v1 *dmabuf = NULL;
static struct wl_drm *drm = NULL;
void request_next_frame();
struct wf_recorder_output
{
wl_output *output;
zxdg_output_v1 *zxdg_output;
std::string name, description;
int32_t x, y, width, height;
};
std::list<wf_recorder_output> available_outputs;
static void handle_xdg_output_logical_position(void*,
zxdg_output_v1* zxdg_output, int32_t x, int32_t y)
{
for (auto& wo : available_outputs)
{
if (wo.zxdg_output == zxdg_output)
{
wo.x = x;
wo.y = y;
}
}
}
static void handle_xdg_output_logical_size(void*,
zxdg_output_v1* zxdg_output, int32_t w, int32_t h)
{
for (auto& wo : available_outputs)
{
if (wo.zxdg_output == zxdg_output)
{
wo.width = w;
wo.height = h;
}
}
}
static void handle_xdg_output_done(void*, zxdg_output_v1*) { }
static void handle_xdg_output_name(void*, zxdg_output_v1 *zxdg_output_v1,
const char *name)
{
for (auto& wo : available_outputs)
{
if (wo.zxdg_output == zxdg_output_v1)
wo.name = name;
}
}
static void handle_xdg_output_description(void*, zxdg_output_v1 *zxdg_output_v1,
const char *description)
{
for (auto& wo : available_outputs)
{
if (wo.zxdg_output == zxdg_output_v1)
wo.description = description;
}
}
const zxdg_output_v1_listener xdg_output_implementation = {
.logical_position = handle_xdg_output_logical_position,
.logical_size = handle_xdg_output_logical_size,
.done = handle_xdg_output_done,
.name = handle_xdg_output_name,
.description = handle_xdg_output_description
};
struct wf_buffer : public buffer_pool_buf
{
struct gbm_bo *bo = nullptr;
struct wl_buffer *wl_buffer = nullptr;
void *data = nullptr;
enum wl_shm_format format;
int drm_format;
int width, height, stride;
bool y_invert;
timespec presented;
uint64_t base_usec;
};
std::atomic<bool> exit_main_loop{false};
buffer_pool<wf_buffer, 16> buffers;
bool buffer_copy_done = false;
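/* Create an anonymous file to back a wl_shm pool: mkstemp() yields a unique
 * temporary file which is immediately unlink()ed, so it is reclaimed as soon
 * as the file descriptor is closed. */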
static int backingfile(off_t size)
{
char name[] = "/tmp/wf-recorder-shared-XXXXXX";
int fd = mkstemp(name);
if (fd < 0) {
return -1;
}
int ret;
while ((ret = ftruncate(fd, size)) < 0 && errno == EINTR) {
// Interrupted by a signal, retry
}
if (ret < 0) {
close(fd);
return -1;
}
unlink(name);
return fd;
}
static struct wl_buffer *create_shm_buffer(uint32_t fmt,
int width, int height, int stride, void **data_out)
{
int size = stride * height;
int fd = backingfile(size);
if (fd < 0) {
fprintf(stderr, "creating a buffer file for %d B failed: %m\n", size);
return NULL;
}
void *data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (data == MAP_FAILED) {
fprintf(stderr, "mmap failed: %m\n");
close(fd);
return NULL;
}
struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
close(fd);
struct wl_buffer *buffer = wl_shm_pool_create_buffer(pool, 0, width, height,
stride, fmt);
wl_shm_pool_destroy(pool);
*data_out = data;
return buffer;
}
static bool use_damage = true;
static bool use_dmabuf = false;
static bool use_hwupload = false;
static void frame_handle_buffer(void *, struct zwlr_screencopy_frame_v1 *frame, uint32_t format,
uint32_t width, uint32_t height, uint32_t stride)
{
if (use_dmabuf) {
return;
}
auto& buffer = buffers.capture();
buffer.format = (wl_shm_format)format;
buffer.width = width;
buffer.height = height;
buffer.stride = stride;
/* ffmpeg requires even width and height */
if (buffer.width % 2)
buffer.width -= 1;
if (buffer.height % 2)
buffer.height -= 1;
if (!buffer.wl_buffer) {
buffer.wl_buffer =
create_shm_buffer(format, width, height, stride, &buffer.data);
}
if (buffer.wl_buffer == NULL) {
fprintf(stderr, "failed to create buffer\n");
exit(EXIT_FAILURE);
}
if (use_damage) {
zwlr_screencopy_frame_v1_copy_with_damage(frame, buffer.wl_buffer);
} else {
zwlr_screencopy_frame_v1_copy(frame, buffer.wl_buffer);
}
}
static void frame_handle_flags(void*, struct zwlr_screencopy_frame_v1 *, uint32_t flags) {
buffers.capture().y_invert = flags & ZWLR_SCREENCOPY_FRAME_V1_FLAGS_Y_INVERT;
}
int32_t frame_failed_cnt = 0;
static void frame_handle_ready(void *, struct zwlr_screencopy_frame_v1 *,
uint32_t tv_sec_hi, uint32_t tv_sec_low, uint32_t tv_nsec) {
auto& buffer = buffers.capture();
buffer_copy_done = true;
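// The screencopy protocol splits the 64-bit tv_sec across two 32-bit arguments.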
buffer.presented.tv_sec = ((1ll * tv_sec_hi) << 32ll) | tv_sec_low;
buffer.presented.tv_nsec = tv_nsec;
frame_failed_cnt = 0;
}
static void frame_handle_failed(void *, struct zwlr_screencopy_frame_v1 *) {
std::cerr << "Failed to copy frame, retrying..." << std::endl;
++frame_failed_cnt;
request_next_frame();
if (frame_failed_cnt > MAX_FRAME_FAILURES)
{
std::cerr << "Failed to copy frame too many times, exiting!" << std::endl;
exit_main_loop = true;
}
}
static void frame_handle_damage(void *, struct zwlr_screencopy_frame_v1 *,
uint32_t, uint32_t, uint32_t, uint32_t)
{
}
static void dmabuf_created(void *data, struct zwp_linux_buffer_params_v1 *,
struct wl_buffer *wl_buffer) {
auto& buffer = buffers.capture();
buffer.wl_buffer = wl_buffer;
zwlr_screencopy_frame_v1 *frame = (zwlr_screencopy_frame_v1*) data;
if (use_damage) {
zwlr_screencopy_frame_v1_copy_with_damage(frame, buffer.wl_buffer);
} else {
zwlr_screencopy_frame_v1_copy(frame, buffer.wl_buffer);
}
}
static void dmabuf_failed(void *, struct zwp_linux_buffer_params_v1 *) {
std::cerr << "Failed to create dmabuf" << std::endl;
exit_main_loop = true;
}
static const struct zwp_linux_buffer_params_v1_listener params_listener = {
.created = dmabuf_created,
.failed = dmabuf_failed,
};
static wl_shm_format drm_to_wl_shm_format(uint32_t format)
{
if (format == GBM_FORMAT_ARGB8888) {
return WL_SHM_FORMAT_ARGB8888;
} else if (format == GBM_FORMAT_XRGB8888) {
return WL_SHM_FORMAT_XRGB8888;
} else {
return (wl_shm_format)format;
}
}
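/* DMA-BUF capture path: allocate a gbm buffer object matching the announced
 * format, export it through zwp_linux_buffer_params_v1, and wait for the
 * asynchronous created/failed events before issuing the copy. */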
static void frame_handle_linux_dmabuf(void *, struct zwlr_screencopy_frame_v1 *frame,
uint32_t format, uint32_t width, uint32_t height)
{
if (!use_dmabuf) {
return;
}
auto& buffer = buffers.capture();
buffer.format = drm_to_wl_shm_format(format);
buffer.drm_format = format;
buffer.width = width;
buffer.height = height;
if (!buffer.wl_buffer) {
const uint64_t modifier = 0; // DRM_FORMAT_MOD_LINEAR
buffer.bo = gbm_bo_create_with_modifiers(gbm_device, buffer.width,
buffer.height, format, &modifier, 1);
if (buffer.bo == NULL)
{
buffer.bo = gbm_bo_create(gbm_device, buffer.width,
buffer.height, format, GBM_BO_USE_LINEAR | GBM_BO_USE_RENDERING);
}
if (buffer.bo == NULL)
{
std::cerr << "Failed to create gbm bo" << std::endl;
exit_main_loop = true;
return;
}
buffer.stride = gbm_bo_get_stride(buffer.bo);
struct zwp_linux_buffer_params_v1 *params =
zwp_linux_dmabuf_v1_create_params(dmabuf);
uint64_t mod = gbm_bo_get_modifier(buffer.bo);
zwp_linux_buffer_params_v1_add(params,
gbm_bo_get_fd(buffer.bo), 0,
gbm_bo_get_offset(buffer.bo, 0),
gbm_bo_get_stride(buffer.bo),
mod >> 32, mod & 0xffffffff);
zwp_linux_buffer_params_v1_add_listener(params, &params_listener, frame);
zwp_linux_buffer_params_v1_create(params, buffer.width,
buffer.height, format, 0);
} else {
if (use_damage) {
zwlr_screencopy_frame_v1_copy_with_damage(frame, buffer.wl_buffer);
} else {
zwlr_screencopy_frame_v1_copy(frame, buffer.wl_buffer);
}
}
}
static void frame_handle_buffer_done(void *, struct zwlr_screencopy_frame_v1 *) {
}
static const struct zwlr_screencopy_frame_v1_listener frame_listener = {
.buffer = frame_handle_buffer,
.flags = frame_handle_flags,
.ready = frame_handle_ready,
.failed = frame_handle_failed,
.damage = frame_handle_damage,
.linux_dmabuf = frame_handle_linux_dmabuf,
.buffer_done = frame_handle_buffer_done,
};
static void drm_handle_device(void *, struct wl_drm *, const char *name) {
drm_device_name = name;
}
static void drm_handle_format(void *, struct wl_drm *, uint32_t) {
}
static void drm_handle_authenticated(void *, struct wl_drm *) {
}
static void drm_handle_capabilities(void *, struct wl_drm *, uint32_t) {
}
static const struct wl_drm_listener drm_listener = {
.device = drm_handle_device,
.format = drm_handle_format,
.authenticated = drm_handle_authenticated,
.capabilities = drm_handle_capabilities,
};
static void handle_global(void*, struct wl_registry *registry,
uint32_t name, const char *interface, uint32_t) {
if (strcmp(interface, wl_output_interface.name) == 0)
{
auto output = (wl_output*)wl_registry_bind(registry, name, &wl_output_interface, 1);
wf_recorder_output wro;
wro.output = output;
available_outputs.push_back(wro);
}
else if (strcmp(interface, wl_shm_interface.name) == 0)
{
shm = (wl_shm*) wl_registry_bind(registry, name, &wl_shm_interface, 1);
}
else if (strcmp(interface, zwlr_screencopy_manager_v1_interface.name) == 0)
{
screencopy_manager = (zwlr_screencopy_manager_v1*) wl_registry_bind(registry, name,
&zwlr_screencopy_manager_v1_interface, 3);
}
else if (strcmp(interface, zxdg_output_manager_v1_interface.name) == 0)
{
xdg_output_manager = (zxdg_output_manager_v1*) wl_registry_bind(registry, name,
&zxdg_output_manager_v1_interface, 2); // version 2 for name & description, if available
}
else if (strcmp(interface, zwp_linux_dmabuf_v1_interface.name) == 0)
{
dmabuf = (zwp_linux_dmabuf_v1*) wl_registry_bind(registry, name,
&zwp_linux_dmabuf_v1_interface, 3);
}
else if (strcmp(interface, wl_drm_interface.name) == 0)
{
drm = (wl_drm*) wl_registry_bind(registry, name, &wl_drm_interface, 1);
wl_drm_add_listener(drm, &drm_listener, NULL);
}
}
static void handle_global_remove(void*, struct wl_registry *, uint32_t) {
// Who cares?
}
static const struct wl_registry_listener registry_listener = {
.global = handle_global,
.global_remove = handle_global_remove,
};
static uint64_t timespec_to_usec (const timespec& ts)
{
return ts.tv_sec * 1000000ll + 1ll * ts.tv_nsec / 1000ll;
}
static InputFormat get_input_format(wf_buffer& buffer)
{
if (use_dmabuf && !use_hwupload) {
return INPUT_FORMAT_DMABUF;
}
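/* wl_shm formats are little-endian: e.g. XRGB8888 is laid out in memory as
 * B,G,R,X, which corresponds to FFmpeg's BGR0. */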
switch (buffer.format) {
case WL_SHM_FORMAT_ARGB8888:
case WL_SHM_FORMAT_XRGB8888:
return INPUT_FORMAT_BGR0;
case WL_SHM_FORMAT_XBGR8888:
case WL_SHM_FORMAT_ABGR8888:
return INPUT_FORMAT_RGB0;
case WL_SHM_FORMAT_BGR888:
return INPUT_FORMAT_BGR8;
case WL_SHM_FORMAT_RGB565:
return INPUT_FORMAT_RGB565;
case WL_SHM_FORMAT_BGR565:
return INPUT_FORMAT_BGR565;
case WL_SHM_FORMAT_ARGB2101010:
case WL_SHM_FORMAT_XRGB2101010:
return INPUT_FORMAT_X2RGB10;
case WL_SHM_FORMAT_ABGR2101010:
case WL_SHM_FORMAT_XBGR2101010:
return INPUT_FORMAT_X2BGR10;
case WL_SHM_FORMAT_ABGR16161616:
case WL_SHM_FORMAT_XBGR16161616:
return INPUT_FORMAT_RGBX64;
case WL_SHM_FORMAT_ARGB16161616:
case WL_SHM_FORMAT_XRGB16161616:
return INPUT_FORMAT_BGRX64;
case WL_SHM_FORMAT_ABGR16161616F:
case WL_SHM_FORMAT_XBGR16161616F:
return INPUT_FORMAT_RGBX64F;
default:
fprintf(stderr, "Unsupported buffer format %d, exiting.\n", buffer.format);
std::exit(EXIT_FAILURE);
}
}
static void write_loop(FrameWriterParams params)
{
/* Block SIGTERM/SIGINT/SIGHUP in this thread; the main loop handles them and sets exit_main_loop */
sigset_t sigset;
sigemptyset(&sigset);
for (auto signo : GRACEFUL_TERMINATION_SIGNALS)
{
sigaddset(&sigset, signo);
}
pthread_sigmask(SIG_BLOCK, &sigset, NULL);
#ifdef HAVE_PULSE
std::unique_ptr<PulseReader> pr;
#endif
while(!exit_main_loop)
{
// wait for frame to become available
while (!buffers.encode().ready_encode() && !exit_main_loop) {
std::this_thread::sleep_for(std::chrono::microseconds(1000));
}
if (exit_main_loop) {
break;
}
auto& buffer = buffers.encode();
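/* Same two-stage lock handoff as in FrameWriter::finish_frame: a thread
 * already waiting on frame_writer_mutex is guaranteed to acquire it next,
 * so this loop cannot starve it. */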
frame_writer_pending_mutex.lock();
frame_writer_mutex.lock();
frame_writer_pending_mutex.unlock();
if (!frame_writer)
{
/* This is the first time buffer attributes are available */
params.format = get_input_format(buffer);
params.drm_format = buffer.drm_format;
params.width = buffer.width;
params.height = buffer.height;
params.stride = buffer.stride;
frame_writer = std::unique_ptr<FrameWriter> (new FrameWriter(params));
#ifdef HAVE_PULSE
if (params.enable_audio)
{
pulseParams.audio_frame_size = frame_writer->get_audio_buffer_size();
pulseParams.sample_rate = params.sample_rate;
pr = std::unique_ptr<PulseReader> (new PulseReader(pulseParams));
pr->start();
}
#endif
}
bool do_cont = false;
if (use_dmabuf) {
if (use_hwupload) {
uint32_t stride = 0;
void *map_data = NULL;
void *data = gbm_bo_map(buffer.bo, 0, 0, buffer.width, buffer.height,
GBM_BO_TRANSFER_READ, &stride, &map_data);
if (!data) {
std::cerr << "Failed to map bo" << std::endl;
break;
}
do_cont = frame_writer->add_frame((unsigned char*)data,
buffer.base_usec, buffer.y_invert);
gbm_bo_unmap(buffer.bo, map_data);
} else {
do_cont = frame_writer->add_frame(buffer.bo,
buffer.base_usec, buffer.y_invert);
}
} else {
do_cont = frame_writer->add_frame((unsigned char*)buffer.data,
buffer.base_usec, buffer.y_invert);
}
frame_writer_mutex.unlock();
if (!do_cont) {
break;
}
buffers.next_encode();
}
std::lock_guard<std::mutex> lock(frame_writer_mutex);
/* Free the PulseReader connection first, so that it flushes any remaining
* audio frames to the FrameWriter before the writer is destroyed */
#ifdef HAVE_PULSE
pr = nullptr;
#endif
frame_writer = nullptr;
}
void handle_graceful_termination(int)
{
exit_main_loop = true;
}
static bool user_specified_overwrite(std::string filename)
{
struct stat buffer;
if (stat (filename.c_str(), &buffer) == 0 && !S_ISCHR(buffer.st_mode))
{
std::string input;
std::cout << "Output file \"" << filename << "\" exists. Overwrite? Y/n: ";
std::getline(std::cin, input);
if (input.size() && input[0] != 'Y' && input[0] != 'y')
{
std::cout << "Use -f to specify a different file name." << std::endl;
return false;
}
}
return true;
}
static void check_has_protos()
{
if (shm == NULL) {
fprintf(stderr, "compositor is missing wl_shm\n");
exit(EXIT_FAILURE);
}
if (screencopy_manager == NULL) {
fprintf(stderr, "compositor doesn't support wlr-screencopy-unstable-v1\n");
exit(EXIT_FAILURE);
}
if (xdg_output_manager == NULL)
{
fprintf(stderr, "compositor doesn't support xdg-output-unstable-v1\n");
exit(EXIT_FAILURE);
}
if (use_dmabuf && dmabuf == NULL) {
fprintf(stderr, "compositor doesn't support linux-dmabuf-unstable-v1\n");
exit(EXIT_FAILURE);
}
if (available_outputs.empty())
{
fprintf(stderr, "no outputs available\n");
exit(EXIT_FAILURE);
}
}
wl_display *display = NULL;
static void sync_wayland()
{
wl_display_dispatch(display);
wl_display_roundtrip(display);
}
static void load_output_info()
{
for (auto& wo : available_outputs)
{
wo.zxdg_output = zxdg_output_manager_v1_get_xdg_output(
xdg_output_manager, wo.output);
zxdg_output_v1_add_listener(wo.zxdg_output,
&xdg_output_implementation, NULL);
}
sync_wayland();
}
static wf_recorder_output* choose_interactive()
{
fprintf(stdout, "Please select an output from the list to capture (enter output no.):\n");
int i = 1;
for (auto& wo : available_outputs)
{
printf("%d. Name: %s Description: %s\n", i++, wo.name.c_str(),
wo.description.c_str());
}
printf("Enter output no.:");
fflush(stdout);
int choice;
if (scanf("%d", &choice) != 1 || choice > (int)available_outputs.size() || choice <= 0)
return nullptr;
auto it = available_outputs.begin();
std::advance(it, choice - 1);
return &*it;
}
struct capture_region
{
int32_t x, y;
int32_t width, height;
capture_region()
: capture_region(0, 0, 0, 0) {}
capture_region(int32_t _x, int32_t _y, int32_t _width, int32_t _height)
: x(_x), y(_y), width(_width), height(_height) { }
void set_from_string(std::string geometry_string)
{
if (sscanf(geometry_string.c_str(), "%d,%d %dx%d", &x, &y, &width, &height) != 4)
{
fprintf(stderr, "Bad geometry: %s, capturing whole output instead.\n",
geometry_string.c_str());
x = y = width = height = 0;
return;
}
}
bool is_selected()
{
return width > 0 && height > 0;
}
bool contained_in(const capture_region& output) const
{
return
output.x <= x &&
output.x + output.width >= x + width &&
output.y <= y &&
output.y + output.height >= y + height;
}
};
static wf_recorder_output* detect_output_from_region(const capture_region& region)
{
for (auto& wo : available_outputs)
{
const capture_region output_region{wo.x, wo.y, wo.width, wo.height};
if (region.contained_in(output_region))
{
std::cout << "Detected output based on geometry: " << wo.name << std::endl;
return &wo;
}
}
std::cerr << "Failed to detect output based on geometry (is your geometry overlapping outputs?)" << std::endl;
return nullptr;
}
static void help()
{
printf(R"(Usage: wf-recorder [OPTION]... -f [FILE]...
Screen recording of wlroots-based compositors
With no FILE, start recording the current screen.
Use Ctrl+C to stop.)");
#ifdef HAVE_PULSE
printf(R"(
-a, --audio[=DEVICE] Starts recording the screen with audio.
[=DEVICE] argument is optional.
To capture audio from a specific PulseAudio device, pass the name of
that device. You can find your device by running: pactl list sources | grep Name
Specify the device like this: -a<device> or --audio=<device>)");
#endif
printf(R"(
-c, --codec Specifies the codec of the video. These can be found by using:
ffmpeg -encoders
To modify codec parameters, use -p <option_name>=<option_value>
-r, --framerate Changes framerate to constant framerate with a given value.
-d, --device Selects the device to use when encoding the video
Some drivers report support for rgb0 data for vaapi input but
really only support yuv.
--no-dmabuf By default, wf-recorder will try to use only GPU buffers and copies if
using a GPU encoder. However, this can cause issues on some systems. In such
cases, this option will disable the GPU copy and force a CPU one.
-D, --no-damage By default, wf-recorder will request a new frame from the compositor
only when the screen updates. This results in a much smaller output
file, which however has a variable refresh rate. When this option is
on, wf-recorder does not use this optimization and continuously
records new frames, even if there are no updates on the screen.
-f <filename>.ext By using the -f option the output file will be named
filename.ext and the file format will be determined by the
provided extension .ext. If the extension .ext is not
recognized by your FFmpeg muxers, the command will fail.
You can check the muxers that your FFmpeg installation supports by
running: ffmpeg -muxers
-m, --muxer Set the output format to a specific muxer instead of detecting it
from the filename.
-x, --pixel-format Set the output pixel format. These can be found by running:
ffmpeg -pix_fmts
-g, --geometry Selects a specific part of the screen. The format is "x,y WxH".
-h, --help Prints this help screen.
-v, --version Prints the version of wf-recorder.
-l, --log Generates a log on the current terminal. Debug purposes.
-o, --output Specify the output where the video is to be recorded.
-p, --codec-param Change the codec parameters.
-p <option_name>=<option_value>
-F, --filter Specify the ffmpeg filter string to use. For example,
-F scale_vaapi=format=nv12 is used for VAAPI.
-b, --bframes This option is used to set the maximum number of b-frames to be used.
If b-frames are not supported by your hardware, set this to 0.
-B, --buffrate This option is used to specify the buffers' expected framerate.
This may help when encoders expect a specific or limited framerate.
-C, --audio-codec Specifies the codec of the audio. These can be found by running:
ffmpeg -encoders
To modify codec parameters, use -P <option_name>=<option_value>
-X, --sample-format Set the output audio sample format. These can be found by running:
ffmpeg -sample_fmts
-R, --sample-rate Changes the audio sample rate in Hz. The default value is 48000.
-P, --audio-codec-param Change the audio codec parameters.
-P <option_name>=<option_value>
Examples:)");
#ifdef HAVE_PULSE
printf(R"(
Video Only:)");
#endif
printf(R"(
- wf-recorder Records the video. Use Ctrl+C to stop recording.
The video file will be stored as recording.mp4 in the
current working directory.
- wf-recorder -f <filename>.ext Records the video. Use Ctrl+C to stop recording.
The video file will be stored as <filename>.ext in the
current working directory.)");
#ifdef HAVE_PULSE
printf(R"(
Video and Audio:
- wf-recorder -a Records the video and audio. Use Ctrl+C to stop recording.
The video file will be stored as recording.mp4 in the
current working directory.
- wf-recorder -a -f <filename>.ext Records the video and audio. Use Ctrl+C to stop recording.
The video file will be stored as <filename>.ext in the
current working directory.)");
#endif
printf(R"(
)" "\n");
exit(EXIT_SUCCESS);
}
capture_region selected_region{};
wf_recorder_output *chosen_output = nullptr;
zwlr_screencopy_frame_v1 *frame = NULL;
void request_next_frame()
{
if (frame != NULL)
{
zwlr_screencopy_frame_v1_destroy(frame);
}
/* Capture the whole output if the user hasn't provided a good geometry */
if (!selected_region.is_selected())
{
frame = zwlr_screencopy_manager_v1_capture_output(
screencopy_manager, 1, chosen_output->output);
} else
{
frame = zwlr_screencopy_manager_v1_capture_output_region(
screencopy_manager, 1, chosen_output->output,
selected_region.x - chosen_output->x,
selected_region.y - chosen_output->y,
selected_region.width, selected_region.height);
}
zwlr_screencopy_frame_v1_add_listener(frame, &frame_listener, NULL);
}
static void parse_codec_opts(std::map<std::string, std::string>& options, const std::string param)
{
size_t pos;
pos = param.find("=");
if (pos != std::string::npos && pos != param.length() -1)
{
auto optname = param.substr(0, pos);
auto optvalue = param.substr(pos + 1, param.length() - pos - 1);
options.insert(std::pair<std::string, std::string>(optname, optvalue));
} else
{
std::cerr << "Invalid codec option " + param << std::endl;
}
}
int main(int argc, char *argv[])
{
FrameWriterParams params = FrameWriterParams(exit_main_loop);
params.file = "recording." + std::string(DEFAULT_CONTAINER_FORMAT);
params.codec = DEFAULT_CODEC;
params.audio_codec = DEFAULT_AUDIO_CODEC;
params.sample_rate = DEFAULT_AUDIO_SAMPLE_RATE;
params.enable_ffmpeg_debug_output = false;
params.enable_audio = false;
params.bframes = -1;
constexpr const char* default_cmdline_output = "interactive";
std::string cmdline_output = default_cmdline_output;
bool force_no_dmabuf = false;
struct option opts[] = {
{ "output", required_argument, NULL, 'o' },
{ "file", required_argument, NULL, 'f' },
{ "muxer", required_argument, NULL, 'm' },
{ "geometry", required_argument, NULL, 'g' },
{ "codec", required_argument, NULL, 'c' },
{ "codec-param", required_argument, NULL, 'p' },
{ "framerate", required_argument, NULL, 'r' },
{ "pixel-format", required_argument, NULL, 'x' },
{ "audio-codec", required_argument, NULL, 'C' },
{ "audio-codec-param", required_argument, NULL, 'P' },
{ "sample-rate", required_argument, NULL, 'R' },
{ "sample-format", required_argument, NULL, 'X' },
{ "device", required_argument, NULL, 'd' },
{ "no-dmabuf", no_argument, NULL, '&' },
{ "filter", required_argument, NULL, 'F' },
{ "log", no_argument, NULL, 'l' },
{ "audio", optional_argument, NULL, 'a' },
{ "help", no_argument, NULL, 'h' },
{ "bframes", required_argument, NULL, 'b' },
{ "buffrate", required_argument, NULL, 'B' },
{ "version", no_argument, NULL, 'v' },
{ "no-damage", no_argument, NULL, 'D' },
{ 0, 0, NULL, 0 }
};
int c, i;
while((c = getopt_long(argc, argv, "o:f:m:g:c:p:r:x:C:P:R:X:d:b:B:la::hvDF:", opts, &i)) != -1)
{
switch(c)
{
case 'f':
params.file = optarg;
break;
case 'F':
params.video_filter = optarg;
break;
case 'o':
cmdline_output = optarg;
break;
case 'm':
params.muxer = optarg;
break;
case 'g':
selected_region.set_from_string(optarg);
break;
case 'c':
params.codec = optarg;
break;
case 'r':
params.framerate = atoi(optarg);
break;
case 'x':
params.pix_fmt = optarg;
break;
case 'C':
params.audio_codec = optarg;
break;
case 'R':
params.sample_rate = atoi(optarg);
break;
case 'X':
params.sample_fmt = optarg;
break;
case 'd':
params.hw_device = optarg;
break;
case 'b':
params.bframes = atoi(optarg);
break;
case 'B':
params.buffrate = atoi(optarg);
break;
case 'l':
params.enable_ffmpeg_debug_output = true;
break;
case 'a':
#ifdef HAVE_PULSE
params.enable_audio = true;
pulseParams.audio_source = optarg ? strdup(optarg) : NULL;
#else
std::cerr << "Cannot record audio. Built without pulse support." << std::endl;
return EXIT_FAILURE;
#endif
break;
case 'h':
help();
break;
case 'p':
parse_codec_opts(params.codec_options, optarg);
break;
case 'v':
printf("wf-recorder %s\n", WFRECORDER_VERSION);
return 0;
case 'D':
use_damage = false;
break;
case 'P':
parse_codec_opts(params.audio_codec_options, optarg);
break;
case '&':
force_no_dmabuf = true;
break;
default:
fprintf(stderr, "Unsupported command line argument %s\n", optarg ? optarg : "");
}
}
if (!user_specified_overwrite(params.file))
{
return EXIT_FAILURE;
}
display = wl_display_connect(NULL);
if (display == NULL)
{
fprintf(stderr, "failed to create display: %m\n");
return EXIT_FAILURE;
}
struct wl_registry *registry = wl_display_get_registry(display);
wl_registry_add_listener(registry, &registry_listener, NULL);
sync_wayland();
if (params.codec.find("vaapi") != std::string::npos)
{
std::cout << "using VA-API, trying to enable DMA-BUF capture..." << std::endl;
// try compositor device if not explicitly set
if (params.hw_device.empty())
{
params.hw_device = drm_device_name;
}
// check we use same device as compositor
if (!params.hw_device.empty() && params.hw_device == drm_device_name && !force_no_dmabuf)
{
use_dmabuf = true;
} else if (force_no_dmabuf) {
std::cout << "Disabling DMA-BUF as requested on command line" << std::endl;
} else {
std::cout << "compositor running on different device, disabling DMA-BUF" << std::endl;
}
// region with dmabuf not implemented in wlroots
if (selected_region.is_selected())
{
use_dmabuf = false;
std::cout << "region capture not supported with DMA-BUF" << std::endl;
}
if (params.video_filter == "null")
{
params.video_filter = "scale_vaapi=format=nv12:out_range=full";
if (!use_dmabuf)
{
params.video_filter.insert(0, "hwupload,");
}
}
if (use_dmabuf)
{
std::cout << "enabled DMA-BUF capture, device " << params.hw_device.c_str() << std::endl;
drm_fd = open(params.hw_device.c_str(), O_RDWR);
if (drm_fd < 0)
{
fprintf(stderr, "failed to open drm device: %m\n");
return EXIT_FAILURE;
}
gbm_device = gbm_create_device(drm_fd);
if (gbm_device == NULL)
{
fprintf(stderr, "failed to create gbm device: %m\n");
return EXIT_FAILURE;
}
use_hwupload = params.video_filter.find("hwupload") != std::string::npos;
}
}
check_has_protos();
load_output_info();
if (available_outputs.size() == 1)
{
chosen_output = &available_outputs.front();
if (chosen_output->name != cmdline_output &&
cmdline_output != default_cmdline_output)
{
std::cerr << "Couldn't find requested output "
<< cmdline_output << std::endl;
return EXIT_FAILURE;
}
} else
{
for (auto& wo : available_outputs)
{
if (wo.name == cmdline_output)
chosen_output = &wo;
}
if (chosen_output == NULL)
{
if (cmdline_output != default_cmdline_output)
{
std::cerr << "Couldn't find requested output "
<< cmdline_output.c_str() << std::endl;
return EXIT_FAILURE;
}
if (selected_region.is_selected())
{
chosen_output = detect_output_from_region(selected_region);
}
else
{
chosen_output = choose_interactive();
}
}
}
if (chosen_output == nullptr)
{
fprintf(stderr, "Failed to select output, exiting\n");
return EXIT_FAILURE;
}
if (selected_region.is_selected())
{
if (!selected_region.contained_in({chosen_output->x, chosen_output->y,
chosen_output->width, chosen_output->height}))
{
fprintf(stderr, "Invalid region to capture: must be completely "
"inside the output\n");
selected_region = capture_region{};
}
}
printf("selected region %d,%d %dx%d\n", selected_region.x, selected_region.y, selected_region.width, selected_region.height);
timespec first_frame;
first_frame.tv_sec = -1;
bool spawned_thread = false;
std::thread writer_thread;
for (auto signo : GRACEFUL_TERMINATION_SIGNALS)
{
signal(signo, handle_graceful_termination);
}
while(!exit_main_loop)
{
// wait for a free buffer
while (!buffers.capture().ready_capture()) {
std::this_thread::sleep_for(std::chrono::microseconds(500));
}
buffer_copy_done = false;
request_next_frame();
while (!buffer_copy_done && !exit_main_loop && wl_display_dispatch(display) != -1) {
// This space is intentionally left blank
}
if (exit_main_loop) {
break;
}
auto& buffer = buffers.capture();
//std::cout << "first buffer at " << timespec_to_usec(get_ct()) / 1.0e6<< std::endl;
if (!spawned_thread)
{
writer_thread = std::thread([=] () {
write_loop(params);
});
spawned_thread = true;
}
if (first_frame.tv_sec == -1)
first_frame = buffer.presented;
buffer.base_usec = timespec_to_usec(buffer.presented)
- timespec_to_usec(first_frame);
buffers.next_capture();
}
if (writer_thread.joinable())
{
writer_thread.join();
}
for (size_t i = 0; i < buffers.size(); ++i)
{
auto buffer = buffers.at(i);
if (buffer && buffer->wl_buffer)
wl_buffer_destroy(buffer->wl_buffer);
}
if (gbm_device) {
gbm_device_destroy(gbm_device);
close(drm_fd);
}
return EXIT_SUCCESS;
}
07070100000015000081A400000000000000000000000164E5C8BE00000646000000000000000000000000000000000000002500000000wf-recorder-0.4.0+git0/src/pulse.cpp
#include "pulse.hpp"
#include "frame-writer.hpp"
#include <iostream>
#include <vector>
#include <cstring>
#include <thread>
PulseReader::PulseReader(PulseReaderParams _p)
: params(_p)
{
pa_channel_map map;
std::memset(&map, 0, sizeof(map));
pa_channel_map_init_stereo(&map);
pa_buffer_attr attr;
std::memset(&attr, 0xff, sizeof(attr)); // (uint32_t)-1 selects the PulseAudio defaults
attr.maxlength = params.audio_frame_size * 4;
attr.fragsize = params.audio_frame_size * 4;
pa_sample_spec sample_spec =
{
.format = PA_SAMPLE_FLOAT32LE,
.rate = params.sample_rate,
.channels = 2,
};
int perr;
std::cout << "Using PulseAudio device: " << (params.audio_source ? params.audio_source : "default") << std::endl;
pa = pa_simple_new(NULL, "wf-recorder3", PA_STREAM_RECORD, params.audio_source,
"wf-recorder3", &sample_spec, &map, &attr, &perr);
if (!pa)
{
std::cerr << "Failed to connect to PulseAudio: " << pa_strerror(perr)
<< "\nRecording won't have audio" << std::endl;
}
}
bool PulseReader::loop()
{
static std::vector<char> buffer;
buffer.resize(params.audio_frame_size);
int perr;
if (pa_simple_read(pa, buffer.data(), buffer.size(), &perr) < 0)
{
std::cerr << "Failed to read from PulseAudio stream: "
<< pa_strerror(perr) << std::endl;
return false;
}
frame_writer->add_audio(buffer.data());
return !exit_main_loop;
}
void PulseReader::start()
{
if (!pa)
return;
read_thread = std::thread([=] ()
{
while (loop());
});
}
PulseReader::~PulseReader()
{
if (pa)
read_thread.join();
}
07070100000016000081A400000000000000000000000164E5C8BE000001F3000000000000000000000000000000000000002500000000wf-recorder-0.4.0+git0/src/pulse.hpp
#ifndef PULSE_HPP
#define PULSE_HPP
#include <pulse/simple.h>
#include <pulse/error.h>
#include <thread>
struct PulseReaderParams
{
size_t audio_frame_size;
uint32_t sample_rate;
/* Can be NULL */
char *audio_source;
};
class PulseReader
{
PulseReaderParams params;
pa_simple *pa;
bool loop();
std::thread read_thread;
public:
PulseReader(PulseReaderParams params);
~PulseReader();
void start();
};
#endif /* end of include guard: PULSE_HPP */
07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!226 blocks