Module documentation¶
VidStab class¶
- class vidstab.VidStab(kp_method='GFTT', processing_max_dim=inf, *args, **kwargs)¶
A class for stabilizing video files.
The VidStab class can be used to stabilize videos using functionality from OpenCV. Input video is read from file, put through the stabilization process, and written to an output file.
The process calculates optical flow (cv2.calcOpticalFlowPyrLK) from frame to frame using keypoints generated by the keypoint method specified by the user. The optical flow is used to generate frame-to-frame transformations (cv2.estimateRigidTransform). Transformations are then applied (cv2.warpAffine) to stabilize the video. This class is based on the work presented by Nghia Ho.
- Parameters
kp_method – String of the type of keypoint detector to use. Available options are ["GFTT", "BRISK", "DENSE", "FAST", "HARRIS", "MSER", "ORB", "STAR"]. ["SIFT", "SURF"] are additional non-free options that may be available depending on your build of OpenCV. The non-free detectors are not tested with this package.
processing_max_dim – Working with large frames can harm performance (especially in live video). Setting this parameter restricts frame size while processing; outputted frames remain the original size. For example:
- If an input frame shape is (200, 400, 3) and processing_max_dim is 100, the frame will be resized to (50, 100, 3) before processing.
- If an input frame shape is (400, 200, 3) and processing_max_dim is 100, the frame will be resized to (100, 50, 3) before processing.
- If an input frame shape is (50, 50, 3) and processing_max_dim is 100, the frame will be unchanged for processing.
args – Positional arguments for keypoint detector.
kwargs – Keyword arguments for keypoint detector.
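The resizing rule in the processing_max_dim examples above can be sketched as a small standalone helper (a hypothetical function for illustration; vidstab performs its resizing internally):

```python
def processing_shape(frame_shape, processing_max_dim):
    """Compute the working frame shape for a given processing_max_dim.

    Mirrors the examples above: the largest spatial dimension is
    capped at processing_max_dim and the other dimension is scaled
    proportionally; frames already small enough are left unchanged.
    """
    h, w = frame_shape[:2]
    largest = max(h, w)
    if largest <= processing_max_dim:
        return frame_shape  # no resizing needed
    scale = processing_max_dim / largest
    return (int(h * scale), int(w * scale)) + tuple(frame_shape[2:])
```

For instance, processing_shape((200, 400, 3), 100) gives (50, 100, 3), matching the first example above.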
- Variables
kp_method – a string naming the keypoint detector being used
processing_max_dim – max image dimension while processing transforms
kp_detector – the keypoint detector object being used
trajectory – a 2d numpy array showing the trajectory of the input video
smoothed_trajectory – a 2d numpy array showing the smoothed trajectory of the input video
transforms – a 2d numpy array storing the transformations used from frame to frame
- apply_transforms(input_path, output_path, output_fourcc='MJPG', border_type='black', border_size=0, layer_func=None, show_progress=True, playback=False)¶
Apply stored transforms to a video and save output to file.
Use the transforms generated by VidStab.gen_transforms or VidStab.stabilize in the stabilization process. This method is a wrapper for VidStab.stabilize with use_stored_transforms=True; it is included for backwards compatibility.
- Parameters
input_path – Path to input video to stabilize. Will be read with cv2.VideoCapture; see OpenCV documentation for more info.
output_path – Path to save stabilized video. Will be written with cv2.VideoWriter; see OpenCV documentation for more info.
output_fourcc – FourCC is a 4-byte code used to specify the video codec.
border_type – How to handle negative space created by stabilization translations/rotations. Options: ['black', 'reflect', 'replicate']
border_size – Size of border in output. Positive values will pad video equally on all sides, negative values will crop video equally on all sides, 'auto' will attempt to minimally pad to avoid cutting off portions of transformed frames.
layer_func – Function to layer frames in output. The function should accept 2 parameters: foreground & background. The current frame of video will be passed as foreground; the previous frame will be passed as background (after the first frame of output, the background will be the output of layer_func on the last iteration).
show_progress – Should a progress bar be displayed to console?
playback – Should a comparison of input/output video be played back during the process?
- Returns
Nothing is returned. Output of stabilization is written to output_path.
>>> from vidstab.VidStab import VidStab
>>> stabilizer = VidStab()
>>> stabilizer.gen_transforms(input_path='input_video.mov')
>>> stabilizer.apply_transforms(input_path='input_video.mov', output_path='stable_video.avi')
- gen_transforms(input_path, smoothing_window=30, show_progress=True)¶
Generate stabilizing transforms for a video.
This method will populate the following instance attributes: trajectory, smoothed_trajectory, & transforms. The resulting transforms can subsequently be used for video stabilization by using VidStab.apply_transforms or VidStab.stabilize with use_stored_transforms=True.
- Parameters
input_path – Path to input video to stabilize. Will be read with cv2.VideoCapture; see OpenCV documentation for more info.
smoothing_window – window size to use when smoothing trajectory
show_progress – Should a progress bar be displayed to console?
- Returns
Nothing; this method populates attributes of VidStab objects.
>>> from vidstab.VidStab import VidStab
>>> stabilizer = VidStab()
>>> stabilizer.gen_transforms(input_path='input_video.mov')
>>> stabilizer.apply_transforms(input_path='input_video.mov', output_path='stable_video.avi')
- plot_trajectory()¶
Plot video trajectory.
Create a plot of the video’s trajectory & smoothed trajectory. Separate subplots are used to show the x and y trajectory.
- Returns
tuple of matplotlib objects (Figure, (AxesSubplot, AxesSubplot))
>>> from vidstab import VidStab
>>> import matplotlib.pyplot as plt
>>> stabilizer = VidStab()
>>> stabilizer.gen_transforms(input_path='input_video.mov')
>>> stabilizer.plot_trajectory()
>>> plt.show()
- plot_transforms(radians=False)¶
Plot stabilizing transforms.
Create a plot of the transforms used to stabilize the input video. Plots x & y transforms (dx & dy) in a separate subplot from angle transforms (da).
- Parameters
radians – Should angle transforms be plotted in radians? If False, transforms are plotted in degrees.
- Returns
tuple of matplotlib objects (Figure, (AxesSubplot, AxesSubplot))
>>> from vidstab import VidStab
>>> import matplotlib.pyplot as plt
>>> stabilizer = VidStab()
>>> stabilizer.gen_transforms(input_path='input_video.mov')
>>> stabilizer.plot_transforms()
>>> plt.show()
- stabilize(input_path, output_path, smoothing_window=30, max_frames=inf, border_type='black', border_size=0, layer_func=None, playback=False, use_stored_transforms=False, show_progress=True, output_fourcc='MJPG')¶
Read video, perform stabilization, & write stabilized video to file.
- Parameters
input_path – Path to input video to stabilize. Will be read with cv2.VideoCapture; see OpenCV documentation for more info.
output_path – Path to save stabilized video. Will be written with cv2.VideoWriter; see OpenCV documentation for more info.
smoothing_window – window size to use when smoothing trajectory
max_frames – The maximum number of frames to stabilize/process.
border_type – How to handle negative space created by stabilization translations/rotations. Options: ['black', 'reflect', 'replicate']
border_size – Size of border in output. Positive values will pad video equally on all sides, negative values will crop video equally on all sides, 'auto' will attempt to minimally pad to avoid cutting off portions of transformed frames.
layer_func – Function to layer frames in output. The function should accept 2 parameters: foreground & background. The current frame of video will be passed as foreground; the previous frame will be passed as background (after the first frame of output, the background will be the output of layer_func on the last iteration).
use_stored_transforms – Should stored transforms from the last stabilization be used instead of recalculating them?
playback – Should a comparison of input/output video be played back during the process?
show_progress – Should a progress bar be displayed to console?
output_fourcc – FourCC is a 4-byte code used to specify the video codec. The list of available codes can be found at fourcc.org; see cv2.VideoWriter_fourcc documentation for more info.
- Returns
Nothing is returned. Output of stabilization is written to output_path.
>>> from vidstab.VidStab import VidStab
>>> stabilizer = VidStab()
>>> stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

>>> stabilizer = VidStab(kp_method='ORB')
>>> stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')
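Because layer_func only needs to accept a foreground and a background frame and return the combined frame, a custom layering function can be supplied in place of the bundled helpers. A minimal sketch (ghost_trail is a hypothetical function, not part of vidstab; frames are assumed to be numpy uint8 arrays, as OpenCV uses):

```python
import numpy as np

def ghost_trail(foreground, background):
    """Hypothetical layer_func: keep 75% of the current frame and
    25% of the accumulated background, producing a fading trail."""
    blended = (0.75 * foreground.astype(np.float64)
               + 0.25 * background.astype(np.float64))
    return blended.astype(foreground.dtype)
```

It would be passed like any other layer function, e.g. stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi', layer_func=ghost_trail).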
- stabilize_frame(input_frame, smoothing_window=30, border_type='black', border_size=0, layer_func=None, use_stored_transforms=False)¶
Stabilize a single frame of a video being iterated.
Perform video stabilization a single frame at a time. The outputted stabilized frame will be on a smoothing_window delay. When the number of frames processed is < smoothing_window, black frames will be returned. When the number of frames processed is >= smoothing_window, the stabilized frame from smoothing_window ago will be returned. When input_frame is None, stabilization will still be attempted; if there are no frames left to process then None will be returned.
- Parameters
input_frame – An OpenCV image (as numpy array) or None
smoothing_window – window size to use when smoothing trajectory
border_type – How to handle negative space created by stabilization translations/rotations. Options: ['black', 'reflect', 'replicate']
border_size – Size of border in output. Positive values will pad video equally on all sides, negative values will crop video equally on all sides, 'auto' will attempt to minimally pad to avoid cutting off portions of transformed frames.
layer_func – Function to layer frames in output. The function should accept 2 parameters: foreground & background. The current frame of video will be passed as foreground; the previous frame will be passed as background (after the first frame of output, the background will be the output of layer_func on the last iteration).
use_stored_transforms – Should stored transforms from the last stabilization be used instead of recalculating them?
- Returns
1 of 3 outputs will be returned:
- Case 1 - Stabilization process is still warming up
An all-black frame of the same shape as input_frame is returned. A minimum of smoothing_window frames need to be processed to perform stabilization. This behavior was based on cv2.bgsegm.createBackgroundSubtractorMOG().
- Case 2 - Stabilization process is warmed up and input_frame is not None
A stabilized frame is returned. This will not be the stabilized version of input_frame; stabilization is on a smoothing_window frame delay.
- Case 3 - Stabilization process is finished
None is returned.
>>> from vidstab.VidStab import VidStab
>>> import cv2
>>> stabilizer = VidStab()
>>> vidcap = cv2.VideoCapture('input_video.mov')
>>> while True:
>>>     grabbed_frame, frame = vidcap.read()
>>>     # Pass frame to stabilizer even if frame is None
>>>     # stabilized_frame will be an all black frame until iteration 30
>>>     stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
>>>                                                   smoothing_window=30)
>>>     if stabilized_frame is None:
>>>         # There are no more frames available to stabilize
>>>         break
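The warm-up and drain timing described above can be mimicked with a plain buffer, which is handy when pairing each stabilized frame back up with its original. DelayLine below is a pure-Python sketch of the queue behaviour only (hypothetical; it performs no actual stabilization):

```python
from collections import deque

BLACK = "black"  # stand-in for an all-black warm-up frame

class DelayLine:
    """Sketch of stabilize_frame's timing: output lags input by the
    smoothing window, emitting a black placeholder while warming up
    and None once all buffered frames have been drained."""

    def __init__(self, window):
        self.window = window
        self.buffer = deque()
        self.processed = 0

    def push(self, frame):
        if frame is not None:
            self.buffer.append(frame)
            self.processed += 1
            if self.processed < self.window:
                return BLACK  # still warming up
        # warmed up, or input has ended: emit the oldest buffered frame
        return self.buffer.popleft() if self.buffer else None
```

With window=3 and frames 1..5 followed by None inputs, the outputs are black, black, 1, 2, 3, 4, 5, None.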
Utility functions¶
- vidstab.download_ostrich_video(download_to_path)¶
Download the example shaky clip of an ostrich used in the README (mp4).
Video used with permission from the HappyLiving YouTube channel. Original video: https://www.youtube.com/watch?v=9pypPqbV_GM
- Parameters
download_to_path – path to save video to
- Returns
None
>>> from vidstab import VidStab, download_ostrich_video
>>> path = 'ostrich.mp4'
>>> download_ostrich_video(path)
>>>
>>> stabilizer = VidStab()
>>> stabilizer.stabilize(path, 'output_path.avi')
- vidstab.layer_blend(foreground, background, foreground_alpha=0.6)¶
Blend a foreground image over a background (wrapper for cv2.addWeighted).
- Parameters
foreground – image to be laid over top of background image
background – image to be overlaid with the foreground image
foreground_alpha – alpha to apply to foreground; (1 - foreground_alpha) applied to background
- Returns
combined image where foreground is laid over background with the given alpha
>>> from vidstab import VidStab, layer_overlay, layer_blend
>>>
>>> stabilizer = VidStab()
>>>
>>> stabilizer.stabilize(input_path='my_shaky_video.avi',
>>>                      output_path='stabilized_output.avi',
>>>                      border_size=100,
>>>                      layer_func=layer_blend)
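Since layer_blend wraps cv2.addWeighted, the combination it performs is a per-pixel weighted sum: foreground_alpha * foreground + (1 - foreground_alpha) * background. A numpy sketch of that arithmetic (blend is a hypothetical stand-in, not the vidstab implementation):

```python
import numpy as np

def blend(foreground, background, foreground_alpha=0.6):
    """Weighted per-pixel combination, as cv2.addWeighted computes:
    alpha * foreground + (1 - alpha) * background, saturated to uint8."""
    out = (foreground_alpha * foreground.astype(np.float64)
           + (1.0 - foreground_alpha) * background.astype(np.float64))
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```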
- vidstab.layer_overlay(foreground, background)¶
Put an image over the top of another.
Intended for use in the VidStab class to create a trail of previous frames in the stable video output.
- Parameters
foreground – image to be laid over top of background image
background – image to be overlaid with the foreground image
- Returns
combined image where foreground is laid over background
>>> from vidstab import VidStab, layer_overlay, layer_blend
>>>
>>> stabilizer = VidStab()
>>>
>>> stabilizer.stabilize(input_path='my_shaky_video.avi',
>>>                      output_path='stabilized_output.avi',
>>>                      border_size=100,
>>>                      layer_func=layer_overlay)