public class ProresEncoder extends VideoEncoder

Modifier and Type | Class and Description |
---|---|
static class | ProresEncoder.Profile |

Nested classes/interfaces inherited from class VideoEncoder: VideoEncoder.EncodedFrame

Modifier and Type | Field and Description |
---|---|
protected ProresEncoder.Profile | profile |

Constructor and Description |
---|
ProresEncoder(ProresEncoder.Profile profile, boolean interlaced) |

Modifier and Type | Method and Description |
---|---|
static ProresEncoder | createProresEncoder(String profile, boolean interlaced) |
VideoEncoder.EncodedFrame | encodeFrame(Picture pic, ByteBuffer buffer) Encode one video frame. |
protected void | encodePicture(ByteBuffer out, int[][] scaledLuma, int[][] scaledChroma, int[] scan, Picture picture, int vStep, int vOffset) |
protected int | encodeSlice(ByteBuffer out, int[][] scaledLuma, int[][] scaledChroma, int[] scan, int sliceMbCount, int mbX, int mbY, Picture result, int prevQp, int mbWidth, int mbHeight, boolean unsafe, int vStep, int vOffset) |
protected static void | encodeSliceData(ByteBuffer out, int[] qmatLuma, int[] qmatChroma, int[] scan, int sliceMbCount, int[][] ac, int qp, int[] sizes) |
int | estimateBufferSize(Picture frame) Estimate the output buffer size that will likely be needed by the current encoder instance to encode a given frame. |
static int | getLevel(int val) |
ColorSpace[] | getSupportedColorSpaces() Native color spaces of this video encoder. |
static void | writeCodeword(BitWriter writer, Codebook codebook, int val) |
static void | writeFrameHeader(ByteBuffer outp, ProresConsts.FrameHeader header) |
static void | writePictureHeader(int logDefaultSliceMbWidth, int nSlices, ByteBuffer out) |
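
The snippet below is a minimal usage sketch of the public API summarized above: create an encoder through the static factory, build a Picture in one of the encoder's native color spaces, size the output buffer with estimateBufferSize, and encode with encodeFrame. The package names, Picture.create, EncodedFrame.getData(), and the profile string "HQ" are assumptions drawn from typical JCodec usage rather than from this page.

```java
import java.nio.ByteBuffer;

import org.jcodec.codecs.prores.ProresEncoder;
import org.jcodec.common.VideoEncoder;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Picture;

public class ProresEncodeSketch {
    public static void main(String[] args) {
        // Progressive (interlaced = false) encoder from the static factory. The
        // accepted profile strings are an assumption; see ProresEncoder.Profile.
        ProresEncoder encoder = ProresEncoder.createProresEncoder("HQ", false);

        // An empty picture in one of the encoder's native color spaces
        // (Picture.create is assumed to exist with this signature).
        ColorSpace cs = encoder.getSupportedColorSpaces()[0];
        Picture pic = Picture.create(1920, 1080, cs);
        // ... fill pic with actual pixel data before encoding ...

        // Worst-case sized output buffer, then encode one frame.
        ByteBuffer out = ByteBuffer.allocate(encoder.estimateBufferSize(pic));
        VideoEncoder.EncodedFrame frame = encoder.encodeFrame(pic, out);

        // The returned value is a duplicate of 'out' whose position and limit
        // delimit the encoded frame (getData() is assumed here).
        System.out.println("Encoded " + frame.getData().remaining() + " bytes");
    }
}
```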

Field Detail

protected ProresEncoder.Profile profile

Constructor Detail

public ProresEncoder(ProresEncoder.Profile profile, boolean interlaced)

Method Detail
public static ProresEncoder createProresEncoder(String profile, boolean interlaced)
public static final int getLevel(int val)
protected int encodeSlice(ByteBuffer out, int[][] scaledLuma, int[][] scaledChroma, int[] scan, int sliceMbCount, int mbX, int mbY, Picture result, int prevQp, int mbWidth, int mbHeight, boolean unsafe, int vStep, int vOffset)
protected static final void encodeSliceData(ByteBuffer out, int[] qmatLuma, int[] qmatChroma, int[] scan, int sliceMbCount, int[][] ac, int qp, int[] sizes)
protected void encodePicture(ByteBuffer out, int[][] scaledLuma, int[][] scaledChroma, int[] scan, Picture picture, int vStep, int vOffset)
public static void writePictureHeader(int logDefaultSliceMbWidth, int nSlices, ByteBuffer out)
public VideoEncoder.EncodedFrame encodeFrame(Picture pic, ByteBuffer buffer)
Description copied from class: VideoEncoder
Encode one video frame.
Specified by:
encodeFrame in class VideoEncoder
Parameters:
pic - The video frame to be encoded. Must be in one of the encoder's native color spaces.
buffer - The buffer to store the encoded frame into. Note that only the storage of this buffer is used; the position and limit are kept untouched. Instead, the returned value contains a duplicate of this buffer with the position and limit set correctly to the boundaries of the encoded frame. This buffer must be large enough to hold the encoded frame; it is undefined what will happen if the buffer is not large enough. Most commonly some exception will be thrown.
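
As a sketch of the contract just described, the helper below (names are illustrative) encodes one frame and copies the payload out of the returned duplicate, leaving the caller-supplied buffer's position and limit untouched; EncodedFrame.getData() is assumed to expose the duplicated buffer.

```java
import java.nio.ByteBuffer;

import org.jcodec.codecs.prores.ProresEncoder;
import org.jcodec.common.VideoEncoder.EncodedFrame;
import org.jcodec.common.model.Picture;

final class BufferContractSketch {
    /** Encodes one frame and copies the encoded bytes out of the returned duplicate. */
    static byte[] encodeToBytes(ProresEncoder encoder, Picture pic) {
        ByteBuffer out = ByteBuffer.allocate(encoder.estimateBufferSize(pic));
        int positionBefore = out.position();

        EncodedFrame frame = encoder.encodeFrame(pic, out);

        // 'out' itself is not repositioned; only its backing storage was written to.
        assert out.position() == positionBefore;

        // The duplicate's position and limit delimit the encoded frame, so
        // remaining() is the encoded size (getData() is assumed here).
        ByteBuffer payload = frame.getData();
        byte[] bytes = new byte[payload.remaining()];
        payload.duplicate().get(bytes);
        return bytes;
    }
}
```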

public static void writeFrameHeader(ByteBuffer outp, ProresConsts.FrameHeader header)

public ColorSpace[] getSupportedColorSpaces()
Description copied from class: VideoEncoder
Native color spaces of this video encoder.
Specified by:
getSupportedColorSpaces in class VideoEncoder
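
A small sketch of how getSupportedColorSpaces can be used to validate input before encoding; Picture.getColor() is assumed to return the picture's ColorSpace.

```java
import org.jcodec.codecs.prores.ProresEncoder;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Picture;

final class ColorSpaceCheckSketch {
    /** True if the picture is in one of the encoder's native color spaces. */
    static boolean isNativelySupported(ProresEncoder encoder, Picture pic) {
        for (ColorSpace cs : encoder.getSupportedColorSpaces()) {
            if (cs == pic.getColor()) { // Picture.getColor() is assumed here
                return true;
            }
        }
        return false;
    }
}
```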

public int estimateBufferSize(Picture frame)
Description copied from class: VideoEncoder
Estimate the output buffer size that will likely be needed by the current encoder instance to encode a given frame.
Specified by:
estimateBufferSize in class VideoEncoder
Parameters:
frame - A frame in question.
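
Because encodeFrame only uses the buffer's storage and leaves its position and limit untouched, a single buffer sized with estimateBufferSize can plausibly be reused across frames of the same geometry. The sketch below assumes successive calls write into the same region of the reused buffer, so each payload is copied out before the next frame is encoded; getData() is again an assumption.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import org.jcodec.codecs.prores.ProresEncoder;
import org.jcodec.common.VideoEncoder.EncodedFrame;
import org.jcodec.common.model.Picture;

final class ReusedBufferSketch {
    /** Encodes equally sized frames into one reused, worst-case sized buffer. */
    static List<byte[]> encodeAll(ProresEncoder encoder, List<Picture> frames) {
        List<byte[]> results = new ArrayList<byte[]>();
        if (frames.isEmpty())
            return results;

        // One output buffer sized for the first frame and reused for the rest.
        ByteBuffer out = ByteBuffer.allocate(encoder.estimateBufferSize(frames.get(0)));

        for (Picture pic : frames) {
            EncodedFrame frame = encoder.encodeFrame(pic, out);
            // Copy the payload before the next call overwrites the shared storage
            // (an assumption; see the note above).
            ByteBuffer payload = frame.getData();
            byte[] copy = new byte[payload.remaining()];
            payload.duplicate().get(copy);
            results.add(copy);
        }
        return results;
    }
}
```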