Linear Regression

These are some notes summarizing what I learned after taking Andrew Ng’s machine learning course on Coursera.


Regression is a technique to model relationships among variables. Typically, there’s one dependent variable y and one or many independent variables. This relationship is usually expressed as a regression function.

Linear regression, as the name suggests, models the relationship using a linear regression function. Depending on how many independent variables we have, we have simple linear regression with one independent variable and multivariate linear regression with more than one independent variable.

The hypothesis of linear regression can be described by the following equation,
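In the course’s notation (with the convention x0 = 1), the hypothesis is:

```latex
h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \cdots + \theta_n x_n = \theta^T x
```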


The x values are called features, and the theta values are the parameters. Given a set of training samples, we’ll need to choose theta to fit the training examples.

To measure how well we fit the training examples, we define the cost function of linear regression as below,
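With m training samples, the cost function described below is:

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
```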


m represents the number of training samples, h(x) is the predicted value, and y is the sample output value. The cost function measures the average squared error over all samples, divided by 2.

This is essentially an optimization problem where we need to choose parameter theta such that the cost defined by the cost function is minimized.

Over-fitting and Regularization

Fitting the regression parameters minimizes the error for the training samples; however, we can run into the problem of trying too hard, such that the regression function doesn’t generalize well, i.e., the hypothesis produces high error for inputs outside of the training set. This problem is known as overfitting.

Two commonly used techniques to address overfitting are reducing the number of features and regularization.

Regularization adds an additional term to the cost function to penalize large theta values, which tends to produce much smoother curves.
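With regularization parameter λ, the regularized cost function is:

```latex
J(\theta) = \frac{1}{2m} \left[ \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 + \lambda \sum_{j=1}^{n} \theta_j^2 \right]
```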


Note that by convention, the regularization term excludes the j=0 case, which is theta 0.

Given the hypothesis and its cost function, there are many ways to fit the parameter theta (i.e., solve the optimization problem), including conjugate gradient, BFGS, L-BFGS, etc. The most commonly used technique is Gradient Descent.

Gradient Descent

The idea of gradient descent is to start at some random values, evaluate the cost, and keep updating the theta values based on the function below to reduce the cost until we reach a minimum.
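In the course’s notation, every parameter is updated simultaneously as:

```latex
\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)
```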


The alpha is called the learning rate. It can be proven that if we choose a sufficiently small alpha value, the cost will converge to some minimum. However, we don’t want the alpha value to be too small in practice because convergence will take longer. Typically, we try out a range of alpha values (0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1) and plot the cost to see how fast it converges.
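To make the update loop concrete, below is a minimal sketch (my own illustration, not from the course materials) of batch gradient descent for simple linear regression; the class and method names, the sample data, and the learning rate are all made up:

```java
// Minimal batch gradient descent sketch for simple linear regression:
// fits h(x) = theta0 + theta1 * x to a few sample points.
public class GradientDescentDemo {
    public static double[] fit(double[] x, double[] y, double alpha, int iters) {
        double theta0 = 0, theta1 = 0;
        int m = x.length;
        for (int it = 0; it < iters; it++) {
            double grad0 = 0, grad1 = 0;
            for (int i = 0; i < m; i++) {
                double err = theta0 + theta1 * x[i] - y[i];
                grad0 += err;        // partial derivative w.r.t. theta0
                grad1 += err * x[i]; // partial derivative w.r.t. theta1
            }
            // simultaneous update of both parameters
            theta0 -= alpha * grad0 / m;
            theta1 -= alpha * grad1 / m;
        }
        return new double[] { theta0, theta1 };
    }

    public static void main(String[] args) {
        double[] x = { 0, 1, 2, 3 };
        double[] y = { 1, 3, 5, 7 }; // exactly y = 1 + 2x
        double[] theta = fit(x, y, 0.1, 5000);
        System.out.println(theta[0] + " " + theta[1]);
    }
}
```

With this toy data the parameters converge to approximately theta0 = 1 and theta1 = 2.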

For linear regression with regularization, the above equation is essentially the following,
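Writing out the partial derivatives of the regularized cost (theta 0 is updated without the regularization term):

```latex
\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)}

\theta_j := \theta_j - \alpha \left[ \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \right], \quad j = 1, \dots, n
```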

The second term can easily be rewritten as,
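Grouping the theta j terms, the update for j ≥ 1 becomes:

```latex
\theta_j := \theta_j \left( 1 - \alpha \frac{\lambda}{m} \right) - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
```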


Feature Scaling and Mean Normalization

When we do gradient descent, the values for different features normally differ in scale. For example, feature A may have values in the range [1, 10], while feature B varies within [-10000, 10000].

It’s good for the feature values to have similar scales and to be centered around 0 (i.e., to have a mean of approximately 0).

The former can be achieved using feature scaling: just divide every value of that feature by a number such that the range is approximately [-1, 1]. The latter is accomplished using mean normalization (this doesn’t apply to x0); we can usually use (x – mean) to achieve this.
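As a small illustration (my own sketch, not from the original post), the following combines both steps by subtracting the mean of a feature and dividing by its range:

```java
// Sketch of mean normalization plus feature scaling:
// each value becomes (x - mean) / range, giving values roughly in [-1, 1]
// centered around 0. Assumes the feature is not constant (range != 0).
public class FeatureScalingDemo {
    public static double[] normalize(double[] values) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
        for (double v : values) {
            min = Math.min(min, v);
            max = Math.max(max, v);
            sum += v;
        }
        double mean = sum / values.length;
        double range = max - min;
        double[] scaled = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            scaled[i] = (values[i] - mean) / range;
        }
        return scaled;
    }

    public static void main(String[] args) {
        // feature B from the example above, varying within [-10000, 10000]
        double[] b = FeatureScalingDemo.normalize(new double[] { -10000, 0, 10000 });
        System.out.println(b[0] + " " + b[1] + " " + b[2]);
    }
}
```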

The Normal Equation

Besides using optimization algorithms to fit theta iteratively, it turns out we can also compute the theta values directly in closed form.

Without regularization, the normal equation is as below,
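Here X is the design matrix (one training sample per row) and y is the vector of sample outputs:

```latex
\theta = (X^T X)^{-1} X^T y
```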


While this method doesn’t require choosing a learning rate or iterating, it is more computationally expensive as the number of features n gets large because of the matrix multiplication and inversion. In addition, the inverse may not even exist. This is typically due to redundant features (some features are not linearly independent) or having too many features and too few samples.

With regularization, the closed-form solution is the following,
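The matrix added to X transpose X is (n+1)×(n+1), with a 0 in the top-left entry (for theta 0) and 1s on the rest of the diagonal:

```latex
\theta = \left( X^T X + \lambda
\begin{bmatrix}
0 & & & \\
 & 1 & & \\
 & & \ddots & \\
 & & & 1
\end{bmatrix}
\right)^{-1} X^T y
```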


Note that the matrix to be inverted is always invertible as long as the regularization parameter is positive, even if the corresponding matrix in the equation without regularization is not.

OneClick Movie Maker Released

Here it is, OneClick Movie Maker is released. We can download it from Google Play at:

The idea of this app is to make video editing easy with just a single click. Well, maybe multiple clicks eventually. The screenshot below illustrates the idea.


Figure 1. OneClick Movie Maker Video Effects

As shown above, we can apply a video effect/filter to a video with a single click. The video will start playing with the effect automatically. Once we’re sure which effect/filter to apply, we click the top right button to trigger the video processing.

The processing is completely asynchronous. In fact, it runs in its own remote process. This means one can continue to play with the app without worrying about the video processing, which can take a long time if the video is big.

The app can handle multiple video edits easily. We can apply filters to as many videos as we want without waiting for the previous ones to finish. These video processing tasks will be queued and finished one by one. Better still, the app provides an interface for managing the pending tasks, as shown below.


Figure 2. OneClick Video Maker: manages current tasks

Once the videos are processed, they’ll appear in the processed page and the videos page, where we can play and share them.

I’ve made a demo video about the app. Hopefully it gives everyone a brief idea of how the app works.


The app is still in the beta stage; I’ll add more functions, including frame borders, more effects/filters, etc. Hope you’ll like it. 😉 The app can be downloaded from Google Play: OneClick Movie Maker.

How to Build ffmpeg with NDK r9

This is an updated version of a previous post, where we built ffmpeg 0.8 with Android NDK r5 and r6. This post gives instructions on how to build ffmpeg 2.0.1 with Android NDK r9.

0. Download Android NDK

The latest version of Android NDK can be downloaded at Android NDK website. At the time of writing, the newest version is NDK r9. Note that the website provides both current and legacy toolchains. We only need the current toolchain to compile ffmpeg.

After downloading the NDK, simply decompress the archive. Note that we’ll use $NDK to represent the root path of the decompressed NDK.

1. Download ffmpeg source code

The ffmpeg source code can be downloaded from the ffmpeg website. The latest stable release is 2.0.1. Download the source code and decompress it to the $NDK/sources folder. We’ll discuss the reason for doing this later.

2. Update configure file

Open ffmpeg-2.0.1/configure file with a text editor, and locate the following lines.

SLIBNAME_WITH_MAJOR='$(SLIBNAME).$(LIBMAJOR)'
LIB_INSTALL_EXTRA_CMD='$$(RANLIB) "$(LIBDIR)/$(LIBNAME)"'
SLIB_INSTALL_NAME='$(SLIBNAME_WITH_VERSION)'
SLIB_INSTALL_LINKS='$(SLIBNAME_WITH_MAJOR) $(SLIBNAME)'
These lines cause the ffmpeg shared libraries to be named with the version number appended after the .so suffix, which is not compatible with the Android build system. Therefore we’ll need to replace the above lines with the following lines.

SLIBNAME_WITH_MAJOR='$(SLIBPREF)$(FULLNAME)-$(LIBMAJOR)$(SLIBSUF)'
LIB_INSTALL_EXTRA_CMD='$$(RANLIB) "$(LIBDIR)/$(LIBNAME)"'
SLIB_INSTALL_NAME='$(SLIBNAME_WITH_MAJOR)'
SLIB_INSTALL_LINKS='$(SLIBNAME)'
3. Build ffmpeg

Copy the following text to a text editor and save it as

#!/bin/bash
NDK=$HOME/Desktop/adt/android-ndk-r9
SYSROOT=$NDK/platforms/android-9/arch-arm/
TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64

function build_one
{
./configure \
    --prefix=$PREFIX \
    --enable-shared \
    --disable-static \
    --disable-doc \
    --disable-ffmpeg \
    --disable-ffplay \
    --disable-ffprobe \
    --disable-ffserver \
    --disable-avdevice \
    --disable-symver \
    --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
    --target-os=linux \
    --arch=arm \
    --enable-cross-compile \
    --sysroot=$SYSROOT \
    --extra-cflags="-Os -fpic $ADDI_CFLAGS" \
    --extra-ldflags="$ADDI_LDFLAGS" \
    $ADDITIONAL_CONFIGURE_FLAG
make clean
make
make install
}

CPU=arm
PREFIX=$(pwd)/android/$CPU
ADDI_CFLAGS="-marm"
build_one

We disabled the static libraries and enabled the shared libraries. Note that the build script is not optimized for a particular CPU. One should refer to the ffmpeg documentation for detailed information about the available configure options.

Once the file is saved, make sure the script is executable with the command below,

sudo chmod +x

Then execute the script by the command,


4. Build Output

The build can take a while to finish depending on your computer speed. Once it’s done, you should be able to find a folder $NDK/sources/ffmpeg-2.0.1/android, which contains arm/lib and arm/include folders.

The arm/lib folder contains the shared libraries, while arm/include folder contains the header files for libavcodec, libavformat, libavfilter, libavutil, libswscale etc.

Note that the arm/lib folder contains both the library files and symbolic links to them. We can remove the symbolic links to avoid confusion.

5. Make ffmpeg Libraries available for Your Projects

Now we’ve compiled the ffmpeg libraries and are ready to use them. Android NDK allows us to reuse a compiled module through the import-module build command.

The reason we built our ffmpeg source code under the $NDK/sources folder is that the NDK build system automatically searches the directories under this path for external modules. To declare the ffmpeg libraries as reusable modules, we’ll need to add a file named $NDK/sources/ffmpeg-2.0.1/android/arm/ with the following content,

LOCAL_PATH:= $(call my-dir)


include $(CLEAR_VARS)

LOCAL_MODULE:= libavcodec





include $(CLEAR_VARS)

LOCAL_MODULE:= libavformat





include $(CLEAR_VARS)

LOCAL_MODULE:= libswscale





include $(CLEAR_VARS)

LOCAL_MODULE:= libavutil





include $(CLEAR_VARS)

LOCAL_MODULE:= libavfilter





include $(CLEAR_VARS)

LOCAL_MODULE:= libswresample




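Each module declaration above follows the same pattern. A complete declaration for libavcodec might look like the sketch below; the .so file name used here ( is an assumption, and should be checked against the actual files produced in the arm/lib folder:

```make
include $(CLEAR_VARS)
LOCAL_MODULE:= libavcodec
# adjust the file name to match the actual output in arm/lib
LOCAL_SRC_FILES:= lib/
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include
include $(PREBUILT_SHARED_LIBRARY)
```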
Below is an example of how we can use the libraries in an Android project’s jni/ file,

LOCAL_PATH := $(call my-dir)


include $(CLEAR_VARS)


LOCAL_MODULE    := tutorial03

LOCAL_SRC_FILES := tutorial03.c

LOCAL_LDLIBS := -llog -ljnigraphics -lz -landroid

LOCAL_SHARED_LIBRARIES := libavformat libavcodec libswscale libavutil



$(call import-module,ffmpeg-2.0.1/android/arm)

Note that we called import-module with the relative path to $NDK/sources for the build system to locate the reusable ffmpeg libraries.

For real examples of how to use the ffmpeg libraries in an Android app, please refer to my github repo android-ffmpeg-tutorial.

Android Animation Playback: Inline Animation with TextView

TextView allows us to attach and detach markup objects to a range of text using Spannable. We can use this to implement inline animation within TextView.

0. AnimatedImageSpan

We implement an AnimatedImageSpan, which extends DynamicDrawableSpan. The object accepts a set of image frames and keeps track of the frame to display. A handler and a runnable are implemented to update the frame to display.

Note that AnimatedImageSpan doesn’t update the screen by itself. It calls AnimatedImageUpdateHandler.updateFrame to refresh the entire TextView, which subsequently redraws the AnimatedImageSpan with the new image.

public class AnimatedImageSpan extends DynamicDrawableSpan {

    private AnimationAssetsSet mGifAssets;

    private int mCurrentFrameIdx;

    private Context mContext;


    private SimpleImageMemCache mImageCache;

    private AnimatedImageUpdateHandler mImageUpdater;


    private final Handler handler = new Handler();


    public AnimatedImageSpan(Context context) {

        mContext = context;
    }

    public void setImageCache(SimpleImageMemCache pImageCache) {

        mImageCache = pImageCache;
    }

    public void setAnimationAssets(AnimationAssetsSet pGifAssets) {

        mGifAssets = pGifAssets;
    }

    private Runnable mRunnable;

    private int mPlaybackTimes;

    private boolean mPlaying;

    public void playGif(final AnimationSettings pGifSettings, AnimatedImageUpdateHandler pListener) {

        mPlaying = true;

        mImageUpdater = pListener;

        mPlaybackTimes = 0;

        mRunnable = new Runnable() {

            public void run() {

                mCurrentFrameIdx = (mCurrentFrameIdx + 1)%mGifAssets.getNumOfFrames();

//                Logger.d(this, "current frame " + mCurrentFrameIdx);

                handler.postDelayed(this, pGifSettings.mDelay);

                if (null != mImageUpdater) {
//                    Logger.d(this, "update frame using listener " + mImageUpdater.getId());
                    mImageUpdater.updateFrame();
                }

                if (mCurrentFrameIdx == mGifAssets.getNumOfFrames() - 1) {
                    if (pGifSettings.mPlaybackTimes == 0) {
                        //repeat forever
                    } else {
                        ++mPlaybackTimes;
                        if (mPlaybackTimes == pGifSettings.mPlaybackTimes) {
                            stopRendering();
                        }
                    }
                }
            }
        };
        handler.postDelayed(mRunnable, pGifSettings.mDelay);
    }

    public boolean isPlaying() {

        return mPlaying;
    }

    public void stopRendering() {
        handler.removeCallbacks(mRunnable);
        mPlaying = false;
    }


    public Drawable getDrawable() {

        Bitmap bitmap = mImageCache.loadBitmap(mContext, mGifAssets.getGifFramePath(mCurrentFrameIdx));

        BitmapDrawable drawable = new BitmapDrawable(mContext.getResources(), bitmap);

        int width = drawable.getIntrinsicWidth();

        int height = drawable.getIntrinsicHeight();

        drawable.setBounds(0, 0, width > 0 ? width : 0, height > 0 ? height : 0);

        return drawable;
    }


    public void draw(Canvas canvas, CharSequence text, int start, int end, float x, int top, int y, int bottom, Paint paint) {

//        Logger.d(this, "draw " + mCurrentFrameIdx);

        Drawable b = getDrawable();


        int transY = bottom - b.getBounds().bottom;

        if (mVerticalAlignment == ALIGN_BASELINE) {

            transY -= paint.getFontMetricsInt().descent;
        }
        canvas.translate(x, transY);
        b.draw(canvas);
        canvas.restore();
    }
}

1. Detect Click Events

We want the animation to start playing when the area is clicked, so detecting clicks is necessary. We’ll extend ClickableSpan as below.

private static class AnimationClickableSpan extends ClickableSpan {

        AnimatedImageSpan mAnimatedImage;

        AnimationSettings mSettings;

        AnimatedImageUpdateHandler mHandler;

        AnimationClickableSpan(MyTextView pView, AnimatedImageSpan pSpan, AnimationSettings pSettings) {

            mAnimatedImage = pSpan;

            mSettings = pSettings;

            mHandler = new AnimatedImageUpdateHandler(pView);
        }

        public void onClick(View widget) {

            MyTextView view = (MyTextView) widget;

            if (mAnimatedImage.isPlaying()) {
                mAnimatedImage.stopRendering();
            } else {
                mAnimatedImage.playGif(mSettings, mHandler);
            }
        }
    }

When a click event is detected, we start the animation playback; if the span is clicked while the animation is playing, we stop the playback.

2. MyTextView

We glue everything together in the MyTextView as below.

public class MyTextView extends TextView {

    private SpannableStringBuilder mSb = new SpannableStringBuilder();

    private String dummyText = "dummy " + System.currentTimeMillis();

    private Context mContext;

    private SimpleImageMemCache mImageCache;

    private ArrayList<AnimatedImageSpan> mAnimatedImages = new ArrayList<AnimatedImageSpan>();


    public MyTextView(Context context) {
        super(context);
        mContext = context;
    }

    public MyTextView(Context context, AttributeSet attrs) {

        super(context, attrs);

        mContext = context;
    }

    public MyTextView(Context context, AttributeSet attrs, int defStyle) {

        super(context, attrs, defStyle);

        mContext = context;
    }

    public void setImageCache(SimpleImageMemCache pImageCache) {

        mImageCache = pImageCache;
    }


    protected void onDetachedFromWindow() {
        super.onDetachedFromWindow();
        Log.d(this.getClass().getName(), "onDetachedFromWindow ");
        for (AnimatedImageSpan ais : mAnimatedImages) {
            Log.d(this.getClass().getName(), "animation playing " + ais.isPlaying());
            if (ais.isPlaying()) {
                ais.stopRendering();
            }
        }
    }


    public void appendText(String pStr) {
        mSb.append(pStr);
    }

    public void appendAnimation(AnimationAssetsSet pAsset, AnimationSettings pSettings) {
        mSb.append(dummyText);
        AnimatedImageSpan ais = new AnimatedImageSpan(mContext);
        ais.setImageCache(mImageCache);
        ais.setAnimationAssets(pAsset);
        mSb.setSpan(ais, mSb.length() - dummyText.length(), mSb.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
        AnimationClickableSpan clickSpan = new AnimationClickableSpan(this, ais, pSettings);
        mSb.setSpan(clickSpan, mSb.length() - dummyText.length(), mSb.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
        mAnimatedImages.add(ais);
    }


    public void finishAddingContent() {
        setText(mSb);
        setMovementMethod(LinkMovementMethod.getInstance());
    }


    private static class AnimationClickableSpan extends ClickableSpan {

        AnimatedImageSpan mAnimatedImage;

        AnimationSettings mSettings;

        AnimatedImageUpdateHandler mHandler;

        AnimationClickableSpan(MyTextView pView, AnimatedImageSpan pSpan, AnimationSettings pSettings) {

            mAnimatedImage = pSpan;

            mSettings = pSettings;

            mHandler = new AnimatedImageUpdateHandler(pView);
        }

        public void onClick(View widget) {

            MyTextView view = (MyTextView) widget;

            if (mAnimatedImage.isPlaying()) {
                mAnimatedImage.stopRendering();
            } else {
                mAnimatedImage.playGif(mSettings, mHandler);
            }
        }
    }
}

3. The Complete Source Code

The complete source code can be found at my github repo.

Android Animation Playback: Frame Animation

Android provides the AnimationDrawable object to create a frame animation as a series of Drawable objects. The details can be found in the Drawable Animation guideline at

This post discusses a different approach, where we extend an ImageView and update the image displayed on it to render an animation.

0. Decode Animation Frame

Decoding frames is relatively computationally intensive, and therefore it’s better to do it in a separate thread. This can be implemented as in the code below.

class MyThread extends Thread {

    boolean mIsPlayingGif;

    AnimationSettings mGifSettings;

    MyThread(AnimationSettings pGifSettings) {

        mIsPlayingGif = true;

        mGifSettings = AnimationSettings.newCopy(pGifSettings);
    }

    public void run() {

        int repetitionCounter = 0;

        do {

            for (int i = 0; i < mGifAssets.getNumOfFrames(); ++i) {

                if (!mIsPlayingGif) {
                    break;
                }

                Log.d(this.getName(), FrameAnimationView.this.getWidth()

                        + ":" + FrameAnimationView.this.getHeight());

                switch (mLoadMethod) {

                case ASSETS:

                    mTmpBitmap = mImageCache.loadBitmap(mContext, mGifAssets.getGifFramePath(i));
                    break;
                case FILES:
                    mTmpBitmap = mImageCache.loadBitmap(mGifFrames.get(i));
                    break;
                case RESOURCES:
                    break;
                }

                try {
                    Thread.sleep(mGifSettings.mDelay);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            if (0 != mGifSettings.mPlaybackTimes) {
                ++repetitionCounter;
            }
        } while (mIsPlayingGif && repetitionCounter <= mGifSettings.mPlaybackTimes);
    }
}


1. Update Frame

Updating the ImageView must be done on the UI thread; therefore we need to post a Runnable to a Handler object. The Runnable is as below.

final Runnable mUpdateResults = new Runnable() {

    public void run() {

        if (mTmpBitmap != null && !mTmpBitmap.isRecycled()) {
            setImageBitmap(mTmpBitmap);
        }
    }
};


2. Test the Animation

To test the animation, we can declare FrameAnimationView in an XML layout as below.

<RelativeLayout xmlns:android=""








    tools:context=".FrameAnimationActivity" >





        android:layout_marginTop="100dp" />


Suppose we have all the frames of an animation under the assets/1/ folder; to test the animation, we can code an activity as below.

public class FrameAnimationActivity extends Activity {

    private FrameAnimationView mFrameAniView;

    private SimpleImageMemCache mImageCache;

    private Context mContext;


    public int convertDpToPixel(int dp) {

        Resources r = mContext.getResources();

        return Math.round(TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, dp, r.getDisplayMetrics()));
    }


    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mContext = this;


        mFrameAniView = (FrameAnimationView) this.findViewById(;

        mImageCache = new SimpleImageMemCache(0.20f, convertDpToPixel(320), convertDpToPixel(200));


        mFrameAniView.setAnimationAssets(new AnimationAssetsSet(this, "1"));




    public void onResume() {
        super.onResume();
        mFrameAniView.playGif(new AnimationSettings());
    }




    public void onPause() {






    public void onDestroy() {






3. The Complete Source Code

The complete source code can be found at github repo. The code includes a simple image memory cache and a few bitmap utilities to optimize image decoding.

Android Animation Playback: Display GIF Animation in WebView

Android doesn’t support GIF natively. However, there are a few different approaches to display GIF-like animation. In this and the subsequent posts, we’ll cover different approaches to display animation, including WebView and frame animation. We’ll start with GIF animation playback in WebView.

The idea of using WebView to display a GIF is to simply embed the GIF into a few lines of HTML code, as below.

<html>
<head>
    <style type='text/css'>body{margin:auto auto;text-align:center;} img{width:100%25;} </style>
</head>
<body>
    <img src="1.gif" width="100%" />
</body>
</html>

We can create a custom view which extends the WebView to display GIF animation as below.

public class GifWebView extends WebView {


    public GifWebView(Context context) {
        super(context);
    }

    public GifWebView(Context context, AttributeSet attrs) {

        super(context, attrs);
    }

    public void setGifAssetPath(String pPath) {

        String baseUrl = pPath.substring(0, pPath.lastIndexOf("/") + 1);

        String fileName = pPath.substring(pPath.lastIndexOf("/")+1);

        StringBuilder strBuilder = new StringBuilder();

        strBuilder.append("<html><head><style type='text/css'>body{margin:auto auto;text-align:center;} img{width:100%25;} </style>");
        strBuilder.append("</head><body>");
        strBuilder.append("<img src=\"" + fileName + "\" width=\"100%\" /></body></html>");

        String data = strBuilder.toString();

        Log.d(this.getClass().getName(), "data: " + data);

        Log.d(this.getClass().getName(), "base url: " + baseUrl);

        Log.d(this.getClass().getName(), "file name: " + fileName);

        loadDataWithBaseURL(baseUrl, data, "text/html", "utf-8", null);
    }
}

We can declare an XML layout with the custom view.

<RelativeLayout xmlns:android=""








    tools:context=".GifWebviewDisplayActivity" >









Suppose we have a GIF animation file 1.gif under the assets folder; we can display the GIF file in an activity as below.

public class GifWebviewDisplayActivity extends Activity {

    private GifWebView gifView;


    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        gifView = (GifWebView) findViewById(;




The complete source code can be found at github repo.


1. WebView Android doc:

Android Property Animation — ValueAnimator

This is the second post on Android Property Animation. Previous post can be found below.

Android Property Animation Overview.

ValueAnimator is the core of the Property Animation system. It provides a timing engine which calculates the animation values. It also allows us to get notified at every animation frame through ValueAnimator.AnimatorUpdateListener interface.

Animation Handler

All animations created by the Property Animation system share a single timing pulse, which is maintained by a custom static handler, AnimationHandler, in the ValueAnimator class. Below is the code extracted from the NineOldAndroids library’s ValueAnimator class.


/**
 * This custom, static handler handles the timing pulse that is shared by
 * all active animations. This approach ensures that the setting of animation
 * values will happen on the UI thread and that all animations will share
 * the same times for calculating their values, which makes synchronizing
 * animations possible.
 */

private static class AnimationHandler extends Handler {


    /**
     * There are only two messages that we care about: ANIMATION_START and
     * ANIMATION_FRAME. The START message is sent when an animation's start()
     * method is called. It cannot start synchronously when start() is called
     * because the call may be on the wrong thread, and it would also not be
     * synchronized with other animations because it would not start on a common
     * timing pulse. So each animation sends a START message to the handler, which
     * causes the handler to place the animation on the active animations queue and
     * start processing frames for that animation.
     * The FRAME message is the one that is sent over and over while there are any
     * active animations to process.
     */

    public void handleMessage(Message msg) {

        boolean callAgain = true;

        ArrayList<ValueAnimator> animations = sAnimations.get();

        ArrayList<ValueAnimator> delayedAnims = sDelayedAnims.get();

        switch (msg.what) {

            // TODO: should we avoid sending frame message when starting if we

            // were already running?

            case ANIMATION_START:

                ArrayList<ValueAnimator> pendingAnimations = sPendingAnimations.get();

                if (animations.size() > 0 || delayedAnims.size() > 0) {

                    callAgain = false;
                }
                // pendingAnims holds any animations that have requested to be started

                // We're going to clear sPendingAnimations, but starting animation may

                // cause more to be added to the pending list (for example, if one animation

                // starting triggers another starting). So we loop until sPendingAnimations

                // is empty.

                while (pendingAnimations.size() > 0) {

                    ArrayList<ValueAnimator> pendingCopy =

                            (ArrayList<ValueAnimator>) pendingAnimations.clone();
                    pendingAnimations.clear();
                    int count = pendingCopy.size();

                    for (int i = 0; i < count; ++i) {

                        ValueAnimator anim = pendingCopy.get(i);

                        // If the animation has a startDelay, place it on the delayed list

                        if (anim.mStartDelay == 0) {
                            anim.startAnimation();
                        } else {
                            delayedAnims.add(anim);
                        }
                    }
                }

                // fall through to process first frame of new animations

            case ANIMATION_FRAME:

                // currentTime holds the common time for all animations processed

                // during this frame

                long currentTime = AnimationUtils.currentAnimationTimeMillis();

                ArrayList<ValueAnimator> readyAnims = sReadyAnims.get();

                ArrayList<ValueAnimator> endingAnims = sEndingAnims.get();

                // First, process animations currently sitting on the delayed queue, adding

                // them to the active animations if they are ready

                int numDelayedAnims = delayedAnims.size();

                for (int i = 0; i < numDelayedAnims; ++i) {

                    ValueAnimator anim = delayedAnims.get(i);

                    if (anim.delayedAnimationFrame(currentTime)) {
                        readyAnims.add(anim);
                    }
                }
                int numReadyAnims = readyAnims.size();

                if (numReadyAnims > 0) {

                    for (int i = 0; i < numReadyAnims; ++i) {

                        ValueAnimator anim = readyAnims.get(i);
                        anim.startAnimation();
                        anim.mRunning = true;
                        delayedAnims.remove(anim);
                    }
                    readyAnims.clear();
                }
                // Now process all active animations. The return value from animationFrame()

                // tells the handler whether it should now be ended

                int numAnims = animations.size();

                int i = 0;

                while (i < numAnims) {

                    ValueAnimator anim = animations.get(i);

                    if (anim.animationFrame(currentTime)) {
                        endingAnims.add(anim);
                    }
                    if (animations.size() == numAnims) {
                        ++i;
                    } else {

                        // An animation might be canceled or ended by client code

                        // during the animation frame. Check to see if this happened by

                        // seeing whether the current index is the same as it was before

                        // calling animationFrame(). Another approach would be to copy

                        // animations to a temporary list and process that list instead,

                        // but that entails garbage and processing overhead that would

                        // be nice to avoid.

                        --numAnims;
                        endingAnims.remove(anim);
                    }
                }

                if (endingAnims.size() > 0) {

                    for (i = 0; i < endingAnims.size(); ++i) {
                        endingAnims.get(i).endAnimation();
                    }
                    endingAnims.clear();
                }

                // If there are still active or delayed animations, call the handler again

                // after the frameDelay

                if (callAgain && (!animations.isEmpty() || !delayedAnims.isEmpty())) {

                    sendEmptyMessageDelayed(ANIMATION_FRAME, Math.max(0, sFrameDelay -
                        (AnimationUtils.currentAnimationTimeMillis() - currentTime)));
                }
                break;
        }
    }
}


The handler only handles two types of messages: ANIMATION_START and ANIMATION_FRAME.

The ANIMATION_START message is generated when an animation starts; the handler receives the message and puts the animation into the delayed queue. Next, the handler decides whether animations from the delayed queue should be started immediately (added to the ready queue). The handler then processes all active animations, which include those in the ready queue.

The ANIMATION_FRAME message is sent by the handler itself to schedule the next animation frame update. The handler checks if any animation from the delayed queue should be started and then processes all active animations. (The code handling ANIMATION_FRAME is actually part of the code handling ANIMATION_START, since the START case falls through.)

The handler also checks whether each animation should be ended.

By using this static handler, all animations are synchronized on a frame-by-frame basis.

Using ValueAnimator

Animating a single value with ValueAnimator is straightforward. ValueAnimator provides static methods like ofInt and ofFloat to animate between integer and floating-point values. We will skip that and discuss how to animate multiple values.

Suppose we need to animate an ImageView of a red square from bottom left to top right as shown in the screenshots below.


Figure 1. Animation Bottom Left to Top Right

It is obvious that we need to update both the x and y coordinates. The naive approach is to create two animations and play them simultaneously, but it is not efficient. We can use PropertyValuesHolder to combine multiple values. Alternatively, we can use custom objects as animation values. We’ll explore both approaches.

We will use the layout as indicated by the xml file below. We want to move the ImageView inside the FrameLayout with id “container”.


<RelativeLayout xmlns:android=""




    tools:context=".MainActivity" >


































Using PropertyValuesHolder

PropertyValuesHolder contains information about a property and its values during animation. It is used with ValueAnimator or ObjectAnimator to animate multiple properties at the same time.

In order to support property animation on pre-3.0 Android devices, NineOldAndroids provides a class AnimatorProxy to wrap a view and handle drawing of animated views. We need to wrap the ImageView in our code as below.

ImageView mImageView = (ImageView) this.findViewById(;

mImageAnimatorProxy = AnimatorProxy.wrap(mImageView);

The code below creates an animation which animates two values representing the X and Y positions of the image view with respect to the frame layout.

PropertyValuesHolder widthPropertyHolder = PropertyValuesHolder.ofFloat("posX", mImageAnimatorProxy.getX(), container.getWidth() - mImageView.getWidth());

PropertyValuesHolder heightPropertyHolder = PropertyValuesHolder.ofFloat("posY", mImageAnimatorProxy.getY(), 0);

ValueAnimator mTranslationAnimator = ValueAnimator.ofPropertyValuesHolder(widthPropertyHolder, heightPropertyHolder);




We use two PropertyValuesHolder objects to animate two values named posX and posY. We then create the animation using ValueAnimator, register the update listener, set the duration, and start the animation. The listener interface is implemented by overriding the onAnimationUpdate callback function as below.


public void onAnimationUpdate(ValueAnimator arg0) {

    float posX = (Float) arg0.getAnimatedValue("posX");
    float posY = (Float) arg0.getAnimatedValue("posY");
    mImageAnimatorProxy.setX(posX);
    mImageAnimatorProxy.setY(posY);
}

We retrieve the animated values posX and posY at every animation frame and update the position of the ImageView accordingly.

Using Custom Objects as Animation Values

ValueAnimator can also be used to animate custom objects. We’ll need to specify a TypeEvaluator to tell the ValueAnimator how the intermediate values should be calculated.

In our example, we are moving the position of the ImageView, so we can create a custom object to represent the position of the ImageView and animate it. The code below creates a class Position to represent the position of the ImageView.

private class Position {

    private float posX;

    private float posY;

    public float getPosX() {
        return posX;
    }

    public float getPosY() {
        return posY;
    }

    Position(float pPosX, float pPosY) {
        posX = pPosX;
        posY = pPosY;
    }
}


We also provide a TypeEvaluator by implementing the android.animation.TypeEvaluator<T> interface.

private class PositionTypeEvaluator implements TypeEvaluator<Position> {

    @Override
    public Position evaluate(float fraction, Position startValue, Position endValue) {

        float posX = startValue.getPosX() + (endValue.getPosX() - startValue.getPosX()) * fraction;

        float posY = startValue.getPosY() + (endValue.getPosY() - startValue.getPosY()) * fraction;

        return new Position(posX, posY);
    }
}


At every animation frame, the callback function evaluate is triggered with three arguments: the elapsed fraction of the animation, the start value, and the end value. We can calculate what the current position should be from these three arguments and return it.
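As a quick sanity check, the linear interpolation formula used by the evaluator can be run in plain Java, outside of Android; the class name and values here are illustrative.

```java
public class EvaluateDemo {
    // Same formula as in PositionTypeEvaluator.evaluate, for a single coordinate.
    static float evaluate(float fraction, float start, float end) {
        return start + (end - start) * fraction;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(0.0f, 10f, 30f)); // start value: 10.0
        System.out.println(evaluate(0.5f, 10f, 30f)); // midpoint: 20.0
        System.out.println(evaluate(1.0f, 10f, 30f)); // end value: 30.0
    }
}
```

At fraction 0 the evaluator returns the start value, at fraction 1 the end value, and a straight line in between, which is exactly what a translation animation needs.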

With the custom object and TypeEvaluator in place, the animation can be easily created as below.

ValueAnimator mAnimator = ValueAnimator.ofObject(new PositionTypeEvaluator(), new Position(mOriX, mOriY), new Position(container.getWidth() - mImageView.getWidth(), 0));




Of course, we still need to implement the ValueAnimator.AnimatorUpdateListener interface.


@Override
public void onAnimationUpdate(ValueAnimator pAnimator) {

    Position currentPos = (Position) pAnimator.getAnimatedValue();

    mImageAnimatorProxy.setX(currentPos.getPosX()); // apply the animated position through the wrapper
    mImageAnimatorProxy.setY(currentPos.getPosY());
}


Note that we can also use ViewPropertyAnimator, which I'll cover in another post if I get the time.

You can get the full source code here:


Android ValueAnimator doc:

Android Property Animation Overview

Property animation was introduced in Android 3.0 Honeycomb. A whole new set of APIs comes with the android.animation package (view animation is exposed through the android.view.animation package), which makes animation in Android more flexible and powerful.

1. Limitation of View Animation System

The property animation system was introduced to tackle the limitations of the view animation system in pre-3.0 Android, as described below.

Firstly, we can only animate views. This is enough most of the time, since animation manipulates the GUI, which consists of views. However, when we want to animate non-view objects or properties, we are on our own. For example, if we have a custom view implemented by drawing a few drawables on a canvas and we want to animate the individual drawables, we won't be able to use the view animation system.

Secondly, the animation is constrained to position, size, rotation, etc. Properties like background color are not supported by the view animation system.

Thirdly, the view animation system only updates where the view is drawn, not the view itself. This can cause issues. For example, if we animate a button to move outside of the current screen, after the animation ends the button is no longer visible, but we can still click the button's original location to trigger its click event. This is because the actual location of the button has not changed, even though it is not drawn on the screen. In this case, we need to write additional code to make the button behave properly.

2. The android.animation package

The property animation system APIs are exposed by the android.animation package. The hierarchy of the main classes in the package is as below.

Fig 1. android.animation class hierarchy

ValueAnimator: it is the core of the property animation system. There are two steps to animate a property: calculating the animated values, and assigning those values to the property of the animated object. ValueAnimator handles the first step but not the second.

Fig 2. ValueAnimator class

The class allows us to specify a time interpolator to control the animation speed and a type evaluator to control how the animation values are calculated. We'll cover more about those in future posts.

ObjectAnimator: it’s a subclass of ValueAnimator, which handles both steps of the property animation for us. In other words, it updates the target property value when calculating the animated values.

AnimatorSet: it provides interfaces to group animations together. We can play animations together, sequentially, one after another, etc.

TimeAnimator: this class is introduced in API level 16 (Android 4.1 Jelly Bean). It provides a simple callback mechanism through the TimeAnimator.TimeListener interface. All animators in the system are synchronized by the Android system animation frame event, and TimeAnimator allows us to get notified of that event through the TimeListener interface. The method onTimeUpdate is triggered with the TimeAnimator instance, the total time elapsed since the animator started, and the time elapsed since the previous frame, both in milliseconds.

3. NineOldAndroids Library

NineOldAndroids is an amazing library which enables property animation all the way back to Android 1.0. Well, not everything in property animation, but most of it (e.g. layout animation is left out).

In order to support property animation, the library uses a wrapper class named AnimatorProxy to wrap a view. The wrapper class facilitates modification of post-3.0 view properties on pre-3.0 platforms and handles drawing the animated view properly.


1. NineOldAndroids:

2. Google Developer Blog, Animation in Honeycomb:

3. Android API Guides, Property Animation:

Android NDK Cookbook Published

Note: the Android NDK Cookbook ebook is available at a 40% discount with promotion code MREANC40 at Packt Publishing. The promotion code is valid until 15th June.

Finally, my first book, Android Native Development Kit (NDK) Cookbook, is published. I started writing the book in June 2012, and it was published on Mar 26, 2013. That's almost 10 months.

It took lots of effort to write a book. I wrote lots of sample code; I read a few books to understand certain topics better; the drafts were polished again and again. The reviewers and the editors from the publisher (Packt Publishing) also spent lots of time reviewing and editing the drafts.


The book consists of 11 chapters, divided into 4 parts. Chapters 1 to 3 cover the basics, including NDK environment setup, JNI, and NDK build and debug techniques. Chapters 4 to 7 introduce various Android NDK libraries, including OpenGL ES 1.x and 2.0, the native application library, the multithread API, jnigraphics, OpenSL ES, OpenMAX AL, etc. Chapters 8 and 9 talk about how to port existing libraries and applications to Android with NDK. Chapters 10 and 11 discuss how to write multimedia apps and games with Android NDK; these two chapters are available online only, to keep the page count within the expected number.

Chapter 8 is a sample chapter available online. One can take a look at the chapter here.

Record WAVE Audio on Android

This post discusses how to record raw audio (PCM) and save it to wave file on Android. If you’re not familiar with WAVE audio file format, please refer to a previous post, WAVE Audio File Format.

This post is a follow-up to Record PCM Audio on Android. The code and working principle are similar, so it is strongly suggested you read that post first.

A WAVE file stores PCM data with a 44-byte header in front. Recording WAVE audio is therefore equivalent to recording PCM audio and adding the 44-byte header.
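The 44-byte header layout can be sketched in plain Java, with no Android dependency; the sample rate, channel count, and bit depth below are illustrative values, not part of the recorder code.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WaveHeaderDemo {
    // Build the standard 44-byte WAVE/PCM header; dataSize is the PCM payload size in bytes.
    static byte[] buildHeader(int sampleRate, short channels, short bitsPerSample, int dataSize) {
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN); // WAVE is little-endian
        b.put("RIFF".getBytes());
        b.putInt(36 + dataSize);                             // RIFF chunk size = 36 + data size
        b.put("WAVE".getBytes());
        b.put("fmt ".getBytes());
        b.putInt(16);                                        // fmt sub-chunk size, 16 for PCM
        b.putShort((short) 1);                               // audio format, 1 for PCM
        b.putShort(channels);                                // number of channels
        b.putInt(sampleRate);                                // sample rate
        b.putInt(sampleRate * channels * bitsPerSample / 8); // byte rate
        b.putShort((short) (channels * bitsPerSample / 8));  // block align
        b.putShort(bitsPerSample);                           // bits per sample
        b.put("data".getBytes());
        b.putInt(dataSize);                                  // data sub-chunk size
        return b.array();
    }

    public static void main(String[] args) {
        byte[] h = buildHeader(44100, (short) 1, (short) 16, 1000);
        System.out.println(h.length); // 44
    }
}
```

Using a little-endian ByteBuffer makes the byte order explicit; the recorder code below achieves the same layout with RandomAccessFile plus explicit byte swapping.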

We use a RandomAccessFile to write the data. We first write the 44-byte header. Because some fields are not known until we finish recording, we simply write zeros for them, as shown below.

randomAccessWriter = new RandomAccessFile(filePath, "rw");

randomAccessWriter.setLength(0); // Set file length to 0, to prevent unexpected behavior in case the file already existed

randomAccessWriter.writeBytes("RIFF");

randomAccessWriter.writeInt(0); // Final file size not known yet, write 0

randomAccessWriter.writeBytes("WAVE");

randomAccessWriter.writeBytes("fmt ");

randomAccessWriter.writeInt(Integer.reverseBytes(16)); // Sub-chunk size, 16 for PCM

randomAccessWriter.writeShort(Short.reverseBytes((short) 1)); // AudioFormat, 1 for PCM

randomAccessWriter.writeShort(Short.reverseBytes(nChannels)); // Number of channels, 1 for mono, 2 for stereo

randomAccessWriter.writeInt(Integer.reverseBytes(sRate)); // Sample rate

randomAccessWriter.writeInt(Integer.reverseBytes(sRate*nChannels*mBitsPersample/8)); // Byte rate, SampleRate*NumberOfChannels*mBitsPersample/8

randomAccessWriter.writeShort(Short.reverseBytes((short)(nChannels*mBitsPersample/8))); // Block align, NumberOfChannels*mBitsPersample/8

randomAccessWriter.writeShort(Short.reverseBytes(mBitsPersample)); // Bits per sample

randomAccessWriter.writeBytes("data");

randomAccessWriter.writeInt(0); // Data chunk size not known yet, write 0
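Note the reverseBytes calls above: RandomAccessFile implements DataOutput, which writes multi-byte values in big-endian order, while the WAVE format requires little-endian, so each value must be byte-swapped before writing. A quick plain-Java check of what the swap does:

```java
public class EndianDemo {
    public static void main(String[] args) {
        // DataOutput would write 16 as the big-endian bytes 00 00 00 10.
        // Byte-swapping first makes it come out as 10 00 00 00 on disk, i.e. little-endian 16.
        int swapped = Integer.reverseBytes(16);
        System.out.println(Integer.toHexString(swapped)); // 10000000
    }
}
```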

We then write the PCM data. This is discussed in detail in the post Record PCM Audio on Android.

After the recording is done, we seek back to the header and update a few header fields, as shown below.


try {

    randomAccessWriter.seek(4); // Write size to RIFF header

    randomAccessWriter.writeInt(Integer.reverseBytes(36 + payloadSize));

    randomAccessWriter.seek(40); // Write size to Subchunk2Size field

    randomAccessWriter.writeInt(Integer.reverseBytes(payloadSize));

    randomAccessWriter.close();

} catch (IOException e) {

    Log.e(WavAudioRecorder.class.getName(), "I/O exception occurred while closing output file");

    state = State.ERROR;
}


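The write-zeros-then-patch pattern itself can be exercised in plain Java with a temporary file; the file name and payload size here are illustrative.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class SeekPatchDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("wave-demo", ".bin");
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.writeBytes("RIFF");
        raf.writeInt(0);                               // placeholder for the chunk size
        raf.seek(4);                                   // seek back to the placeholder
        raf.writeInt(Integer.reverseBytes(36 + 1000)); // patch with the real size, little-endian
        raf.seek(4);
        int stored = Integer.reverseBytes(raf.readInt()); // read it back, undoing the byte swap
        raf.close();
        f.delete();
        System.out.println(stored); // 1036
    }
}
```

This is the same sequence the recorder performs: write placeholder zeros at recording start, then seek to offsets 4 and 40 and patch the real sizes once the payload size is known.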
For the complete source code, one can refer to my github Android tutorial project.