Simple Network Diagnosis using Ping

Ping is a simple but powerful tool to test whether a host can reach another host on the network, and to measure the RTT (round trip time). It works by sending ICMP (Internet Control Message Protocol) echo request packets to the destination host and waiting for the response.

This post covers how to use ping to diagnose network connectivity problems. Follow the steps below.

0. ping 127.0.0.1

This pings your local host. 127.0.0.1 is referred to as the loopback address. You should get a response if your network card and TCP/IP stack work fine, even if the host is not connected to any network.

Note that the ICMP packets don’t actually reach the wire with this command. If you use a network capture tool (e.g. Wireshark) to capture the traffic, you probably won’t see any.

1. ping your default gateway

1.1. Use ipconfig (Windows) or ifconfig (Linux) to get the IP addresses of your host and your default gateway (e.g. 192.168.1.1).

1.2. Ping your gateway (e.g. ping 192.168.1.1).

If this works, it means the host can reach the default gateway, which is like the door to the Internet.

If you cannot get any response, check your wired/wireless connection and make sure it’s connected properly. If everything on your side is fine, the default gateway is probably down; get it running if you have control over it.

2. ping a public IP address (e.g. ping 8.8.8.8)

If this works, it means your host can reach a machine on the Internet. If it doesn’t work, your default gateway either cannot reach the Internet or doesn’t allow you to reach it.

Note that not all machines allow ICMP packets, so try a public IP address that is known to respond (e.g. 8.8.8.8, Google’s public DNS server).

3. ping a public domain (e.g. ping google.com)

If this works, it means your Internet connection is fine.

If it doesn’t work but step 2 works, it means your DNS server has some issues. You can change your DNS server. Two DNS servers provided by Google for public use are 8.8.8.8 and 8.8.4.4. Setting your host’s DNS server to one of them should resolve the issue.
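The four checks above can be scripted. Below is a rough sketch using Linux `ping` flags (on Windows, replace `-c` with `-n`); 192.168.1.1 is only a placeholder for your actual gateway address:

```python
import subprocess

def ping(host, count=2, timeout=10):
    """Return True if `host` answers ping (Linux flag syntax; Windows uses -n)."""
    try:
        result = subprocess.run(["ping", "-c", str(count), host],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# Steps 0 to 3, from loopback outwards. 192.168.1.1 is a placeholder gateway.
CHECKS = [
    ("127.0.0.1", "local TCP/IP stack or network card problem"),
    ("192.168.1.1", "check the wired/wireless link, or the gateway is down"),
    ("8.8.8.8", "the gateway cannot reach (or does not allow) the Internet"),
    ("google.com", "DNS problem: try switching to 8.8.8.8 or 8.8.4.4"),
]

def diagnose(reachable):
    """Map per-host ping results to the first failing step's hint."""
    for host, hint in CHECKS:
        if not reachable.get(host, False):
            return f"{host} unreachable: {hint}"
    return "connectivity looks fine"

# Usage: print(diagnose({host: ping(host) for host, _ in CHECKS}))
```

The checks run in the same order as the manual steps, so the first failure points at the same layer the corresponding step would.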

My Experience of Working at a Startup Company

It has been a week since I left my last job, at a startup company working on video broadcast/streaming product solutions.

I joined the company as the first full-time employee on Oct 18, 2010 and left on Sep 16, 2011. The 11-month journey was full of experiences of all sorts: excitement, depression, joy, sadness, working really hard, being tired of working, etc.

Good Stuff about Working at Startup

  1. Working on new stuff, which is normally not the case at a big or medium-sized company. In a startup, we’re developing something new. We encountered all sorts of difficulties and solved them one by one. It’s fun and a great learning experience.
    • e.g.: I worked on dialing out multiple mobile 3G modems using AT commands. Making the dialing process fast, reliable and automatic is something you can’t count on publicly available software for.
  2. Working on Linux. I was mainly a Windows guy with some basic Linux knowledge. The startup is developing a product based on the Linux platform, so I picked up a lot about Linux. I’ve got to say, once you’re forced to use Linux and figure it out, you’ll feel great. You learn much more than working under Windows through an IDE.
    • e.g.: vim, gnuplot, vlc, ffmpeg, ssh, netfilter, gcc/g++, gdb, Qt, Poco, Linux kernel programming… I learned to use these things under Linux, and saw how powerful and amazing they are once you pick them up.
  3. Learned new programming languages. I learned Python and Perl. Well, I’m not an expert in these two languages, but I did program in them in some projects.
    • e.g.: I developed the modem dialing program in Python, and an automated testing system in Perl.
  4. Worked with People with Dreams. People at startup companies work on their dreams instead of sleeping on them. Most of the time I enjoyed working with people of this kind.
  5. Worked Closely with a Senior Engineer. For me, it was lucky that the company had a senior software developer as CTO. He is experienced and willing to discuss programming and IT in general with me. The way he handles certain technical issues was a good lesson for me.
  6. Worked on Web Dev. I had learned some basics of web development before and taken some courses on it, but this work allowed me to take one step further.
    • e.g. I developed the web-based UI for the product, and modified the company website.
  7. Worked on Setting up EC2 Stuff. Cloud computing empowers developers to deploy their work more easily than before. It’s great to know something about it, and better still, I got to work with it.
  8. Experienced the Entire Product Development Phase. If you go to a big/medium-sized company, you’re probably working on improvements to existing products, or building something on top of existing products. But at the startup, I worked through the entire product build process.
  9. Experienced a Little Bit of the Business Side. Developers are just developers at big companies. At the startup, I did programming, testing, customer demonstrations, internal training, tech support, etc. I interacted with potential customers a little bit, and watched how the founder handled customers. Well, who knows whether I’ll need some of these skills one day.

Downside of Working at Startup

  1. No Time for Myself. For me, this was the biggest issue. I’m taking a part-time master’s at NUS; I’m developing several Android apps (which almost stopped completely while working at the startup); and I also have my personal life…
    • 5 days of tiring work; 2 days of course work study + master thesis project; I knew I couldn’t hold it for long.
  2. Hard to Keep Interest if You don’t Share the Same Vision as the Founder. OK, this one is about me. I’m more interested in developing apps for everyone’s use, not for businesses, which is what my previous company is doing.
  3. Tired of Working on All Sorts of Stuff, Much of Which was Boring. I worked on the website, the web-based UI, modem dialing, simulation, kernel module development for packet filtering, Amazon and video streaming server setup, etc. Well, it’s good to work on lots of different stuff, but I couldn’t find focus and depth there.
    • The startup doesn’t have many projects that require the focus and depth I was looking for. And the senior engineer was a better candidate than me to work on those.
  4. Flexible Time Could Mean Long Hours. Sometimes we came in on weekends. During my school holiday, I worked till 10 pm or later almost every day for about two months.

After 10 months, I realized that I was not passionate about the work any more, and I quit.

Dynamic Adaptive Streaming over HTTP

This is the third method of HTTP video delivery. Unlike the first two methods, HTTP Progressive Download and Play and HTTP Pseudo-streaming, this is a real streaming technology, and it’s applicable to both live video and video on demand (VOD).

The basic idea of Dynamic Adaptive Streaming over HTTP (DASH) is to divide the video clip/stream into small chunks (called streamlets). These small chunks are downloaded to the browser’s cache and combined by the client (browser or browser plug-in) for play out.

This technology is new and has been implemented by several companies in different ways under different names.

  • HTTP Live Streaming (Apple)
  • Smooth Streaming (Microsoft)
  • HTTP Dynamic Streaming (Adobe)
  • Adaptive Bitrate (Octoshape)

All methods are based on the basic idea mentioned above, but unfortunately they’re not compatible with each other. HTTP Dynamic Streaming is supported on the Adobe Flash Platform with Adobe Flash Media Server (server side) and Adobe Flash Player 10.1 and above (client side). Smooth Streaming is supported by Microsoft IIS Server (server side) and Silverlight (client side). HTTP Live Streaming is supported by a list of servers (e.g. Wowza Media Server) and clients (e.g. QuickTime Player), which can be found here.

The reason there are more servers and clients available for Apple’s HTTP Live Streaming is that Apple’s iPhone and iPad devices support HTTP Live Streaming play out, and Apple has made its HTTP Live Streaming specification available as an RFC here.

Here we illustrate the basic idea using HTTP Dynamic Streaming from Adobe as an example. One can find demo videos here, or here.

Once I started to view the video, I checked my browser cache (I’m using Chrome on Ubuntu) at ~/.cache/google-chrome/Default/Cache/ and saw a list of files (using the ls -l command). While the video was playing, I kept refreshing the list (by typing ls -l repeatedly) and found a new file created roughly every 10 seconds.

Note that if you’re on Windows 7, the browser cache should be at

C:\Users\roman10\AppData\Local\Google\Chrome\User Data\Default\Cache

where roman10 is my username.

Then I paused the video, and no new files were created any more. When I resumed the playback, new files started to appear again. These files are actually small video chunks with metadata, and the Flash player combines them dynamically into a video stream as the video plays out. I tried to play a single chunk using the VLC player; it didn’t work.
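Refreshing ls -l by hand can be automated with a small polling sketch (the cache path in the usage line is the Chrome cache directory mentioned above; the interval and round count are arbitrary):

```python
import os
import time

def new_entries(before, after):
    """File names present in `after` but not in `before`."""
    return sorted(set(after) - set(before))

def watch_chunks(cache_dir, interval=1.0, rounds=30):
    """Poll `cache_dir`, printing files as they appear -- the scripted
    equivalent of typing `ls -l` repeatedly while the video plays."""
    seen = os.listdir(cache_dir)
    for _ in range(rounds):
        time.sleep(interval)
        now = os.listdir(cache_dir)
        for name in new_entries(seen, now):
            print("new chunk file:", name)
        seen = now

# Usage: watch_chunks(os.path.expanduser("~/.cache/google-chrome/Default/Cache/"))
```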

For advantages and disadvantages of DASH in comparison with the other two HTTP methods, please refer to HTTP Video Delivery.

Update: Scheduling Logic is in Client (Player)

One important fact about DASH is that the scheduling logic is implemented at the client/player side, as HTTP is stateless and the server doesn’t keep track of a video session. Therefore, the player needs to measure the network condition and dynamically switch between video chunks of different sizes and quality.
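A minimal sketch of such player-side logic is shown below. The bitrate ladder, safety margin and chunk accounting are all assumptions for illustration, not taken from any particular player:

```python
# Hypothetical renditions the server offers, in bits/sec (an assumption).
LADDER = [400_000, 800_000, 1_500_000, 3_000_000]

def pick_bitrate(measured_bps, ladder=LADDER, safety=0.8):
    """Choose the highest rendition within safety * measured throughput."""
    budget = measured_bps * safety
    fitting = [rate for rate in ladder if rate <= budget]
    return fitting[-1] if fitting else ladder[0]   # fall back to lowest quality

def next_chunk_bitrate(last_chunk_bytes, download_secs):
    """Estimate throughput from the last chunk download and pick the next bitrate."""
    throughput_bps = last_chunk_bytes * 8 / download_secs
    return pick_bitrate(throughput_bps)
```

For example, a 1.25 MB chunk downloaded in 5 seconds implies roughly 2 Mbits/sec of throughput, so this sketch would fetch the next chunk at the 1.5 Mbits/sec rendition rather than risk the 3 Mbits/sec one.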

Network Throughput Measurement using IPerf/JPerf

IPerf is a tool that allows one to measure the throughput between two hosts in the network. It works for both UDP and TCP traffic.

To measure the throughput, one runs IPerf in server mode on one host and IPerf in client mode on the other.

UDP Throughput Measurement

At server side, enter the command below,

iperf -s -u -p 12345 -i 1

For each of the parameters,

-s: run iperf as server.

-u: measure UDP instead of TCP. Note that iperf uses TCP by default.

-p: the port number the iperf server listens on, in our case, 12345.

-i: report interval. If not set, the report is only printed at the end of the test.

At client side, the command can be,

iperf -c <server ip> -p 12345 -u -i 1 -b 1000000 -t 10

-c: run iperf as client, and connect to <server ip>.

-p: connect to server port number, in our case, 12345.

-u: measure UDP.

-i: report interval, in our case, report every 1 second.

-b: the amount of traffic to send in bits/sec; this parameter is only available for UDP measurement. In our case, it’s around 1 Mbps.

-t: the number of seconds the client sends data to the server.

Below is a sample capture. For server side,

$ iperf -s -u -p 12345 -i 1
Server listening on UDP port 12345
Receiving 1470 byte datagrams
UDP buffer size:   111 KByte (default)
[  3] local port 12345 connected with port 58414
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec    122 KBytes  1000 Kbits/sec  2.209 ms    0/   85 (0%)
[  3]  1.0- 2.0 sec    122 KBytes  1000 Kbits/sec  3.117 ms    0/   85 (0%)
[  3]  2.0- 3.0 sec    122 KBytes  1000 Kbits/sec  4.013 ms    0/   85 (0%)
[  3]  3.0- 4.0 sec    122 KBytes  1000 Kbits/sec  2.725 ms    0/   85 (0%)
[  3]  4.0- 5.0 sec    122 KBytes  1000 Kbits/sec  3.162 ms    0/   85 (0%)
[  3]  5.0- 6.0 sec    122 KBytes  1000 Kbits/sec  3.060 ms    0/   85 (0%)
[  3]  6.0- 7.0 sec    122 KBytes  1000 Kbits/sec  1.985 ms    0/   85 (0%)
[  3]  7.0- 8.0 sec    123 KBytes  1.01 Mbits/sec  1.739 ms    0/   86 (0%)
[  3]  8.0- 9.0 sec    122 KBytes  1000 Kbits/sec  1.337 ms    0/   85 (0%)
[  3]  9.0-10.0 sec    122 KBytes  1000 Kbits/sec  2.694 ms    0/   85 (0%)
[  3]  0.0-10.0 sec  1.19 MBytes  1.00 Mbits/sec  2.828 ms    0/  852 (0%)

For client side,

$ iperf -c -p 12345 -u -i 1 -b 1000000 -t 10
Client connecting to, UDP port 12345
Sending 1470 byte datagrams
UDP buffer size:   112 KByte (default)
[  3] local port 58414 connected with port 12345
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec    123 KBytes  1.01 Mbits/sec
[  3]  1.0- 2.0 sec    122 KBytes  1000 Kbits/sec
[  3]  2.0- 3.0 sec    122 KBytes  1000 Kbits/sec
[  3]  3.0- 4.0 sec    122 KBytes  1000 Kbits/sec
[  3]  4.0- 5.0 sec    122 KBytes  1000 Kbits/sec
[  3]  5.0- 6.0 sec    122 KBytes  1000 Kbits/sec
[  3]  6.0- 7.0 sec    122 KBytes  1000 Kbits/sec
[  3]  7.0- 8.0 sec    122 KBytes  1000 Kbits/sec
[  3]  8.0- 9.0 sec    122 KBytes  1000 Kbits/sec
[  3]  9.0-10.0 sec    122 KBytes  1000 Kbits/sec
[  3]  0.0-10.0 sec  1.19 MBytes  1000 Kbits/sec
[  3] Sent 852 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.19 MBytes  1.00 Mbits/sec  2.828 ms    0/  852 (0%)

The total amount of data transmitted is 1.19 MBytes, the throughput over the 10 seconds is 1.00 Mbits/sec, the jitter (which indicates the delay variance) is 2.828 ms, and the data loss rate is 0%.

For this sample test, we can probably send more traffic to achieve higher throughput. To estimate the maximum throughput, we can increase the amount of data sent until the packet loss reaches a level that cannot be ignored.
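The server report can be sanity-checked from the datagram count: 852 datagrams of 1470 bytes in 10 seconds is almost exactly 1 Mbit/sec:

```python
def udp_throughput_bps(datagrams, datagram_bytes, seconds):
    """Throughput in bits/sec from iperf's datagram accounting."""
    return datagrams * datagram_bytes * 8 / seconds

# Numbers taken from the sample run above.
bps = udp_throughput_bps(852, 1470, 10)
print(round(bps / 1_000_000, 2), "Mbits/sec")              # -> 1.0 Mbits/sec
print(round(852 * 1470 / 2**20, 2), "MBytes transferred")  # -> 1.19 MBytes transferred
```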

TCP Throughput Measurement

At server side, enter the command below,

iperf -s -p 12345 -i 1

The parameters have the same meaning as in the UDP measurement. Here we don’t specify -u.

At client side, use the command,

iperf -c <server ip> -p 12345 -i 1 -t 10

Again, the parameters have the same meaning as in the UDP measurement. But we don’t specify -b (the amount of data to send per second), as iperf will try to push as much traffic as it can.

For a sample test, at the server side,

$ iperf -s -p 12345 -i 1
Server listening on TCP port 12345
TCP window size: 85.3 KByte (default)
[  4] local port 12345 connected with port 51560
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  1.0- 2.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  2.0- 3.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  3.0- 4.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  4.0- 5.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  5.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  7.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  8.0- 9.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  9.0-10.0 sec  0.00 Bytes  0.00 bits/sec
[  4]  0.0-10.1 sec  64.2 MBytes  53.2 Mbits/sec

The client side,

$ iperf -c -p 12345 -i 1 -t 10
Client connecting to, TCP port 12345
TCP window size: 16.0 KByte (default)
[  3] local port 51560 connected with port 12345
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  4.79 MBytes  40.2 Mbits/sec
[  3]  1.0- 2.0 sec  6.96 MBytes  58.4 Mbits/sec
[  3]  2.0- 3.0 sec  8.08 MBytes  67.8 Mbits/sec
[  3]  3.0- 4.0 sec  6.59 MBytes  55.2 Mbits/sec
[  3]  4.0- 5.0 sec  6.77 MBytes  56.8 Mbits/sec
[  3]  5.0- 6.0 sec  5.91 MBytes  49.5 Mbits/sec
[  3]  6.0- 7.0 sec  6.89 MBytes  57.8 Mbits/sec
[  3]  7.0- 8.0 sec  6.66 MBytes  55.8 Mbits/sec
[  3]  8.0- 9.0 sec  5.71 MBytes  47.9 Mbits/sec
[  3]  9.0-10.0 sec  5.84 MBytes  49.0 Mbits/sec
[  3]  0.0-10.1 sec  64.2 MBytes  53.6 Mbits/sec

In this test, we have a throughput around 53 Mbits/sec.

Note that iperf actually misuses the term bandwidth. Bandwidth is the maximum amount of traffic that can be transmitted through a network in a specific period of time, and it’s a theoretical value. Throughput is the actual amount of data transmitted through the network during a specific period.

Lastly, if you’re a GUI fan, you can download JPerf. It’s a Java UI for IPerf.

HTTP Video Delivery — HTTP Pseudo Streaming

HTTP Pseudo Streaming is the second method of HTTP Video Delivery. Like the first method, it is based on HTTP progressive download, and it makes use of the partial download functionality of HTTP to allow the user to seek to a part of the video that has not been downloaded yet.

HTTP Pseudo Streaming requires support from both the client side and the server side. On the server side, plug-ins are available for Apache, lighttpd, etc. On the client side, custom players are required to resynchronize the video, read metadata, etc. Two examples of players that support HTTP Pseudo Streaming are JWPlayer and FlowPlayer.

For an example of HTTP Pseudo Streaming, open up any Youtube video. Youtube actually uses lighttpd on the server side and its own customized player based on Flash technology on the client side. (The Flash player and a Flash-based media player are different. Please refer here.)

Below is a screenshot of the Wireshark network traffic capture when I was watching a Youtube video,


Figure 1. Wireshark Capture of HTTP Request for HTTP Pseudo Streaming

As there are lots of segmented packets, I selected a video packet, right-clicked it and selected Follow TCP Stream to get the screen above. It’s a single HTTP request followed by a response from the Youtube HTTP server.

I tried to seek to a part that had not been downloaded yet, then did “Follow TCP Stream” again. I found the HTTP headers contain the following strings for the initial request and the seek,

burst=40&sver=3&signature=B3B26708552F1C9FE687AAB13EFE6D73F294624F.0F9EEB822A5CF4AE5443CC5798B2F415C16B75E4&redirect_counter=1 HTTP/1.1


The seek HTTP request contains the string “begin=1032070”, which the HTTP server should use to jump to the corresponding portion of the video.
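A client-side seek can be reproduced by rewriting the request URL. The sketch below sets a `begin=` query parameter as seen in the capture; the host and the other parameters in the example are made-up placeholders, and other servers use different parameter names (e.g. `start=`):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def seek_url(video_url, begin_ms):
    """Return `video_url` with its `begin=` query parameter set to `begin_ms`."""
    parts = urlsplit(video_url)
    query = dict(parse_qsl(parts.query))
    query["begin"] = str(begin_ms)
    return urlunsplit(parts._replace(query=urlencode(query)))

# Hypothetical example URL; only the begin= parameter mirrors the capture.
print(seek_url("http://example.com/videoplayback?itag=34", 1032070))
# -> http://example.com/videoplayback?itag=34&begin=1032070
```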

As with the first method, HTTP Pseudo Streaming downloads the video clip to the browser cache. For Google Chrome, one can find the video clip at,

Ubuntu Linux: /home/roman10/.cache/google-chrome/Default/Cache/

Windows: C:\Users\roman10\AppData\Local\Google\Chrome\User Data\Default\Cache

Where roman10 is my username for both OS.

For the benefits and drawbacks of this method, and its comparison with the other HTTP methods, please refer to HTTP Video Delivery. Note that as HTTP pseudo-streaming is not real streaming, it doesn’t support live video streaming.

HTTP Video Delivery–HTTP Progressive Download and Play

Progressive Download and Play is the first of the three methods of HTTP Video Delivery. The basic idea is to embed the video through a media player (e.g. JWPlayer, FlowPlayer, etc.). When the user requests to play the video, an HTTP GET request is sent to the web server, and the video is downloaded through HTTP for play out.

This method is supported by web servers by default, as the server treats a video as a normal file, like an image or a CSS file. The play out, buffering and other video play specifics are handled by the media player, which usually consists of some JavaScript and Flash objects. More info about these players can be found here.
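Since the server side is just static file serving, the client side can be sketched in a few lines. The URL and output path in the usage line are placeholders; the point is only that bytes are written (and could be handed to a player) as they arrive:

```python
import urllib.request

def progressive_download(url, path, chunk_size=64 * 1024):
    """Fetch a video with a plain HTTP GET, writing bytes as they arrive.

    A player can start decoding the partially written file while this loop
    is still running -- that is all progressive download and play requires
    of the server."""
    with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)   # playback can begin before the download completes

# Usage: progressive_download("http://example.com/video.flv", "video.flv")
```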

Below is an example of an HTTP Progressive Download and Play video, delivered using JWPlayer as the client.


Once you click the button to start play, the video download is triggered. Below is a screenshot of the network traffic captured using Wireshark.


Figure 1. Wireshark Capture Screenshot

The video is actually downloaded to the browser cache. As I used Google Chrome on Ubuntu Linux to watch the video, the file was saved to the /home/roman10/.cache/google-chrome/Default/Cache/ folder. Below is a screenshot of the files under that directory,


Figure 2. List of Files under Google Chrome Cache

I can locate the video clip either by time or by file size, and play the video using VLC as shown below,


Figure 3. VLC Play Out of the Cached Video

If you’re using Windows, the video file can be found under,

C:\Users\roman10\AppData\Local\Google\Chrome\User Data\Default\Cache, where roman10 is the username.

For the benefits and drawbacks of this method, and its comparison with other HTTP video delivery method, you can refer here.

You may also want to check out,

HTTP Pseudo Streaming

Dynamic Adaptive Streaming over HTTP

UDP vs TCP–In the Context of Video Streaming

1. TCP Congestion Control and Window Size

TCP maintains a congestion window, which indicates the number of packets TCP can send before an acknowledgement for the first packet sent is received.

The congestion window size defaults to 2 times the maximum segment size (MSS, the largest amount of data in bytes that can be sent in a single TCP segment, excluding the TCP header and IP header). TCP follows a slow start process, increasing the congestion window by 1 MSS every time a packet acknowledgement is received, until the congestion window size exceeds a threshold called ssthresh (this effectively doubles the congestion window every RTT). Then TCP enters congestion avoidance.

In congestion avoidance, as long as non-duplicate ACKs are received, the congestion window is increased by 1 MSS every RTT. When duplicate ACKs are received, the chance that a packet was lost or delayed is very high. Depending on the implementation, the actions taken by TCP vary:

  • Tahoe: Triple duplicate ACKs are treated the same as a timeout: fast retransmission is performed, the congestion window is reduced to 1 MSS, and slow start begins again.
  • Reno: If triple duplicate ACKs are received, TCP halves the congestion window, performs a fast retransmit, and enters Fast Recovery. If an ACK times out, it also restarts from slow start.

In Fast Recovery, TCP retransmits the missing packets and waits for an acknowledgement of the entire transmit window before returning to congestion avoidance. If there’s no ACK, it’s treated as a timeout.

As the window size controls the number of unacknowledged packets that TCP can send, a single packet loss can reduce the window size, and hence the throughput, significantly. Therefore, TCP makes the throughput unpredictable for video streaming applications.
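A simplified trace of the Reno behaviour described above makes the effect concrete (the window is counted in MSS units; fast recovery details and timeouts are omitted for brevity):

```python
def reno_cwnd_trace(events, ssthresh=8):
    """Congestion window per RTT under a simplified TCP Reno.

    `events` holds one entry per RTT: 'ack' for a cleanly acknowledged round,
    'loss' for a round ending in triple duplicate ACKs."""
    cwnd = 2                                 # initial window: 2 * MSS
    trace = [cwnd]
    for event in events:
        if event == "loss":                  # triple dup ACKs: halve the window
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:                # slow start: double every RTT
            cwnd = min(2 * cwnd, ssthresh)
        else:                                # congestion avoidance: +1 MSS per RTT
            cwnd += 1
        trace.append(cwnd)
    return trace

print(reno_cwnd_trace(["ack"] * 4 + ["loss"] + ["ack"] * 2))
# -> [2, 4, 8, 9, 10, 5, 6, 7]: a single loss in round 5 halves the window
```

Note how the additive increase takes several RTTs to win back what one loss event took away, which is exactly why the achievable throughput fluctuates.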

2. TCP Reliability and Retransmission

TCP is a reliable service. When a packet is lost, TCP retransmits it. For live streaming applications, the retransmitted packets may already be too late for playback, and they take up bandwidth unnecessarily.

3. Multicast + TCP has Problems

Multicast can be effective in streaming applications (at least in theory). But TCP is connection oriented: it requires the two parties to establish a connection. Multicast at the TCP layer is tough, as it requires every end player to send ACKs to the server that is streaming the content. It is hard to scale.

4. TCP and UDP Packet Header Size

TCP has a typical header size of 20 bytes, while UDP has a header of 8 bytes. UDP therefore occupies less bandwidth.

JWPlayer and FlowPlayer vs. Adobe Flash Player

JWPlayer and FlowPlayer are video players that support Flash video. What is confusing is that they’re sometimes referred to as Flash Video Players. (e.g. FlowPlayer describes itself as a Flash Video Player for the Web.) Why would I need another Flash video player if I already have Adobe Flash Player installed?

Adobe Flash Player is actually a virtual machine that runs Flash files, which end with the extension .swf. SWF files can contain animations, video/audio and web UI. Both JWPlayer and FlowPlayer consist of SWF files, which are downloaded to the browser’s cache and played by Adobe Flash Player. In other words, JWPlayer and FlowPlayer are “played” by Adobe Flash Player. It’s as if Adobe Flash Player were the JVM (Java Virtual Machine), and JWPlayer and FlowPlayer two Java programs.

JWPlayer actually supports more than just Flash; it also supports HTML5 video for iPhone and iPad devices. In essence, JWPlayer and FlowPlayer are just collections of JavaScript and SWF files that allow a website publisher to embed video into a web page, customize the look of the video display area, control the behavior of the video play out, etc. Flash video is one (and probably the most important one) of the technologies/platforms they support.

Approach Matters–How a Different Method Solved a Two-day Project in 30 Mins

It was Friday afternoon. I was still busy debugging; the bug had been bothering me for 2 days. Then a colleague proposed another method, which saved my day and made me laugh at myself.

Let me start from the beginning. Our software has a web-based UI which is only available on the client side, which doesn’t have a public IP. In order to make the web UI accessible from the server side, we developed a tunnel that forwards HTTP requests from server to client and responses from client to server.

This approach works. The issue is that the UI at the server side is sometimes slow when the client is connected to the Internet through a slow link. We did some simple tests and found out it was mainly due to the transfer of several JavaScript files.

Then came the natural approach: put the JavaScript files at the server side and return them from the server. I made the changes and started testing. However, the tunnel stopped functioning because of some changes I made, and there the debugging began.

Then, on Friday afternoon, my colleague came to me and suggested another approach: host the JavaScript files on our website.

OK, this is simple, but brilliant. We want to get the JavaScript from somewhere other than the client. It doesn’t necessarily have to be our server. Instead of developing our own embedded web server and making sure it works with our tunnel, I can simply change the source to fetch the JavaScript files from another publicly available web server.

Had I spent more time thinking about approaches and discussing them with my colleagues, I would not have spent two days developing the embedded server approach and debugging it.

Video Delivery in HTTP

As the Internet becomes faster and faster, video has become important content on many websites. There are many methods to publish video on the Internet. This post focuses on technologies around HTTP.

There are 3 ways to deliver video content over HTTP: Progressive Download and Play, HTTP Pseudo-streaming, and Dynamic Adaptive Streaming over HTTP.

0. Why HTTP?

When people choose their video delivery method, HTTP offers a few advantages,

  • HTTP doesn’t require a dedicated media server, as RTSP/RTP/RTCP and RTMP/RTMPT/RTMPS streaming technologies do. Web servers are enough (some of the HTTP methods require plug-ins or server-side scripting).
  • HTTP network packets can reach almost any device connected to the Internet, while many other protocols (e.g. RTMP on port 1935) might be blocked by firewalls.
  • HTTP is supported by existing server and caching infrastructure.
  • HTTP is stateless; it doesn’t require a persistent one-to-one connection between client and server while video is delivered and played. So the number of clients a server can serve is higher than with connection-oriented methods. In other words, HTTP scales better.

1. Progressive Download and Play

This is the most basic method, supported by almost all web servers (Apache, lighttpd, IIS, etc.). The basic principle is to download the entire video file and play it. If the video file is long, the client can download and play the downloaded part at the same time.

The main advantages of this method are,

  • Easy to configure (from publisher’s POV)
  • Video play can be stopped and buffered (from user’s POV)

The main disadvantages of this method are,

  • The entire video is downloaded to the client unless the user closes the browser page, which sometimes wastes a lot of bandwidth. Suppose a user watches only the first 4 minutes of a one-hour video: the entire video is still downloaded if the network is fast enough to download it in 5 minutes, or if the user keeps the page open after watching. (From the publisher’s POV, and the user’s POV when the user pays by usage)
  • Entire video is downloaded to client, the content is not protected. (From publisher’s POV)
  • The video is only seekable within the downloaded part. In other words, if a user wants to watch minutes 40~45 of a one-hour video, he/she needs to wait for the first 40 minutes of the video to download. (From the user’s POV)

If you watch an unseekable video on a website, and the video can be found in your browser’s cache, most likely the website is using HTTP Progressive Download and Play.

2. HTTP Pseudostreaming (Pseudo-streaming)

HTTP Pseudo-streaming is, as its name suggests, not real streaming. It is based on the progressive download method, but it mimics Video on Demand (VOD) streaming.

It’s not supported on web servers by default, but extensions and plug-ins can be added to Apache, Tomcat, IIS, or lighttpd to support pseudo-streaming.

Besides the advantages mentioned for the HTTP Progressive Download and Play method, HTTP pseudo-streaming makes the video seekable. Also, as the video is seekable, the skipped part is not downloaded, which reduces the bandwidth waste in some cases.

If you watch a seekable video delivered through HTTP on a website, and the video can be found in your browser’s cache, then the website is using HTTP Pseudo-streaming to deliver the content to you. As an example, Youtube actually uses this technique with lighttpd web servers.

3. Dynamic Adaptive Streaming over HTTP (DASH)

This is the latest technology. The basic idea is to divide a video into many small parts and deliver them over HTTP. Those small parts are then combined at the client side and played out. This method supports both video on demand and live streaming.

There are several different implementations of this method with different names, all based on the basic idea mentioned above,

  • HTTP Live Streaming (Apple)
  • Smooth Streaming (Microsoft)
  • HTTP Dynamic Streaming (Adobe)
  • Adaptive Bitrate (Octoshape)

The advantages of DASH are,

  • Not dependent on specialized streaming servers or proprietary transfer protocols
  • Better protection for streaming content, as the video clip/stream is divided into small chunks

The disadvantages of DASH are,

  • Requires client support (e.g. Adobe Flash Player supports HTTP Dynamic Streaming only from version 10.1)
  • Different implementations are not completely compatible (standardization work is ongoing and compatibility is on the way…)
  • Live streaming has a longer delay compared with traditional live streaming technologies (e.g. RTP, RTMP)
  • Live streaming quality adaptation is not as fast as with traditional live streaming technologies, as the quality switch can usually only occur at streamlet boundaries. If the streamlet is 5 seconds, then the quality adaptation usually occurs every 5 seconds.

For example, some videos streamed to Safari on the iPhone actually use HTTP Live Streaming.