Named Pipe in Linux with a Python Example

A pipe is a simple FIFO communication channel used for one-way Inter-Process Communication (IPC). Typically a parent process creates a pipe, and one or more child processes are spawned to receive messages and form a pipeline. Since a pipe is a one-way IPC channel, two-way communication normally requires two pipes.

In Linux, pipes are often created with the pipe() system call, which creates a pair of file descriptors, one for reading (receiver) and one for writing (sender). In the Linux shell, the "|" character is used to create a pipe.
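For illustration, here is a minimal sketch of an anonymous pipe in Python (os.pipe() wraps the pipe() system call; the parent writes and a forked child reads; this is a toy example, separate from the named pipe scripts below):

#!/usr/bin/env python3
import os

# os.pipe() returns a (read_fd, write_fd) pair of file descriptors
r, w = os.pipe()

pid = os.fork()
if pid == 0:                      # child process: the reader
    os.close(w)                   # close the unused write end
    msg = os.read(r, 1024)
    print("child read: %s" % msg.decode())
    os.close(r)
    os._exit(0)
else:                             # parent process: the writer
    os.close(r)                   # close the unused read end
    os.write(w, b"hello through the pipe")
    os.close(w)
    os.waitpid(pid, 0)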

A named pipe is an extension of the traditional pipe. A pipe exists anonymously only while the process is running, whereas a named pipe is persistent and should be deleted when it's no longer needed.

A named pipe lives in the file system. It's created using mkfifo() or mknod(). Once created, two separate processes can access the pipe by name, one as a reader, the other as a writer.

A named pipe supports blocking read and write operations by default: if a process opens the file for reading, it is blocked until another process opens the file for writing, and vice versa. However, named pipes can support non-blocking operations if the O_NONBLOCK flag is specified.
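As an illustration, here is a minimal sketch of a non-blocking read in Python (assuming a FIFO ./p1 has already been created with os.mkfifo):

import os

path = "./p1"

# With O_NONBLOCK, opening for reading returns immediately even if
# no writer has the FIFO open (a plain open() would block here).
fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
try:
    data = os.read(fd, 1024)      # returns b'' if no writer has connected
    print("read: %r" % (data,))
except BlockingIOError:
    pass                          # a writer is connected but no data is ready
finally:
    os.close(fd)

# Note: opening for writing with O_NONBLOCK raises OSError (ENXIO)
# if no process currently has the FIFO open for reading.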

A named pipe should be opened either read-only or write-only, since it is a one-way channel (POSIX leaves opening a FIFO for read-write undefined). You'll need to create two named pipes to establish two-way communication.

Below is an example of using named pipes for synchronized communication in Python,

pipe_test_1.py


#!/usr/bin/env python3

# Communicate with another process through two named pipes:
# one for sending, the other for receiving.
wfPath = "./p1"
rfPath = "./p2"

# open() blocks until the other process opens the same FIFO
wp = open(wfPath, 'w')
wp.write("P1: How are you?")
wp.close()

rp = open(rfPath, 'r')
response = rp.read()
print("P1 heard %s" % response)
rp.close()

wp = open(wfPath, 'w')
wp.write("P1: I'm fine too.")
wp.close()


pipe_test_2.py


#!/usr/bin/env python3
import os

# Communicate with another process through two named pipes:
# one for receiving, the other for sending.
rfPath = "./p1"
wfPath = "./p2"

# create both FIFOs if they don't exist yet
try:
    os.mkfifo(wfPath)
    os.mkfifo(rfPath)
except OSError:
    pass

rp = open(rfPath, 'r')
response = rp.read()
print("P2 heard %s" % response)
rp.close()

wp = open(wfPath, 'w')
wp.write("P2: I'm fine, thank you! And you?")
wp.close()

rp = open(rfPath, 'r')
response = rp.read()
print("P2 heard %s" % response)
rp.close()


To run this example,

1. Open a terminal, start pipe_test_2.py by typing: ./pipe_test_2.py

2. Open another terminal, start pipe_test_1.py by typing: ./pipe_test_1.py

The output should be,

1. at terminal running pipe_test_2.py:

P2 heard P1: How are you?

P2 heard P1: I'm fine too.

2. at terminal running pipe_test_1.py:

P1 heard P2: I'm fine, thank you! And you?

As you can imagine, the two little programs can be turned into a simple chat program by adding a loop.
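For instance, a minimal sketch of such a loop (hypothetical; the other side would run a mirrored version with the paths swapped):

# chat loop sketch: alternate between sending and receiving
while True:
    msg = input("me> ")
    with open(wfPath, 'w') as wp:
        wp.write(msg)
    if msg == "bye":
        break
    with open(rfPath, 'r') as rp:
        print("peer> %s" % rp.read())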

Reference: http://developers.sun.com/solaris/articles/named_pipes.html

Side Note: first draft on Apr 21 2011.

Best Combination of Linux and Windows — The Seamless Mode of VM VirtualBox

Programmers and computer geeks alike love Linux. It's open source, free, customizable…. Well, we also have to face the truth that Windows still dominates the desktop computing world, and much software is only available on Windows.

So here comes the virtual machine, which allows us to run Linux on Windows, or Windows on Linux. Virtual machines used to be slow, but performance keeps getting better.

Sometimes it can be difficult to view everything on your screen when you have two operating systems running (one in a VM).

VirtualBox actually has a Seamless Mode, as shown below,

[Image: the Seamless Mode option in the VirtualBox menu]

Once I enter Seamless Mode, my screen looks like this,

[Image: Linux and Windows desktops blended together in Seamless Mode]

On the right I have access to the Windows toolbars; at the top I have the Linux menus. The background is the Windows desktop, just as it normally appears. It's as if Windows and Linux were combined into one system. Isn't this great?

Side note: First draft on Apr 16 2011

Video Boundary Detection Part 3–Fade In and Fade Out

This is a follow-up article to video boundary detection part 2, gradual transition detection.

There are many types of gradual transitions, for example fade in/fade out, dissolve, wipe, etc.

Part 2 covered the detection of gradual transitions. The twin-comparison method is effective in detecting gradual transitions, but it cannot determine which type of gradual transition it is.

This article introduces another method for video boundary detection, Standard Deviation of Pixel Intensities. This method is effective for detecting fade in/fade out transitions.

What is fade in/fade out transition?

A sample sequence of fade out transition frames is as below,

[Image sequence: 18 consecutive frames of a fade-out transition, gradually darkening to black]

This transition is produced by decreasing the pixel intensities over time until the screen goes completely black. A fade in is the opposite of a fade out.

How to Detect Fade in/Fade out?

The scaling of pixel intensities during fade in/out transitions is visible in the standard deviation of the pixel intensities. A plot of the standard deviation of pixel intensities for one of the test videos is shown below,

[Image: plot of the standard deviation of pixel intensities over the frame sequence]

The downward and upward ramps correspond to fade out and fade in respectively. The zero values correspond to frames that are completely black.

Therefore, the fade in/fade out detection problem reduces to detecting these downward and upward ramps.

Part 1 of this series covered the conversion of the RGB channels into an intensity component. The standard deviation is a common statistical computation and is not detailed here. (You can easily find plenty of material about it.)
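Still, for concreteness, here is a minimal NumPy sketch of the measurement (the function names are illustrative, and frames are assumed to be available as RGB arrays, e.g. loaded from extracted frame images):

import numpy as np

def intensity(frame_rgb):
    # NTSC intensity component, as covered in part 1
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def fade_candidates(frames, eps=1.0):
    # standard deviation of pixel intensities, one value per frame
    stds = np.array([intensity(f.astype(np.float64)).std() for f in frames])
    # frames whose standard deviation is (near) zero are completely
    # black; fades show up as ramps down to / up from these frames
    return stds, np.where(stds < eps)[0]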

Using It with the Twin-Comparison Method

The standard deviation of intensities can be used together with the twin-comparison method to better detect the fade in/fade out transition.

The idea is to use the twin-comparison method to detect gradual transitions, and then use the standard deviation of pixel intensities to determine whether a detected gradual transition is a fade in/fade out.

A sample plot of the intensity histogram difference (left), and of the accumulated intensity histogram difference overlaid with the scaled standard deviation of pixel intensities (right), is shown below,

[Images: left, intensity histogram difference; right, accumulated intensity histogram difference overlaid with the scaled standard deviation of pixel intensities]

The right graph shows that the twin-comparison method detects the transition effectively, and that the standard deviation of pixel intensities can then be applied to determine that it is a fade in/fade out transition.

Side note: First Draft on Apr 15 2011.

Video Boundary Detection–Part 2 Gradual Transition and Its Matlab Implementation

Side Note: First draft on Apr 14 2011.

This article is a follow-up to the video boundary detection article on abrupt transitions. The intensity histogram measure used for abrupt transition detection is reused here for gradual transition detection.

Gradual transitions are more difficult to detect than abrupt transitions. They take many forms: wipe, dissolve, fade in/fade out, to name a few.

The frame-to-frame difference within a gradual transition is not as significant as in an abrupt transition. However, the difference between the first transition frame and the subsequent frames tends to increase, and this difference (called the accumulated difference) eventually becomes as large as the difference seen in an abrupt transition.

There is a popular method called the twin-comparison method for gradual transition detection.

In this method, a lower threshold Ts is set to detect candidate frames that start a transition, and the threshold used for abrupt detection, Tb, is compared against the accumulated difference to test whether a transition really exists. The end frame of the transition is detected when the consecutive difference falls below Ts and the accumulated difference has gone beyond Tb.

An illustration of the twin-comparison is presented as the figure below,

[Image: illustration of the twin-comparison method, with Ts applied to the frame-to-frame difference and Tb to the accumulated difference]

The upper half of the figure shows how the lower threshold detects the potential start frame of the transition based on the intensity histogram difference. The lower half shows the accumulated difference going beyond the higher threshold Tb, at which point a gradual transition is detected.
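The logic translates into a few lines of code. Here is a minimal Python sketch over a precomputed sequence of histogram differences (illustrative only; it uses the strict "consecutive differences above Ts" rule, without the tolerance discussed below):

def twin_comparison(diffs, ts, tb):
    # diffs[i] is the intensity histogram difference between frames
    # i and i+1; ts and tb are the lower and higher thresholds
    transitions = []
    i = 0
    while i < len(diffs):
        if diffs[i] >= ts:                  # candidate start frame
            start, acc, j = i, 0.0, i
            while j < len(diffs) and diffs[j] >= ts:
                acc += diffs[j]             # accumulated difference
                j += 1
            # the transition is confirmed if the accumulated
            # difference goes beyond the higher threshold tb
            if acc >= tb:
                transitions.append((start, j))
            i = j
        i += 1
    return transitions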

Matlab Implementation

The Matlab implementation of this method can be downloaded here. In some of the test videos, neighboring frames don't always have a difference bigger than Ts, so the implementation considers the trend to be continuing as long as at least 1 out of every 3 frames has an intensity histogram difference bigger than Ts.

Video Boundary Detection–Part 1 Abrupt Transition and Its Matlab Implementation

Side Note: First Draft on Apr 13 2011.

Video boundary detection is a fundamental technique for video retrieval. There are two different types of transitions: abrupt transitions (also called cuts) and gradual transitions, which include fade in/fade out, dissolve, wipe, etc.

This article covers abrupt video transition detection and provides a simple implementation in Matlab.

Abrupt transitions are relatively easy to detect, as there is normally a big difference between the two transition frames. The problem is equivalent to detecting this big difference.

This difference can be measured on a pixel-by-pixel basis, in a block-based manner, or based on some global characteristic of the frames, for example a color histogram or an intensity histogram.

One effective measure is the intensity histogram. According to the NTSC standard, the intensity of an RGB frame can be calculated as,

I = 0.299R + 0.587G + 0.114B

where R, G and B are the Red, Green and Blue channels of the pixel.
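In code, this is just a weighted sum over the color channels. A minimal NumPy sketch (assuming the frame is an H x W x 3 RGB array):

import numpy as np

def intensity(frame_rgb):
    # I = 0.299 R + 0.587 G + 0.114 B (NTSC)
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b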

The intensity histogram difference between consecutive frames can then be expressed as,

D(i) = Σ (j = 1 to G) |Hi(j) − Hi+1(j)|

where Hi(j) is the histogram value of the ith frame at level j, and G denotes the total number of levels in the histogram.

In a continuous video frame sequence, the histogram difference between neighboring frames is small, whereas at an abrupt transition the intensity histogram difference spikes. Even when there is notable movement or an illumination change between neighboring frames, the intensity histogram difference remains relatively small compared with the peaks caused by abrupt changes. Therefore, the intensity histogram difference with a proper threshold is effective in detecting abrupt transitions.

The threshold for deciding whether an intensity histogram difference indicates an abrupt transition can be set to,

Tb = mu + alpha * sigma

where mu and sigma are the mean and standard deviation of the intensity histogram differences. The value of alpha typically varies from 3 to 6.
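Putting the pieces together, here is a minimal NumPy sketch of the whole detection (reusing intensity() from above; the histogram level count and alpha are tunable assumptions):

import numpy as np

def histogram_diffs(frames, levels=256):
    # intensity histogram of every frame, then the absolute
    # bin-wise difference between consecutive frames
    hists = [np.histogram(intensity(f), bins=levels, range=(0, 256))[0]
             for f in frames]
    return np.array([np.abs(h2 - h1).sum()
                     for h1, h2 in zip(hists, hists[1:])], dtype=np.float64)

def detect_cuts(frames, alpha=5.0):
    d = histogram_diffs(frames)
    tb = d.mean() + alpha * d.std()    # Tb = mu + alpha * sigma
    return np.where(d > tb)[0]         # frame indices of abrupt transitions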

Implementation in Matlab

The implementation of the above method in Matlab can be found here. Note that gradual transitions can sometimes produce spikes even higher than those of abrupt transitions. To differentiate abrupt transitions from gradual ones, the neighboring frames of a detected spike are also tested: if there are multiple spikes nearby, the transition is more likely a gradual transition, and the detection is simply dropped.

Test and Result

One of the video sequences I tested produces the following graph,

[Image: intensity histogram difference for a baseball video sequence, with the threshold drawn as a straight line]

The straight line is the threshold; it's clear that there are two abrupt transitions in this video sequence.

All Focused Image by Focal Stacking

Side Note: First Draft on Apr 12 2011. This post is based on Dr. Michael S. Brown’s assignment for a course I took and enjoyed. Some of the test pictures are from the assignment.

Digital cameras have a limited depth of field, so it is sometimes difficult to capture a single image with all the important elements of the scene in focus.

Focal stacking is a technique to "stack" multiple images taken with different focus settings into a single all-in-focus image.

Below is an example borrowed from Wikipedia,

[Image: focus stacking example of a Tachinid fly, from Wikipedia]

If you look at the images carefully, you'll find that the first image has one part of the fly in focus, the second image has another part in focus, and the third image is the combined all-in-focus result.

Focal Stacking Implementation in Matlab

Here I introduce a simple method of focal stacking, assuming that the input image sequence is aligned. The input image sequence can be found on another page here.

The idea is as follows (a code sketch is given after the steps),

1. A focused image has a lot of sharp edges. Edges can normally be found using derivative filters; a Laplacian filter is used here to retrieve these sharp edges. The Laplacian values can therefore indicate the sharpness level of each pixel.

2. We want the output image to be all-in-focus, but we also want it to look smooth and natural. In other words, we want a smooth transition when we take pixels from different images. If we take 10 neighboring pixels from 10 different images, the output is likely to be noisy, as below,

[Image: noisy composite produced by picking each pixel purely by its Laplacian value]

A Focused Image based purely on Laplacian

Therefore, we want to smooth the image without introducing noticeable blur. We can do this by smoothing the Laplacian computed for every image: since the Laplacian values are the choice indicator, smoothing them smooths our choice as well.

The smoothing can be done using an averaging filter, and the result I got is below,

[Image: composite produced using the smoothed Laplacian]

A Focused Image by Smoothed Laplacian

The image above is much less noisy than the purely Laplacian-based one. However, there is still obvious noise at the boundary between the foreground (with a lot of sharp edges) and the background (no sharp edges).

3. An error correction technique is used to improve the result, still based on the Laplacian. First, I obtain the sum of the Laplacians of all input images; this sum image is considered to contain all the correct edges of the inputs. We then start from the coordinate of the sharpest pixel in the sum image and pick pixels for the output image. For every pixel whose corresponding sum-image value is bigger than 0 (the pixel falls on an edge), we select the pixel from the input image with the highest Laplacian value at that position. For every pixel whose sum-image value is 0 (the pixel doesn't fall on an edge), we select the same input image as one of its neighbors so the final output is smooth; among the neighbors, the image with the highest Laplacian value is selected again. Below is the output,

[Image: composite produced using the Laplacian with reference to the sum of Laplacians]

A Focused Image by Laplacian with Reference to the Sum of Laplacians of the Individual Images
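For reference, steps 1 and 2 translate into a short sketch. Below is a hypothetical NumPy/OpenCV version (the step 3 error correction is omitted, and the smoothing kernel size is an assumption):

import cv2
import numpy as np

def focal_stack(images, smooth_ksize=31):
    # pick, per pixel, the input image with the largest
    # (smoothed) absolute Laplacian response
    sharpness = []
    for img in images:                 # aligned BGR images, same size
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
        # smoothing the Laplacian smooths the per-pixel choice,
        # which reduces noise in the composite (step 2)
        sharpness.append(cv2.blur(np.abs(lap),
                                  (smooth_ksize, smooth_ksize)))

    choice = np.argmax(np.stack(sharpness), axis=0)   # winning image index
    stack = np.stack(images)
    rows, cols = np.indices(choice.shape)
    return stack[choice, rows, cols]                  # the composite image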

Focal Stacking in Photoshop

Photoshop has built-in functionality for focal stacking; you can find the details here. Below is the result I got using Photoshop CS5. Honestly, it's slightly better than my Matlab result.

[Image: focal stacking result produced by Photoshop CS5]

You can download the Matlab code here.

A First Touch on AJAX

Side Note: First Draft on Apr 12 2011. I once had a chance to work part-time on AJAX web development during undergrad, but my friend and I quit halfway. Recently I had the chance to develop some simple AJAX pages, so this is kind of a first touch.

AJAX, which stands for Asynchronous JavaScript and XML, is a group of popular technologies used in client-side web development for interactive applications. The main idea of AJAX is to retrieve data from the server in the background and dynamically update the web page.

The advantage of this technique is that it allows the client web page to be updated partially without refreshing the entire page. Therefore, less data is exchanged between client and server, and the web application becomes more responsive.

One of the earliest examples of AJAX is Google's search keyword auto-completion. The web page constantly sends the letters you type into the search box back to Google's servers, the backend servers analyze them and return a list of possible keywords, and the client-side page parses this data and does the auto-completion for you.

The difference between a traditional web application and an AJAX application is as below,

[Image: the traditional web application model compared with the AJAX model]

Picture from interakt

The AJAX engine is nothing but some JavaScript downloaded from the web to the user’s browser.

In AJAX apps, data is usually retrieved using the XMLHttpRequest object, and JavaScript is written to parse the received data and dynamically change the website. XMLHttpRequest was first implemented as an ActiveX object in IE and later became a native JavaScript object in most browsers, including Firefox, Safari and Chrome.

A typical HTTP request using XMLHttpRequest looks like below,

http://127.0.0.1:8080/update.xml

GET /update.xml HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.16) Gecko/20110323 Ubuntu/9.10 (karmic) Firefox/3.6.16
Accept: application/xml, text/xml, */*; q=0.01
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
X-Requested-With: XMLHttpRequest
Referer: http://127.0.0.1:8080/update.html

Once the server returns the requested XML file, the JavaScript in the web page can parse it and update the page. The data can be in other formats as well, for example JSON.

The request and parsing can be simplified by using a JavaScript library, for example the popular jQuery library.

A typical JavaScript function using the jQuery library is shown below,

function receiveServerData() {
    $.ajax({
        type: "GET",
        url: "update.xml",
        dataType: "xml",
        success: function(xml) {
            $(xml).find('s').each(function() {
                // further parsing of the xml
            });
        }
    });
}

This simple script requests an XML file and then parses it if the request succeeds.

My ffmpeg Commands List

Side note: first draft on Apr 1 2011. It turns out ffmpeg can be really handy when you want to create image or video data that works for your implementation. This list is on-going…

ffmpeg is a powerful open source video and audio processing tool. It supports a lot of different video/audio formats and allows you to record, convert, stream and manipulate audio and video files.

This article collects my favourite ffmpeg commands,

1. Extract Frames from Video

1.1 Extract the first frame from video

ffmpeg -i input_video_file_name -vframes 1 -f image2 output_frame.bmp

ffmpeg also supports other image formats like .png or .jpg.

1.2 Extract all frames from video

ffmpeg -i input_video_file_name -f image2 output_file_name_format.bmp

Note that output_file_name_format.bmp can be, for example, %05d.bmp

2. Crop Video

ffmpeg -i input_video_file_name -vf crop=w:h:x:y output_file_name

Note that w and h are the width and height of the cropped area, and x:y is the position of its top-left corner in the input frame. -vf specifies a video filter; crop is one of ffmpeg's video filters.

3. Convert Image between Different Formats

ffmpeg -i input_image_file_name.ext1 output_image_file_name.ext2

4. Create Video from Sequences of Images

ffmpeg -r 20 -i input_image_file_name_format.ext video_file_name.ext2

As an example, ffmpeg -r 20 -i %05d.bmp test.avi

5. Convert Video to Different Container Formats

ffmpeg -i input_video_file_name.ext1 output_video_file_name.ext2

Example 1:  Convert Video from AVI to FLV

Sometimes ffmpeg cannot figure out everything by itself, and we need to supply more information. I encountered a case where I needed to specify the output video codec, audio codec and scale.

ffmpeg -i recording_3.avi -f flv -vcodec copy -acodec copy -s 1280x720 recording3.flv

Note that the command above only changes the container format; it keeps the same audio/video codecs (using the "copy" option) as the avi file for the output flv file. The command runs very fast as it leaves the codecs untouched.

6. Convert Video to Different Codec (Transcoding)

ffmpeg -i input_video_file_name.ext1 -vcodec xxx -r xx -s aaaaxbbbb -aspect xx:xx -b xxxxk output_file_name.ext2

As an example,

ffmpeg -i h1.mp4 -vcodec mpeg4 -r 30 -s 1280x720 -aspect 16:9 -b 10000k h1_1280_720.mp4

This command takes the input h1.mp4 (encoded with H.264 in my case), transcodes it to the MPEG-4 codec with a frame rate of 30 fps, a resolution of 1280x720, an aspect ratio of 16:9 and a bitrate of 10000 kbit/s, and writes the output to h1_1280_720.mp4.

How to Configure a Fixed Line Connection Manually in Ubuntu

Side Note: First draft on Apr 7 2011

Normally a fixed line interface has a name like eth0. Below are step-by-step instructions for configuring a fixed line connection without Network Manager.
1. Configure Your IP Address and Subnet Mask

sudo ifconfig eth0 xxx.xxx.xxx.xxx/xx 

Note that xxx.xxx.xxx.xxx is the IP address and xx is the number of 1 bits in the network mask (the CIDR prefix length), e.g.

sudo ifconfig eth0 142.178.2.23/24

Use the following command to check that you have configured it correctly,

ifconfig eth0 

2. Configure Your Gateway

sudo route add default gw xxx.xxx.xxx.xxx 

Note that xxx.xxx.xxx.xxx is the gateway IP address, e.g.

sudo route add default gw 142.178.2.1 

3. Configure Your DNS Server

sudo gedit /etc/resolv.conf 

Then add the DNS server IP addresses in the format nameserver xxx.xxx.xxx.xxx,

e.g.

nameserver 142.178.0.2 

nameserver 142.178.0.4 

4. If Your Interface eth0 Is Not Enabled, Enable It

sudo ifconfig eth0 up