
Friday, December 28, 2012

How to emulate a Raspberry Pi computer

How much money would you have to spend to assemble a simple x86 PC (an Intel/AMD-compatible PC)? At current market prices it sounds almost impossible to buy all the necessary components on a budget under $100. But if Intel binary compatibility is not a requirement, you can try the cheapest ARM-based computer around, called the Raspberry Pi.

What is Raspberry Pi

For only about $35 you can buy a complete ARM-compatible PC.

The Raspberry Pi (short: RPi or RasPi) is an ultra-low-cost ($25-$35) credit-card sized Linux computer.

The Raspberry Pi measures 85.60mm x 56mm x 21mm, with a little overlap for the SD card. It weighs 45g. 

Graphics capabilities are roughly equivalent to the original Xbox's level of performance.

Overall real-world performance is comparable to an old 300MHz Pentium II, but with much better graphics.

The device is powered by 5V micro USB.

Raspberry Pi emulator

You can use the QEMU emulator, which runs on Linux and Windows, to boot and test almost any Raspberry Pi-compatible distribution. QEMU takes care of emulating the underlying ARM hardware when the system is started. For detailed instructions you can use Google or follow one of these links:

  • windows qemu

  • Linux qemu

  • Others

Some screenshots from booting the Raspbian “wheezy” image can be seen below.
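To give a taste of what this looks like on Linux, a typical invocation is sketched below. The image and kernel file names are assumptions based on the 2012 Raspbian release; QEMU cannot emulate the Pi's Broadcom SoC directly, so a separate QEMU-friendly kernel is used instead of the one inside the image.

```shell
# Emulate an ARM1176 CPU on the Versatile/PB board (limited to 256MB RAM);
# kernel-qemu is a kernel built for QEMU, not the Pi's own kernel.
qemu-system-arm \
  -M versatilepb -cpu arm1176 -m 256 \
  -kernel kernel-qemu \
  -append "root=/dev/sda2 rootfstype=ext4 rw panic=1" \
  -hda 2012-12-16-wheezy-raspbian.img \
  -no-reboot -serial stdio
```

On first boot the Raspbian configuration tool appears exactly as it would on real hardware.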

Thursday, December 27, 2012

Openstack auto provisioning with Puppet and razor

To build and operate a big OpenStack infrastructure you have to be able to deploy and provision many new servers quickly and effectively.

This is not a definitive list, but as a minimum you will need to make sure that all your servers run the right OS version, that all dependency packages are installed, and finally that the right OpenStack code (projects like nova, cinder, etc.) is deployed. This is only a very simple example of what you need to think about. As a demonstration, in this blog post I want to show how this can be achieved with Puppet and the Razor tool.


How to provision and configure OpenStack servers

Analysis and results description

OpenStack + Puppet + Razor: all the details of how to run this can be found here:
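To give a flavour of the Puppet side, a minimal manifest for an OpenStack compute node might look like the sketch below. The node, package and service names are assumptions; a real deployment would use the upstream puppetlabs OpenStack modules rather than raw resources.

```
node 'compute01' {
  # make sure the right OpenStack packages are installed ...
  package { 'nova-compute':
    ensure => installed,
  }

  # ... and that the service is running and starts on boot
  service { 'nova-compute':
    ensure  => running,
    enable  => true,
    require => Package['nova-compute'],
  }
}
```

Razor's job is everything before this point: PXE-booting the bare-metal node, matching it against a policy, installing the OS and handing it over to Puppet.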


Dev vs Ops team and DevOps anti patterns

In this video from PuppetConf, R. Tyler Croy talks about important aspects and rules that sysadmin, engineering and DevOps teams follow. He gives many examples and tries to explain the principles behind many bad ideas within organisations when it comes to running operations, engineering or dev teams. He calls these bad ideas ops anti-patterns. The full video can be found here: We'll Do It Live - Operations Anti-Pattern. Below are a couple of notes I took while watching it :).
  • ~4.20m; use configuration management; we are in a transition and your infrastructure should be treated no differently than software code
  • ~6.00m; use automated testing
  • ~7.40m; if your infrastructure is treated as code, please do implement a release management process
  • ~8.40m; scripts written by the ops team are a great way to start, but without some good software engineering practices it doesn't work in the long run
  • ~9.20m; don't use the root user for code deployments into production, but rather well-known tools designed for it
  • ~11.40m; ops teams tend to be reactive instead of proactive; this is dangerous because your SPOFs (single points of failure) change as your application matures and gets more complex
  • ~14.30m; the dev team may take care of HA in software, but there is a lot more that needs to be done at the infrastructure level as well; dedicate resources or highly skilled contractors to work on it as soon as possible
  • ~14.40m; HA is never extensively tested, so failures should be expected
  • ~15.30m; when building in the cloud you have to assume that everything, and I mean everything, is going to fail
  • ~16.00m; make sure that your alerting system is not over-logging
  • ~18.45m; this sysadmin attitude is wrong: never touch a running system
  • ~19.30m; use continuous integration to minimize long-term risk
  • ~21.40m; isolated silos, like separate dev and ops teams that don't talk to each other, are wrong
  • ~24.30m; share the necessary information about your production environment with developers
  • ~25.00m; strict control is an enemy of creativity
  • ~25.30m; too much IT and ops control will lead to wrong and bad workarounds
  • ~26.50m; silos and poor communication will lead to a waste of resources
  • ~29.00m; don't jump on hyped products/tools only because they are popular on the Internet; when choosing the right tools you have to weigh the risk between:
    • knowing from personal experience the bad design and limitations the old tools have vs.
    • knowing the advantages the new products have only from reading about them
  • ~30.40m; don't invent tools in house before you evaluate existing solutions
  • ~33.50m; don't build your own packaging system; use one that open source or vendors offer
  • ~36.40m; don't netboot all your hardware; there is a reason servers have local disks that can be used
  • ~39.20m; don't delete your production data to test your backups
  • ~40.30m; don't trust your vendor
  • ~43.00m; don't use multi-data-center deployments if your application is not ready for it; use other methods to implement disaster recovery if needed
  • ~45.00m; have a centralized location for the code that is used for production deployments

Thursday, December 20, 2012

How to document Python code

There are a couple of ways to document Python code. In this article I'm not trying to compare them, but rather to show what the final documentation may look like.


How many tools we can use to generate documentation from Python source.
What the final documentation looks like after it is generated from the source code.


When searching on Google we quickly find that the most popular ones are (there are more that I'm not listing here):
  • epydoc
  • sphinx 
  • doxygen

These are examples of how the documentation looks.

Epydoc - It has the style of the classical Javadoc documentation introduced by Sun when Java was released.

Sphinx - the layout is very different from Epydoc. It seems to be well liked by the Python project itself: all the documentation for Python is generated using it.
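For comparison, a Sphinx project is usually bootstrapped with the commands below. This is only a sketch: sphinx-quickstart asks interactive questions whose answers are omitted here, and the directory layout is an assumption.

```shell
$ sphinx-quickstart docs          # generates conf.py and a Makefile under docs/
$ sphinx-apidoc -o docs/source .  # generates .rst stubs from the Python sources
$ make -C docs html               # builds the HTML documentation
```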


Epydoc installation

# python --version
Python 2.7.3
# aptitude show python-epydoc
# aptitude install  python-epydoc
# aptitude install apache2

The source code
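The source file used in this example was originally shown as a screenshot. A minimal sketch of what such a module might look like is below; the module name matches the epydoc output that follows, but the class contents are an assumption.

```shell
# Create a small module documented with epydoc-style @fields
cat > example_epydoc.py <<'EOF'
"""An example module documented with epydoc fields."""


class MyClass(object):
    """An example class.

    @ivar name: the name stored by the instance
    @type name: str
    """

    def __init__(self, name):
        """Create a new MyClass.

        @param name: value to store
        @type name: str
        """
        self.name = name

    def greet(self):
        """Return a greeting string.

        @return: a greeting that includes the stored name
        @rtype: str
        """
        return "Hello, %s!" % self.name
EOF
```

Epydoc parses the `@param`, `@return` and `@ivar` fields from the docstrings and renders them as structured tables in the generated HTML.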

Generation of epydoc documentation
# epydoc --verbose --verbose example_epydoc.py
Building documentation
[  0%] example_epydoc
Merging parsed & introspected information
[  0%] example_epydoc
Linking imported variables
[  0%] example_epydoc
[ 12%] example_epydoc.MyClass
Indexing documentation
[  0%] example_epydoc
Checking for overridden methods
[ 12%] example_epydoc.MyClass
Parsing docstrings
[  0%] example_epydoc
[ 12%] example_epydoc.MyClass
Inheriting documentation
[ 12%] example_epydoc.MyClass
Sorting & Grouping
[  0%] example_epydoc
[ 12%] example_epydoc.MyClass
Writing HTML docs to 'html'
[  4%] epydoc.css
[  9%] epydoc.js
[ 13%] identifier-index.html
[ 45%] module-tree.html
[ 50%] class-tree.html
[ 54%] help.html
[ 59%] frames.html
[ 63%] toc.html
[ 68%] toc-everything.html
[ 72%] toc-example_epydoc-module.html
[ 77%] example_epydoc-module.html
[ 81%] example_epydoc.MyClass-class.html
[ 86%] example_epydoc-pysrc.html
[ 90%] redirect.html
[ 95%] api-objects.txt
[100%] index.html

Timing summary:
  Building documentation.............     0.2s |=================================================
  Merging parsed & introspected i....     0.0s |
  Linking imported variables.........     0.0s |
  Indexing documentation.............     0.0s |
  Checking for overridden methods....     0.0s |
  Parsing docstrings.................     0.0s |=
  Inheriting documentation...........     0.0s |
  Sorting & Grouping.................     0.0s |
  Writing HTML docs to 'html'........     0.0s |=======

Showing the doc

You have to configure Apache and point it to the generated 'html' directory. Once the page is loaded in the browser, the documentation looks like this:
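A minimal Apache 2.2-style snippet for serving the directory might look as follows; the filesystem path is an assumption, so adjust it to wherever epydoc wrote the 'html' directory.

```
# /etc/apache2/conf.d/epydoc
Alias /epydoc /var/www/epydoc/html
<Directory /var/www/epydoc/html>
    Options Indexes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
```

After reloading Apache, the documentation is available under http://localhost/epydoc/.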


What do you need to implement virtual networking and build a hybrid cloud

To build an infrastructure that can host hybrid cloud environments, or to benefit from the flexibility that virtual networking provides, we need both software and hardware components. Below are a couple of links I found when researching this topic.

VMware ESXi/vSphere
Citrix XenServer
Opensource Linux alternatives
Microsoft Hyper-V

Monday, December 10, 2012

A simple GIMP snippets for graphic files manipulations

In MS Windows we can use the Paint [1] program to manipulate graphics files. In Linux the most famous and often recommended alternative to it is GIMP.

The problem with GIMP is that, being a very powerful tool, it isn't as intuitive and simple to use as Paint. Below are some tricks I use when working with GIMP.

How to draw a square or border line

Once you create a selection you can draw a line to make it visible. An example is shown below.

You can do it in GIMP using: Edit -> Stroke Selection.
More info about this can be found here [2].

How to create a new image from a selected region

With its help we can, for example, extract only the Ubuntu logo from the original picture below.

Original picture:

After extracting and cropping:

You can do it in GIMP using: Image -> Crop to Selection.
More info about this can be found here [3].




Proprietary AMD graphics drivers for Linux

How do I start Catalyst Control Center (CCC) from bash

I created a demo user to test my X server config and the ATI graphics driver. I ran into a problem: my user didn't have the relevant permissions to run commands with sudo. Every time I tried to launch the Catalyst Control Center I got a popup window asking for a password to perform an administrative task.

  1. The name of the program is displayed in the popup window: amdcccle
  2. If you didn't notice it, you can find it using these methods:
$ dpkg -l | grep ati | egrep -iv 'configuration|application|automatic|compatible|compatibility|static|foomatic|ating|ation|ative' | grep ati
$ dpkg -l | grep amd

$ dpkg -L fglrx-amdcccle | grep bin

How do I verify that my driver is loaded

More checks can be found in [1]. As a simple check you can run:

$ lsmod | grep fglrx
$ dmesg | grep fglrx

Apart from CCC, what command can I use to list, print and change the graphics driver settings

You can use the 'aticonfig' command. Example output can look like this:

$ aticonfig --lsa
* 0. 06:00.0 ATI Radeon HD 5700 Series

$ aticonfig --odgt
Default Adapter - ATI Radeon HD 5700 Series
                  Sensor 0: Temperature - 37.00 C

$ aticonfig --odgc
Default Adapter - ATI Radeon HD 5700 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    850           1200
             Current Peak :    850           1200
  Configurable Peak Range : [600-960]     [1200-1445]
                 GPU load :    0%

  1. Unofficial Wiki for the AMD Linux Driver

Sunday, December 9, 2012

High availability pattern using anycast IP addresses for cloud and applications

An anycast architecture that helps to create and achieve HA

Applications demand more and more resources to run efficiently. Even with the right amount of computational resources (servers, CPU, RAM, storage), for applications to be considered efficient and successful on the market they have to meet many more requirements. It is impossible to list all of them here, as they can depend on internal factors (for example, those driven by the application architecture itself) or rely on external factors that may be specific and unique to a customer and an environment.

In this short blog post, however, I would like to discuss the importance of the scalability factor and show one pattern that can be used to build highly available and efficient infrastructure systems.

There are two concepts for how we can implement scalability: scale up vs. scale out. For more information about scale up (or vertical scaling) these links provide further details [1]. We will concentrate here only on the scale-out option. All the pictures below are based on this presentation; the slides can be found here: OpenStack-Design-Summit-HA-Pairs-Are-Not-The-Only-Answer.
  1. To fully benefit from the HA pattern your application architecture should really use the shared-nothing paradigm.

  2. That way, if a failure occurs, only an isolated, small part of the computational resources will be impacted.

  3. Next we have to configure our routers and implement the necessary changes to the routing protocol.
  4. The OSPF routing protocol is one example; others can be used in a similar way as well. More info can be found here [4].

    The slides show only a fraction of the configuration. Another good example with configs can be found here: Anycast DNS - Part 4, Using OSPF

  5. Finally, you have to configure your servers to listen for and accept traffic on the anycast IP.
A best practice is to configure the external IP on the loopback interface, disable the ARP protocol for it and bind your application specifically to this IP.
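On a Linux server, the last step can be sketched with the commands below. The anycast address 192.0.2.10 is a placeholder; the sysctl settings suppress ARP activity for the loopback-bound address so that only the routing protocol attracts traffic to this host.

```shell
# Put the anycast service IP on the loopback interface
ip addr add 192.0.2.10/32 dev lo

# Don't answer ARP requests for addresses configured on lo,
# and prefer the real interface address as the ARP source
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

The application is then bound specifically to 192.0.2.10, and the routing daemon (e.g. OSPF) advertises the /32 so the nearest healthy server receives the traffic.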

  3. Seattle Conference on Scalability: Lessons In Building Scalable Systems
  4. What is “anycast” and how is it helpful?