
Wednesday, August 29, 2007


Shell date and time tricks

One of the things I hate most when programming is dealing with time and date functions and all the special cases that exist. Worse, I always have the feeling I'm walking a path many have walked before, which makes it one of the most unpleasant tasks for me.

Fortunately, it's true: many others have walked that path before, so here are some quick tricks for shell programming using the wonderful UN*X date program:
  1. Converting epoch:
    1. From epoch to anything else: date -d @$epoch_value +FORMAT (where FORMAT is, of course, as described in date(1), and '@' does the actual, undocumented trick).
    2. From anything else to epoch: date -d "$some_date" +%s (plain date +%s gives the current time as epoch).
  2. Calculating times
    1. One day forward: date -d "1 day"
    2. One day backwards: date -d "1 day ago"
    3. Just imagine "1 month", "3 months ago" and the like. Google isn't the only one friendly with human language ;-)
  3. More format conversion: '-d' option accepts several other formats as input, even with calculations:
    1. date -d "1977-08-19 30 years". Yeah! My 30th birthday was on a Sunday. Thanks, date. And it was (as epoch): date -d "1977-08-19 30 years" +%s... 1187474400 :-D
    2. Funny ls:

      ls -l --time-style=long-iso | tail -n +2 | while read perms links user group size d t name
      do
        echo "$perms $links $user $group $size $( date -d "$d $t 1 day" ) $name"
      done
      You can, of course, change the way the date is shown. That works because 'YYYY-MM-DD hh:mm' is a valid input format for date, as is 'YYYY/MM/DD'. (The --time-style=long-iso flag makes ls print the date in exactly those two fields, and tail -n +2 skips the 'total' line.)
So '@', '1 day' and 'date' saved me the day.
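Putting the tricks above together in one runnable sketch (GNU date assumed; the timezone is pinned so the birthday epoch from above comes out right):

```shell
# Epoch → readable: '@' marks the value as epoch seconds
TZ=Europe/Madrid date -d @1187474400 +'%Y-%m-%d %H:%M'   # 2007-08-19 00:00

# Readable → epoch (plain 'date +%s' would give the current time)
TZ=Europe/Madrid date -d '2007-08-19 00:00' +%s          # 1187474400

# Relative-date arithmetic
date -d '1 day'                     # tomorrow
date -d '1 day ago'                 # yesterday
date -d '1977-08-19 30 years' +%A   # Sunday: the birthday check again
```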

Update: These features and more are indeed described in the coreutils info manual. Thanks, mp, for pointing it out.

Thursday, July 19, 2007


Damn GFS

Just a few hours before the public release of the portal we work on, one of the cluster machines got overloaded by issues not relevant here. The fact is that once the machine stopped answering properly, it was fenced by another node; but once that automatic action took place, the whole six-machine GFS cluster went down, leaving all the machines unusable.

This, in addition to all the previous issues we had suffered with GFS, made us seriously consider purging GFS in favor of NFS. It was not an easy decision, as it went fully against all our previous decisions, but we weren't confident about GFS on the production systems. So we migrated in record time, configuring everything by night so that at six o'clock the service would be up properly. And we managed to fulfill this purpose. We made it!

We are now tired after about 27 continuous working hours, but the overall result was quite acceptable. I'm still proud of our design (not so much of my own decisions), which allowed us to make this kind of change so quickly.

But you can be sure we will never again think about installing GFS on any system, as it doesn't seem production suitable (Red Hat itself even says so). And it is not only because of the buggy GFS2 (at least, as of today), but because of the sensation of instability during all the time we had it installed.

So in a few hours our architecture was changed, but it was set up by the very moment we started accepting requests.

Monday, June 18, 2007


IPMI

IPMI stands for "Intelligent Platform Management Interface", an interesting feature deployed on today's servers. I can't forget the question I asked some months ago about an extra ethernet port on the IBM server machines we were working with in those days. The answer was quite simple: it is almost of no use at all; at most you can get some stats and diagnostics, but it is mainly intended for hardware technicians.

Liars (or ignorants)! It is actually quite useful. It is the IPMI port, and it can be used for several management actions (even from the running system) such as power cycling the machine, getting stats, establishing a watchdog interface and the like.

Uses? Do they even need to be told? At first glance, you can get reports about the machine from the operating system itself without needing physical access to the datacenter. You can also restart a machine which is failing to reboot by itself, as long as you can log in one more time. But even more: you can do all that (and more) from a remote system (don't hesitate, it is password protected; at least, if you configure it so). So the next obvious uses are cluster fencing and STONITH (shoot the other node in the head) without the need of power control hardware.

It is a pity we didn't know that when we configured the SAP cluster at my previous job, although the serial cable almost did the work. Surely Mr Navas will be interested in knowing about this technology :-)
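The remote case works through ipmitool's LAN interface instead of the local kernel driver. A minimal sketch, with a hypothetical BMC address and user; the helper only builds and prints the command line, so nothing actually gets power-cycled:

```shell
# Build (and print, rather than run) the ipmitool command line for a
# remote BMC query. Host and user are placeholders; '-a' makes ipmitool
# prompt for the password instead of passing it on the command line.
ipmi_remote() {
    host=$1; user=$2; shift 2
    echo ipmitool -I lan -H "$host" -U "$user" -a "$@"
}

# Locally we use '-I open' (the OpenIPMI kernel driver); remotely:
ipmi_remote 10.0.0.42 admin chassis status
```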

Oh, a simple example:

# ipmitool -I open chassis status
System Power : on
Power Overload : false
Power Interlock : inactive
Main Power Fault : false
Power Control Fault : false
Power Restore Policy : always-off
Last Power Event :
Chassis Intrusion : inactive
Front-Panel Lockout : inactive
Drive Fault : false
Cooling/Fan Fault : false
Sleep Button Disable : not allowed
Diag Button Disable : allowed
Reset Button Disable : not allowed
Power Button Disable : allowed
Sleep Button Disabled: false
Diag Button Disabled : true
Reset Button Disabled: false
Power Button Disabled: true


Another one:

ipmitool -I open chassis power reset

Ooooooops. Next post after crash :-)

Wednesday, June 6, 2007


GFS choice

At work we are designing a clustered architecture for the site we are developing. For that purpose we needed a clustered/distributed filesystem such as NFS, GFS, OCFS and the like. We have all had bad experiences with NFS, as it usually hangs a client if the server goes down. In addition, in that design it would have been a SPOF (single point of failure) we were trying to avoid. So we gave up on it.

The next candidates were GFS and OCFS. Although there are some other suitable filesystems, they are not as widely supported in the Linux kernel and distributions, so they couldn't fit our hosting service level agreements, putting them out of the question. And before you ask, Linux itself was indeed a requirement. In a housing situation, things would have been different.

Back to the FS: at first glance, in our tests the OCFS results weren't as good as the GFS ones. In addition, not all distributions supported online resizing in the packaged version of the OCFS tools, so we finally decided to use GFS as the clustered filesystem. But once we had almost all the infrastructure ready, later tests showed it wasn't as good as we thought, and neither as well documented nor as well supported by Red Hat itself as we expected. In fact, performance was an issue which worried us, as it would slow down the whole site. Fortunately, after testing several dark options, the solution was much simpler than that: GFS is a distributed FS which relies on network performance. Switching to gigabit ethernet did the trick. It boosted performance enough to make us more confident about our design and investment.

Now we can focus on the other matters remaining before our product launches soon, quite soon, frighteningly soon.

Tuesday, May 22, 2007


Emulation vs. Virtualization

These days virtualization is the big new thing for the enterprise. Everybody is migrating to this "new" concept, but what is the difference from emulation?

An interesting one. Just for the sake of correctness:
  • Emulation involves emulating the virtual machine's hardware and architecture. Microsoft's VirtualPC is an example of an emulation-based virtual machine. It emulates the x86 architecture, and adds a layer of indirection and translation at the guest level, which means VirtualPC can run on different chipsets, like the PowerPC, in addition to the x86 architecture. However, that layer of indirection slows down the virtual machine significantly.
  • Virtualization, on the other hand, involves simply isolating the virtual machine within memory. The host instance simply passes the execution of the guest virtual machine directly to the native hardware. Without the translation layer, the performance of a virtualized machine is much faster and approaches native speeds. However, since the native hardware is used, the chipset of the virtual machine must match; usually, this means the Intel x86 architecture. VMware is an example of this type of application for Windows.

Monday, April 16, 2007


Photo tech

In the last two weeks I've discovered (or have been told about) some programs quite useful for an outdoor technician (amateur photographer):
  • gpscorrelate, which adds GPS tags to EXIF fields by correlating GPS tracks with photograph shooting times. Finally, I am able to geoposition my trekking pictures automagically! Next step: some way to create Google Maps paths with POIs.
  • hugin for image blending. The following image was obtained in Gredos, composed from four pictures.
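The gpscorrelate step is essentially a one-liner. The sketch below only prints the command rather than running it, and the track and photo names are hypothetical; check gpscorrelate(1) for the exact options in your version:

```shell
# 'track.gpx' is the recorded GPS track; gpscorrelate matches each photo's
# EXIF timestamp against the track points and writes the interpolated
# position back into the photo's GPS EXIF tags.
cmd="gpscorrelate -g track.gpx trip_*.jpg"
echo "$cmd"
```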