Monday, November 25, 2013

Maze algorithm in Python.

I was bored over the weekend and thought I might as well play around with Python a little. So I wrote the following code to generate a maze. It uses a backtracking algorithm to carve a maze of any given size. I personally enjoyed writing it.
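The listing itself doesn't appear above, so here is a minimal sketch of a recursive-backtracking maze generator in the same spirit. This is my reconstruction, not the original code; the output below renders walls two characters wide, which this sketch does not reproduce.

```python
import random

def make_maze(w, h):
    """Carve a w x h cell maze with recursive backtracking; return it as text."""
    # (2h+1) x (2w+1) character grid, all walls to start
    grid = [["#"] * (2 * w + 1) for _ in range(2 * h + 1)]
    visited = [[False] * w for _ in range(h)]

    def carve(cx, cy):
        visited[cy][cx] = True
        grid[2 * cy + 1][2 * cx + 1] = " "
        dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        random.shuffle(dirs)
        for dx, dy in dirs:
            nx, ny = cx + dx, cy + dy
            if 0 <= nx < w and 0 <= ny < h and not visited[ny][nx]:
                # knock down the wall between (cx, cy) and (nx, ny)
                grid[2 * cy + 1 + dy][2 * cx + 1 + dx] = " "
                carve(nx, ny)

    carve(0, 0)
    return "\n".join("".join(row) for row in grid)

print(make_maze(10, 10))
```

For very large mazes the recursion would have to be replaced with an explicit stack to stay under Python's recursion limit.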


And the output:

##############################
#       ##                   #
# ## ## ##### ############## #
# ## ## ##### ############## #
# ## ##       ##             #
# ## ########### #############
# ## ########### #############
# ##    ##    ## ##          #
####### ## ##### ##### ##### #
####### ## ##### ##### ##### #
#       ##    ##    ##    ## #
# ######## ## ##### ## ## ####
# ######## ## ##### ## ## ####
#    ## ## ##    ## ## ##    #
#### ## ## ## ##### ##### ## #
#### ## ## ## ##### ##### ## #
#    ## ## ##    ##    ## ## #
# ##### ## ##### ##### ##### #
# ##### ## ##### ##### ##### #
#    ##    ##       ##       #
# ## ##### ## ############## #
# ## ##### ## ############## #
# ##    ## ##          ##    #
# ##### ## ######## ##### ####
# ##### ## ######## ##### ####
#    ## ## ##    ##       ## #
####### ## ##### ########### #
####### ## ##### ########### #
#       ##                   #
##############################

Friday, November 08, 2013

Auto-compile Jasper subreports in OpenNMS

I was trying to build a database report in OpenNMS to collect CPU utilization information from JRobin files. To use a subreport and have it recompiled every time the report runs, I ended up with the following arrangement of jrxml files. May it be useful to someone.



The parent report file:
The subreport file:
/opt/opennms/etc/report-templates/subreports/PeakCPU_subreport.jrxml
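The jrxml listings themselves didn't survive here, but the usual trick is a subreportExpression in the parent report that calls JasperCompileManager.compileReport() on the subreport's .jrxml path, so it is recompiled on every run. A sketch of the relevant element in the parent report (the reportElement geometry is a placeholder and the surrounding band is elided):

```xml
<subreport>
  <reportElement x="0" y="0" width="555" height="100"/>
  <connectionExpression><![CDATA[$P{REPORT_CONNECTION}]]></connectionExpression>
  <subreportExpression class="net.sf.jasperreports.engine.JasperReport">
    <![CDATA[net.sf.jasperreports.engine.JasperCompileManager.compileReport(
        "/opt/opennms/etc/report-templates/subreports/PeakCPU_subreport.jrxml")]]>
  </subreportExpression>
</subreport>
```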


Friday, June 28, 2013

RHEL4 or an old Linux, but you want to roll back an LVM snapshot

LVM has supported snapshots for a long time, but merge support, which lets you roll back to a snapshot point, only arrived recently (2-3 years ago, with RHEL6). Before that, I think snapshots were mainly a way to freeze a filesystem so it could be backed up.

Well ... what about an old kernel which only knows how to snapshot, but not how to merge!?

Easy: boot the system with a kernel that does know how, for example from a RHEL6 DVD in rescue mode. LVM is LVM anyway.

Or, you can dd the snapshot device file to an equally sized disk to create a filesystem equivalent to the snapshot.

OK ... to show you how, follow these guidelines:

1. Create a snapshot:
There must be enough free space in the VG to host the snapshot. It would be nice if you could host the snapshot LV in another VG, but that feature is not available. Add a new disk to the VG to make some room if you need to.


#lvcreate -L<size> -s -n LogVol00-backup /dev/VolGroup00/LogVol00

2. Make your changes
Cool, now you have the snapshot. Go ahead with your changes.

3. Clean up
If all went well, remove the snapshot LV and release the disk:

#lvremove /dev/VolGroup00/LogVol00-backup

3-1. If you are on RHEL6, a newer distribution, or a recent kernel, you can merge the snapshot back instead:

#lvconvert --merge /dev/VolGroup00/LogVol00-backup

If your LVs are mounted, nothing will happen until you reboot, or unmount, deactivate and reactivate the LV.

#umount /mountpoint
#lvchange -an /dev/VolGroup00/LogVol00
#lvchange -ay /dev/VolGroup00/LogVol00
#mount /mountpoint

Obviously you can't do this if your LV is the root filesystem. Just reboot if you are stuck.

3-2. What if your OS does not support merge? Boot from a RHEL6 DVD, go into rescue mode, and feel free to do step 3-1 there.

Or, alternatively, add a disk or create an LV equal in size to the original LV and do the following:

dd if=/dev/VolGroup00/LogVol00-backup of=/dev/VolGroup01/DestinationLV bs=4M

This produces an LV with the contents of the snapshot LV. Don't be scared of dd. You know what you are doing, don't you?


Thursday, May 16, 2013

OpenVPN and XOR obfuscation

UPDATED: 13/09/2016

I patched the current version 2.3.10 and pushed it to my GitHub:
https://github.com/shenavaa/openvpn


UPDATED 15/07/2014:

I managed to patch and compile the latest version of OpenVPN 2.3.4 for Windows. Please download the compiled Windows auto-installer binaries from here and the sources from here.

----


I went somewhere for a while, and during my visit I had a chance to play around with OpenVPN. During a lazy afternoon I came up with a silly idea: adding a layer of XOR obfuscation on top of whatever OpenVPN already has. I even managed to compile the Windows client of OpenVPN and run it on Windows.

The good thing about XOR obfuscation is that it adds no overhead to the packets, and it is fast and easy.

The bigger an organization is, the harder it is for its LI/security layers to detect the algorithm or the protocol of the packets on the network. I have seen AI engines that learn protocols being used to block unwanted and previously undetected packets! Their solution is sillier than what I just did. Trust me. ;)

I did it, obviously, by adding one simple function and a couple of hacks in other source files.

## in xor.h
#ifndef _XOR_H
#define _XOR_H

void encbuffer(unsigned char *buf, int size, unsigned char key);

#endif /* _XOR_H */

## in xor.c
#include "xor.h"

/* XOR every byte of the buffer in place with a single-byte key.
   Calling it twice with the same key restores the original data. */
void encbuffer(unsigned char *buf, int size, unsigned char key)
{
    int i;
    for (i = 0; i < size; i++) {
        buf[i] ^= key;
    }
}
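Since XOR is its own inverse, the very same encbuffer call both obfuscates and de-obfuscates. A quick Python check of that round-trip property (my illustration, not part of the patch):

```python
def encbuffer(buf: bytearray, key: int) -> None:
    # XOR every byte in place with a single-byte key, mirroring xor.c
    for i in range(len(buf)):
        buf[i] ^= key

data = bytearray(b"handshake packet")
encbuffer(data, 52)            # obfuscate with the xorkey from the config below
assert bytes(data) != b"handshake packet"
encbuffer(data, 52)            # the exact same call de-obfuscates
assert bytes(data) == b"handshake packet"
```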


So my OpenVPN configuration file simply becomes something like the following:

## On the server
local X.X.X.X
dev tap
verb 4
#mute 10
port 36
tun-mtu-extra 32
tun-mtu 1500
up-delay
ifconfig 172.16.4.1 255.255.255.0
ping 10
comp-lzo yes
fragment 1100
xorkey 52


## On the client
remote X.X.X.X
dev tap
verb 4
#mute 10
port 36
tun-mtu-extra 32
tun-mtu 1500
up-delay
ifconfig 172.16.4.2 255.255.255.0
ping 10
comp-lzo yes
fragment 1100
xorkey 52


My sources are here for whoever is interested. It's OpenVPN 2.3.1. I've cleaned it up, and all you need to compile the source after unpacking is "./configure; make; make install".


This is the beauty of open source software. Feel free to distribute the love.

Monday, April 08, 2013

Running jobs in different time zones and daylight saving (DST) with crontab

Oh, this daylight saving thing. Why don't people set their whole environment's clocks to UTC?
Believe me, we would all have better lives if that happened.

Anyway, newer versions of cron have introduced a good way to run jobs in different time zones. Just look at the following example crontab file.


#######################IMPORTANT#######################
# Anything beyond this line will run in Australia/Victoria time zone
#######################################################
CRON_TZ=Australia/Victoria
#######################################################

00 08 * * * /home/healthcheck/linux/send-email.sh

#######################IMPORTANT#######################
# Anything beyond this line will run in America/New_York time zone
#######################################################
CRON_TZ=America/New_York  
#######################################################

00 08 * * * /home/healthcheck/linux/syslog-check.sh

#######################IMPORTANT#######################
# Anything beyond this line will run in UTC time zone
#######################################################
CRON_TZ=UTC
#######################################################
01 16 * * * /home/healthcheck/linux/runDST.sh


And how to check when this year's daylight saving transitions happen?

[healthcheck@adm01 ~]$  zdump -v Australia/Victoria| grep `date +%Y`
Australia/Victoria  Sat Apr  6 15:59:59 2013 UTC = Sun Apr  7 02:59:59 2013 EST isdst=1 gmtoff=39600
Australia/Victoria  Sat Apr  6 16:00:00 2013 UTC = Sun Apr  7 02:00:00 2013 EST isdst=0 gmtoff=36000
Australia/Victoria  Sat Oct  5 15:59:59 2013 UTC = Sun Oct  6 01:59:59 2013 EST isdst=0 gmtoff=36000
Australia/Victoria  Sat Oct  5 16:00:00 2013 UTC = Sun Oct  6 03:00:00 2013 EST isdst=1 gmtoff=39600
[healthcheck@adm01 ~]$
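If zdump isn't handy, the same transition instants can be recovered with Python's zoneinfo module (3.9+). This is my own sketch, not related to cron itself: it scans the year hour by hour for UTC-offset changes, and uses Australia/Melbourne, the canonical tzdb name for the Victoria zone:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def dst_transitions(zone: str, year: int):
    """Return the UTC instants during `year` at which the zone's offset changes."""
    tz = ZoneInfo(zone)
    found = []
    t = datetime(year, 1, 1, tzinfo=timezone.utc)
    while t.year == year:
        nxt = t + timedelta(hours=1)
        # an offset change between consecutive hours marks a DST transition
        if t.astimezone(tz).utcoffset() != nxt.astimezone(tz).utcoffset():
            found.append(nxt)
        t = nxt
    return found

for t in dst_transitions("Australia/Melbourne", 2013):
    print(t)   # 2013-04-06 16:00:00+00:00 and 2013-10-05 16:00:00+00:00
```

The two instants it prints match the zdump output above.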


Thursday, March 14, 2013

I/O Scheduler algorithms in Linux

I just remembered that someone once asked me, during a job interview, about the available I/O scheduler algorithms in Linux. In my 14 years of Linux experience I have still not come across a situation where I needed to change a system's I/O scheduler, and personally I don't believe it needs changing in any normal server environment. But maybe this is a good review, by Red Hat.
Really, what was that guy thinking!?
http://www.redhat.com/magazine/008jun05/features/schedulers/

Choosing an I/O Scheduler for Red Hat® Enterprise Linux® 4 and the 2.6 Kernel

The Linux kernel, the core of the operating system, is responsible for controlling disk access by using kernel I/O scheduling. Red Hat Enterprise Linux 3 with a 2.4 kernel base uses a single, robust, general purpose I/O elevator. The 2.4 I/O scheduler has a reasonable number of tuning options by controlling the amount of time a request remains in an I/O queue before being serviced using the elvtune command. While Red Hat Enterprise Linux 3 offers most workloads excellent performance, it does not always provide the best I/O characteristics for the wide range of applications in use by Linux users these days. The I/O schedulers provided in Red Hat Enterprise Linux 4, embedded in the 2.6 kernel, have advanced the I/O capabilities of Linux significantly. With Red Hat Enterprise Linux 4, applications can now optimize the kernel I/O at boot time, by selecting one of four different I/O schedulers to accommodate different I/O usage patterns:
  • Completely Fair Queuing—elevator=cfq (default)
  • Deadline—elevator=deadline
  • NOOP—elevator=noop
  • Anticipatory—elevator=as
Add the elevator options from Table 1 to your kernel command in the GRUB boot loader configuration file (/boot/grub/grub.conf) or the eLILO command line. Red Hat Enterprise Linux 4 has all four elevators built-in; no need to rebuild your kernel.
The 2.6 kernel incorporates the best I/O algorithms that developers and researchers have shared with the open-source community as of mid-2004. These schedulers have been available in Fedora Core 3 and will continue to be used in Fedora Core 4. There have been several good characterization papers evaluating Linux 2.6 I/O schedulers. A few are referenced at the end of this article. This article details our own study based on running Oracle 10G in both OLTP and DSS workloads with EXT3 file systems.

Red Hat Enterprise Linux 4 I/O schedulers

Included in Red Hat Enterprise Linux 4 are four custom configured schedulers from which to choose. They each offer a different combination of optimizations.
The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4. As the name implies, CFQ maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. CFQ is well suited for mid-to-large multi-processor systems and for systems which require balanced I/O performance over multiple LUNs and I/O controllers.
The Deadline elevator uses a deadline algorithm to minimize I/O latency for a given I/O request. The scheduler provides near real-time behavior and uses a round robin policy to attempt to be fair among multiple I/O requests and to avoid process starvation. Using five I/O queues, this scheduler will aggressively re-order requests to improve I/O performance.
The NOOP scheduler is a simple FIFO queue and uses the minimal amount of CPU/instructions per I/O to accomplish the basic merging and sorting functionality to complete the I/O. It assumes performance of the I/O has been or will be optimized at the block device (memory-disk) or with an intelligent HBA or externally attached controller.
The Anticipatory elevator introduces a controlled delay before dispatching the I/O to attempt to aggregate and/or re-order requests improving locality and reducing disk seek operations. This algorithm is intended to optimize systems with small or slow disk subsystems. One artifact of using the AS scheduler can be higher I/O latency.

Choosing an I/O elevator

The definitions above may give enough information to make a choice for your I/O scheduler. The other extreme is to actually test and tune your workload on each I/O scheduler by simply rebooting your system and measuring your exact environment. We have done just that for Red Hat Enterprise Linux 3 and all four Red Hat Enterprise Linux 4 I/O schedulers using an Oracle 10G I/O workloads.
Figure 1 shows the results of running an Oracle 10G OLTP workload running on a 2-CPU/2-HT Xeon with 4 GB of memory across 8 LUNs on an LSI Logic MegaRAID controller. The OLTP load ran mostly 4k random I/O with a 50% read/write ratio. The DSS workload consists of 100% sequential read queries using large 32k-256k byte transfer sizes.

Figure 1. Red Hat Enterprise Linux 4 IO schedulers vs. Red Hat Enterprise Linux 3 for database Oracle 10G oltp/dss (relative performance)

The CFQ scheduler was chosen as the default since it offers the highest performance for the widest range of applications and I/O system designs. We have seen CFQ excel in both throughput and latency on multi-processor systems with up to 16-CPUs and for systems with 2 to 64 LUNs for both UltraSCSI and Fiber Channel disk farms. In addition, CFQ is easy to tune by adjusting the nr_requests parameter in /proc/sys/scsi subsystem to match the capabilities of any given I/O subsystem.
The Deadline scheduler excelled at attempting to reduce the latency of any given single I/O for real-time like environments. A problem which depends on an even balance of transactions across multiple HBA, drives or multiple file systems may not always do best with the Deadline scheduler. The Oracle 10G OLTP load using 10 simultaneous users spread over eight LUNs showed improvement using Deadline relative to Red Hat Enterprise Linux 3's I/O elevator, but was still 12.5% lower than CFQ.
The NOOP scheduler indeed freed up CPU cycles but performed 23% fewer transactions per minute when using the same number of clients driving the Oracle 10G database. The reduction in CPU cycles was proportional to the drop in performance, so perhaps this scheduler may work well for systems which drive their databases into CPU saturation. But CFQ or Deadline yield better throughput for the same client load than the NOOP scheduler.
The AS scheduler excels on small systems which have limited I/O configurations and have only one or two LUNs. By design, the AS scheduler is a nice choice for client and workstation machines where interactive response time is a higher priority than I/O latency.

Summary: Have it your way!

The short summary of our study indicates that there is no SINGLE answer to which I/O scheduler is best. The good news is that with Red Hat Enterprise Linux 4 an end-user can customize their scheduler with a simple boot option. Our data suggests the default Red Hat Enterprise Linux 4 I/O scheduler, CFQ, provides the most scalable algorithm for the widest range of systems, configurations, and commercial database users. However, we have also measured other workloads whereby the Deadline scheduler out-performed CFQ for large sequential read-mostly DSS queries. Other studies referenced in the section "References" explored using the AS scheduler to help interactive response times. In addition, noop has proven to free up CPU cycles and provide adequate I/O performance for systems with intelligent I/O controller which provide their own I/O ordering capabilities.
In conclusion, we recommend baselining an application with the default CFQ. Use this article and its references to match your application to one of the studies. Then adjust the I/O scheduler via the simple command line re-boot option if seeking additional performance. Make only one change at a time, and use performance tools to validate the results.

 

/proc/cpuinfo, what cpu flags mean?

I always keep forgetting what the CPU flags in Linux mean and what CPU architecture I have.

http://www.gentoo-wiki.info/Gentoo:/proc/cpuinfo

Intel flags (this table is currently identical to /usr/include/asm/cpufeature.h; hopefully some hardware god will share his wisdom and expand it)

fpu: Onboard (x87) Floating Point Unit
vme: Virtual Mode Extension
de: Debugging Extensions
pse: Page Size Extensions
tsc: Time Stamp Counter; support for the RDTSC instruction
msr: Model-Specific Registers
pae: Physical Address Extensions: ability to access 64GB of memory; only 4GB can be accessed at a time though
mce: Machine Check Exception
cx8: CMPXCHG8B instruction
apic: Onboard Advanced Programmable Interrupt Controller
sep: SYSENTER/SYSEXIT instructions; SYSENTER is used to jump into kernel memory during system calls, and SYSEXIT to jump back to user code
mtrr: Memory Type Range Registers
pge: Page Global Enable
mca: Machine Check Architecture
cmov: CMOV instruction
pat: Page Attribute Table
pse36: 36-bit Page Size Extensions: allows mapping 4 MB pages into the first 64GB of RAM; used with PSE
pn: Processor Serial Number; only available on Pentium 3
clflush: CLFLUSH instruction
dtes: Debug Trace Store
acpi: ACPI via MSR
mmx: MultiMedia Extension
fxsr: FXSAVE and FXRSTOR instructions
sse: Streaming SIMD Extensions. Single instruction, multiple data: lets you do a bunch of the same operation on different pieces of input in a single clock tick.
sse2: Streaming SIMD Extensions 2. More of the same.
selfsnoop: CPU self snoop
acc: Automatic Clock Control
IA64: IA-64 (Itanium) processor
ht: HyperThreading. Introduces an imaginary second processor that doesn't do much but lets you run threads in the same process a bit quicker.
nx: No-eXecute bit. Prevents arbitrary code running via buffer overflows.
pni: Prescott New Instructions, aka SSE3
vmx: Intel Vanderpool hardware virtualization technology
svm: AMD "Pacifica" hardware virtualization technology
lm: "Long Mode": the chip supports the AMD64 instruction set
tm: "Thermal Monitor": thermal throttling with IDLE instructions, usually hardware controlled in response to CPU temperature
tm2: "Thermal Monitor 2": decreases speed by reducing the multiplier and vcore
est: "Enhanced SpeedStep"



Tuesday, January 15, 2013

Attach multiple attachments to an email in shell

This is a sample I have written. It may give you an idea.

#!/bin/sh
BOUNDARY="=== This is the boundary between parts of the message. ==="
DATE=`date +%Y%m%d`

(
#echo "To:  ashenavandeh@??????.com"
echo "Subject: last 24 hours high priority Syslog messages"
echo "MIME-Version: 1.0"
echo "Content-Type: MULTIPART/MIXED; "
echo "    BOUNDARY=\"$BOUNDARY\""
echo
echo "--${BOUNDARY}"
echo "Content-Type: TEXT/html;"
echo
echo "<html><body>"
echo "<H3>Last 24 hours high priority Syslog messages -"
date
echo "</H3>"
psql -U rsyslog syslog --html -c "select devicereportedtime as Date,Priority,fromhost as Source,syslogtag as Proc_Info,message as Message from systemevents where priority < 3 and devicereportedtime > now() - interval '1 day';"
echo "</body></html>"
echo
echo "--${BOUNDARY}"
echo "Content-Type: application/vnd.ms-excel; charset=US-ASCII"
echo "Content-disposition: attachment; filename=syslog-$DATE.csv"
echo
psql -U rsyslog syslog -A -F ',' -c "select devicereportedtime as Date,Priority,fromhost as Source,syslogtag as Proc_Info,message as Message from systemevents where priority < 3 and devicereportedtime > now() - interval '1 day';"
echo
echo "--${BOUNDARY}--"
) | /usr/sbin/sendmail -t
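The same message can also be built with Python's email package, which generates the boundary and MIME headers itself. A rough equivalent, as a sketch: the HTML and CSV strings stand in for the psql output, and the recipient is left out just like in the shell version.

```python
from email.message import EmailMessage

def build_report(html_body: str, csv_data: str) -> EmailMessage:
    """Build the multipart report: plain fallback + HTML body + CSV attachment."""
    msg = EmailMessage()
    msg["Subject"] = "last 24 hours high priority Syslog messages"
    # msg["To"] = ...   # left out here, just as in the shell version
    msg.set_content("An HTML-capable mail client is required.")   # text/plain fallback
    msg.add_alternative(html_body, subtype="html")                # the report table
    msg.add_attachment(csv_data.encode("ascii"),
                       maintype="application", subtype="vnd.ms-excel",
                       filename="syslog.csv")
    return msg

msg = build_report("<html><body><h3>report</h3></body></html>", "a,b\n1,2\n")
print(msg.get_content_type())   # multipart/mixed
```

Delivery can still go through /usr/sbin/sendmail -t by piping msg.as_bytes() into it once a To: header is set.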

Outputting to CSV from Postgresql

If I am going to do it in shell, I do it like this:

psql -U user db -A -F ',' -c "select 1+1 as A,2+2 as B;"
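And if the consumer is a script rather than a human, the -A -F ',' output is close enough to CSV that Python's csv module can parse it directly. A small sketch with a hypothetical captured result (the helper name is mine):

```python
import csv
import io

def parse_psql_csv(raw: str):
    """Parse `psql -A -F ','` output, dropping the trailing "(N rows)" footer."""
    rows = [r for r in csv.reader(io.StringIO(raw))
            if r and not r[0].startswith("(")]
    return rows[0], rows[1:]

# hypothetical captured output of: psql -U user db -A -F ',' -c "select 1+1 as A,2+2 as B;"
header, data = parse_psql_csv("a,b\n2,4\n(1 row)\n")
print(header, data)   # ['a', 'b'] [['2', '4']]
```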

Monday, January 14, 2013

How to make rsyslog write syslogs to a database (PostgreSQL):


I am using rsyslog as it is more common in RHEL environments these days, but I am sure you can find the equivalent packages in other OSes and distributions:

Install postgreSQL module for rsyslog:
 # yum install rsyslog-pgsql


In /etc/rsyslog.conf add following lines:

$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514


# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf


Make the /etc/rsyslog.d/psql.conf file with the following contents:

$ModLoad ompgsql.so

$WorkDirectory /var/tmp/rsyslog/work

# This would queue _ALL_ rsyslog messages, i.e. slow them down to rate of DB ingest.
# Don't do that...
# $MainMsgQueueFileName mainq  # set file name, also enables disk mode

# We only want to queue for database writes.
$ActionQueueType LinkedList # use asynchronous processing
$ActionQueueFileName dbq    # set file name, also enables disk mode
$ActionResumeRetryCount -1   # infinite retries on insert failure

*.*             :ompgsql:127.0.0.1,syslog,rsyslog,secret;


The format is:

*.*           :ompgsql:<DB HOST>,<DB NAME>,<DB USERNAME>,<PASSWORD>;

Now, to config postgreSQL, do the following changes in postgresql config file:
In /var/lib/pgsql/data/postgresql.conf :

listen_addresses = 'localhost'
port = 5432
max_connections = 100

And following changes to /var/lib/pgsql/data/pg_hba.conf to grant the local accesses:

# "local" is for Unix domain socket connections only
#local   all         all                               ident sameuser
local    all         all                               trust
# IPv4 local connections:
#host    all         all         127.0.0.1/32          ident sameuser
host    all         all         127.0.0.1/32          trust
# IPv6 local connections:
#host    all         all         ::1/128               ident sameuser
host    all         all         ::1/128               trust


Now restart the postgreSQL server:

# service postgresql restart

Create the database:

#su - postgres
-bash-4.1$ createuser rsyslog;
Shall the new role be a superuser? (y/n) y
-bash-4.1$ createdb -T template0 -E SQL_ASCII syslog;


-bash-4.1$ psql -l
                                  List of databases
   Name    |  Owner   | Encoding  |  Collation  |    Ctype    |   Access privil
eges
-----------+----------+-----------+-------------+-------------+----------------
-------
 postgres  | postgres | UTF8      | en_US.UTF-8 | en_US.UTF-8 |
 syslog    | postgres | SQL_ASCII | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8      | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
(3 rows)

-bash-4.1$


 
Now we should create the database schema. The package ships a file at /usr/share/doc/rsyslog-pgsql-5.8.10/createDB.sql with the required schema, but I had to comment out its first line to make it work:

 -- CREATE DATABASE Syslog WITH ENCODING 'SQL_ASCII';
\c syslog;
CREATE TABLE SystemEvents
(
        ID serial not null primary key,
        CustomerID bigint,
        ReceivedAt timestamp without time zone NULL,
        DeviceReportedTime timestamp without time zone NULL,
        Facility smallint NULL,
        Priority smallint NULL,
        FromHost varchar(60) NULL,
        Message text,
        NTSeverity int NULL,
        Importance int NULL,
        EventSource varchar(60),
        EventUser varchar(60) NULL,
        EventCategory int NULL,
        EventID int NULL,
        EventBinaryData text NULL,
        MaxAvailable int NULL,
        CurrUsage int NULL,
        MinUsage int NULL,
        MaxUsage int NULL,
        InfoUnitID int NULL ,
        SysLogTag varchar(60),
        EventLogType varchar(60),
        GenericFileName VarChar(60),
        SystemID int NULL
);

CREATE TABLE SystemEventsProperties
(
        ID serial not null primary key,
        SystemEventID int NULL ,
        ParamName varchar(255) NULL ,
        ParamValue text NULL
);


Use the following command to apply the table schema, assuming you are already in the right directory:


#psql -U rsyslog syslog -f ./createDB.sql
 
Reload the rsyslog service and check if there is any error in /var/log/messages:

# service rsyslog reload


Did I mention how to set a password for the rsyslog user in PostgreSQL?

# su - postgres
-bash-4.1$ psql
psql (8.4.13)
Type "help" for help.

postgres=# Alter user rsyslog with password 'secret';
ALTER ROLE
postgres=# \q
-bash-4.1$



This should work. You can now see the logs in the systemevents table:

# psql -W -Ursyslog syslog
Password for user rsyslog:
psql (8.4.13)
Type "help" for help.

syslog=# select count(*) from systemevents;
 count
-------
  6596
(1 row)

syslog=#