When you start hacking the Raspberry Pi’s GPIO, the first thing to keep in mind is that the pins use +3.3V CMOS logic levels despite the 5V supply. Secondly, read up on current-limiting resistors (the eLinux wiki has a section on this). What to avoid, then? Avoid physically shorting a GPIO pin to ground while it is programmed to drive the opposite level, output-high.
A GPIO pin’s logic state (meaning its voltage) can be both programmed and driven by a physical connection. I chose wiringPi for practical reasons: its Python wrapper is available, and its syntax looked simple at first glance. WiringPi can use its own pin numbering in code instead of the original GPIO numbering (there are board revisions to watch for in some use cases, though not mine). Every pin can be initialized as input or output.
A GPIO pin’s voltage swings to logic high briefly after reboot (the meter’s needle rocks for a second or so). It then stays at logic low until it is programmed or physically pulled up.
Two pins, wiringPi pins 8 and 9, remain at logic high after reboot. They are SDA0 and SCL0, meant for I²C, but they can be used to read a button as well. (eLinux wiki: “there are 1.8k pull-up resistors on the board for these pins”)
I did accidentally short a logic-high pin to ground, and it triggered a reboot. (I don’t know how many such shorts, or how long a short, would brick the Raspberry Pi.)
Before trying the push-button switch I didn’t have proper circuitry and was working with loose wires as probes, which was accident-prone. I found advice to insulate the +5V pin so I could worry less.
Insulate the +5V pin of Raspberry Pi
A 10k pull-up resistor and a button are enough for the following test. Open an interactive Python shell and run it line by line until the button push is read as logic low (pin 7 in wiringPi numbering in this example):
Python 2.7.3rc2 (default, May 6 2012, 20:02:25)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import wiringpi
>>> import time
>>> INPUT = 0
>>> OUTPUT = 1
>>> HIGH = 1
>>> LOW = 0
>>> SETUP = wiringpi.wiringPiSetup()
>>> print SETUP
0
>>> wiringpi.pinMode(7, INPUT)
>>> RESULT = wiringpi.digitalRead(7)
>>> print RESULT
1
>>> RESULT = wiringpi.digitalRead(7)
>>> print RESULT
1
>>> RESULT = wiringpi.digitalRead(7)
>>> print RESULT
0
The multiple command lines involved in generating certificates with openssl can be quite confusing and easily mixed up as to which does what. Most of them are repetitions of almost the same syntax (which is where the confusion comes from).
I need to set up an HTTPS site secured not just with a server certificate, but also requiring a client-side certificate. The site will only show its content to authorized users holding the correct server-client certificate pair. The certificates will also expire after a certain date. They are self-signed, as they’re for closed-environment usage.
This post covers two general processes: generating and signing.
Generating an SSL certificate using openssl is a straightforward process of:
generate its key
create certificate request with that key
generate certificate from request and key
Hence, for any type of certificate I end up with a general <some-cert-key>.key, <some-cert-request>.csr, and <some-cert>.crt. By “type” I mean the CA (Certificate Authority), one or more server certificates, and one or more client certificates.
Generating Pairs of Key-Certificate with openSSL: CA, server, & client
As for signing, all of the certificates are signed using the CA. Which files end up used on the server will be the subject of the next post.
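The three-step flow above, applied to the CA and one server/client pair, can be sketched as follows. This is a minimal sketch, not the post's exact commands: the file names, subjects, key sizes, and validity periods are all illustrative assumptions.

```shell
# CA: key, then a self-signed root certificate (the root needs no CSR)
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=Example-CA" -days 365 -out ca.crt

# Server: key -> certificate request -> certificate signed by the CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=server.example" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -out server.crt

# Client: the same flow, signed by the same CA
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=client-01" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -out client.crt
```

The `-days` value on the signed certificates is what enforces the expiry date mentioned above.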
Ubuntu has been my desktop for some years. I’ve become so attached that I no longer know how to work without it. So of course, the pain of moving to a Mac was one foreseeable future. Unlike my usual repository, where everything is on the table and free, this one is a little tougher to handle. To cut to the chase: a tap interface, for instance, was nowhere to be found. My VirtualBox setup depends a lot on these virtual networks, as I’m used to trying things out.
..you’ll always have a love-hate relationship with the tools you work with.. blindly turning yourself into a devoted fanatic is another thing..
Enough ranting (let’s spare that). First we need /dev/tap0, /dev/tap1, etc., available, using the TunTap kernel extension.
I modified the small script from this post to run in the background instead of keeping a shell open all the time:
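A sketch of the backgrounding pattern. The script name and contents here are placeholders for the one in the linked post, which has to keep /dev/tap0 open for the interface to stay up:

```shell
# Stand-in for the real tap-holding script from the linked post;
# the real one opens /dev/tap0 and configures the interface
cat > /tmp/tap_hold.sh <<'EOF'
#!/bin/sh
# placeholder loop: just keeps a process alive
while true; do sleep 60; done
EOF
chmod +x /tmp/tap_hold.sh

# run it detached from the shell, and remember the PID to kill it later
nohup /tmp/tap_hold.sh >/dev/null 2>&1 &
echo $! > /tmp/tap_hold.pid
```

Saving the PID lets you tear the interface down later with `kill $(cat /tmp/tap_hold.pid)` instead of hunting through `ps`.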
This has been a delayed post, since I only just figured out how to solve a configure error. It wasn’t really necessary to cross-compile last year, because compiling ffmpeg directly on the ARM board finished overnight (despite the slow CPU). Anyway, in this post I’m still using the board to easily retrieve package dependencies from the Ubuntu ARM repository. The toolchain prefix used is arm-linux-gnueabi-. The following steps apply in general:
download the source (ffmpeg in this example)
retrieve development libraries/build dependencies
place header files and dependency libraries into the toolchain path
configure with toolchain
make and create .deb installer package with checkinstall
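The desktop-side steps can be sketched as the script below. This is a hedged sketch, only syntax-checked here: the configure flags shown are the standard ffmpeg cross-compile options, but the install prefix and package name are assumptions, and the script must be run inside the unpacked ffmpeg source tree.

```shell
# Save the build sequence as a script (to be run in the ffmpeg source dir)
cat > build-ffmpeg-arm.sh <<'EOF'
#!/bin/sh
./configure --enable-cross-compile --cross-prefix=arm-linux-gnueabi- \
            --arch=arm --target-os=linux --prefix=/usr/arm-linux-gnueabi
make
sudo checkinstall --pkgname=ffmpeg-arm --default make install
EOF

sh -n build-ffmpeg-arm.sh   # syntax check only; don't run it here
```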
cross-compile: Illustration of what's done on ARM-board and what's done on desktop
Prior to anything, install Linaro toolchain by adding their repo first:
I really do mean simple: this is a one-command-line UDP stream from the BeagleBoard (running Ubuntu ARM) to my desktop (IP address 192.168.1.19 in this example); the source video is an FLV file on the ARM board:
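A reconstructed sketch of that one-liner — the original command wasn’t preserved in this copy, so the file name, container format (MPEG-TS), and port are assumptions. It is saved as a script and only syntax-checked here, since ffmpeg and the video live on the board:

```shell
cat > stream.sh <<'EOF'
#!/bin/sh
# -re reads the input at its native frame rate (real-time pacing),
# and the result is pushed as MPEG-TS over UDP to the desktop
ffmpeg -re -i source.flv -f mpegts udp://192.168.1.19:1234
EOF

sh -n stream.sh   # syntax check only; run it on the board
```

On the desktop, the stream can then be opened from the same UDP URL in a player such as VLC or ffplay.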
As mentioned in the previous post’s introduction, this post is a proof of concept of how Zabbix Proxy works under unreliable communication. The idea is to have these scenarios tested:
independent SNMP data polling by individual NMS proxy (embedded system)
intermittent connection between main/master NMS server with its proxy
Zabbix Proxy - distributed NMS over embedded Linux
In the technical details, I followed pretty much the same steps shown in the installation wiki (with the exception that I didn’t put the process into a startup init.d script, as I would start it manually as needed).
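For orientation, these are the handful of zabbix_proxy.conf values that matter for this setup — a hedged sketch, with the server address and database path being illustrative:

```
# zabbix_proxy.conf -- illustrative values
ProxyMode=1                              # 1 = passive: the server initiates contact
Server=192.168.1.10                      # address of the main Zabbix server
Hostname=beaglexm1                       # must match the proxy name in the web-GUI
DBName=/var/lib/zabbix/zabbix_proxy.db   # the SQLite database file
```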
The first test is to make our Ubuntu ARM board accept configuration done in the Zabbix Server web-GUI, retrieve SNMP data, and send the collected data back to the server. Passive mode is chosen, so the server is the one initiating contact with the board. After enabling a certain SNMP host, host-aaa in this case, as monitored-by-proxy, the Zabbix Server log shows:
15677:20120601:022022.960 Sending configuration data to proxy 'beaglexm1'. Datalen 9205
Our board, ‘beaglexm1’, will start polling SNMP data from host-aaa and storing it in the designated SQLite database.
975:20120531:212024.098 proxy #8 started [poller #5]
984:20120531:212024.128 proxy #17 started [discoverer #1]
968:20120531:212025.423 Received configuration data from server. Datalen 9205
971:20120531:212030.388 Enabling SNMP host [host-aaa]
From time to time, as configured, the board also runs its housekeeper to keep the database file small.
1081:20120531:224513.922 Executing housekeeper
1081:20120531:224514.505 Deleted 4885 records from history[0.453765 seconds]
Under a normal connection, the main Zabbix Server starts showing the collected data from host-aaa the same way as when the SNMP data is polled by the server directly. To emulate disruption of the connectivity, I chose a simple iptables packet drop. This time the server no longer sees new data.
After the drop rule is removed, the server starts seeing new data again, as well as the data from the period of intermittent connection. It takes some time before all of it is sent to the server.
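The drop/release pair can be sketched as below (run as root on the server; the proxy’s address is illustrative, not from the original post):

```shell
# emulate the outage: silently drop everything going to the proxy
iptables -A OUTPUT -d 192.168.1.20 -j DROP

# release the block again by deleting the very same rule
iptables -D OUTPUT -d 192.168.1.20 -j DROP
```

DROP (rather than REJECT) is the closer match for a dead radio link, since the proxy gets no ICMP feedback and simply times out.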
In general, that proves it. I collected 44 OID values from host-aaa every 30 seconds, and the BeagleBoard’s utilization stayed very low, though this wasn’t a stress test of its performance.
I needed to tackle the practical limitation of SNMP monitoring under unreliable communication, so serious consideration went to Zabbix Proxy. It was an option said to be ready for embedded hardware. I already had a BeagleBoard xM Rev C running Ubuntu 11.10 Oneiric and needed to prove that it would port functionally to this Linux ARM board. There existed zabbix-proxy-mysql in the Oneiric repository, but SQLite seemed the better-scaling option for embedded deployment.
Zabbix Proxy Distributed NMS
An example set out for Zabbix Proxy on Debian appears to be pretty straightforward to follow. With the ARM board already installed with build-essential, snmp, and snmpd, the following needs to be added:
When the LinkedIn privacy breach was about to be revealed during the Yuval Ne’eman workshop at Tel Aviv University, the timeline trends suddenly filled with friends telling people to change their LinkedIn passwords. They were separate issues, and of course the privacy breach then faded from people’s attention. To tell you the truth, as a secret admirer of conspiracy theories (whether I admit it or not), this coincidence was just too perfectly timed. But I’m also curious whether my password was among the stolen 6,458,020 (yes: 6.4 million) hashes uploaded by the hacker, SHA-1 hashed and without user names.
Snapshot of uploaded contact data from calendar (skycure.com)
There is no other way but to check my password against the combo_not.txt found via FilesTube. People have already posted how to check this; the easiest way is a single line in the shell:
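A reconstruction of that check — the original one-liner wasn’t preserved in this copy. Some entries in the dump have their first five hex digits zeroed out, so both forms are searched; a one-line stand-in file replaces the real combo_not.txt here:

```shell
# SHA-1 of the candidate password, and its "first five digits zeroed" variant
H=$(printf '%s' 'bandito' | sha1sum | awk '{print $1}')
Z="00000$(printf '%s' "$H" | cut -c6-)"

printf '%s\n' "$Z" > combo_not.txt     # stand-in for the real dump

grep -n -e "$H" -e "$Z" combo_not.txt  # any output means a match
```

Against the real dump, only the last `grep` line is needed, pointed at the downloaded file.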
Bingo! A match there for the password “bandito” (I chose it randomly, expecting some person out there is using it). Another way (for comparison, as I’m no security expert) is this short Python script (slightly altered from a Phobos Technology blog post):
"""Save this file as linkedin_hash.py and ensure it's
in the same folder as combo_not.txt
Usage: python linkedin_hash.py hunter2
"""
import sys
from hashlib import sha1

password = sys.argv[1]
hsh = sha1(password).hexdigest()
print "SHA-1: %s" % hsh

x = 0
for line in open('combo_not.txt', 'r'):
    stripped = line.strip()
    # the dump also stores some hashes with the first five hex digits zeroed
    if hsh == stripped or "00000" + hsh[5:] == stripped:
        x += 1
        print "Matching line: %s" % line
print "Number of matches: %d" % x
My verdict: my password is on the list, and I’m considering a leap of faith away from being a devoted conspiracy believer.
PS: I don’t find that “password” or “123456” are common passwords used by many people.
PPS: A side story: Indonesians were found to be using the weakest passwords (as research on Yahoo IDs revealed).
Acknowledgement is one of the things provided by the Zabbix API in its event section. We can use the get() and acknowledge() methods to acknowledge an event automatically. Digging through the attributes of those methods, the official docs don’t provide a complete example to follow. With some luck added, my search got me the following JSON-RPC calls that work in Zabbix 1.8.7 using Gescheit’s API implementation written in Python.
When the goal is to acknowledge a specific event, the JSON call is, however, limited to some basic responses. That means you can’t issue a single RPC request for, e.g., the following combination: a list of acknowledged events with “problem” status and trigger ID 500 (500 is merely an example).
However, some combinations in a single request do exist; e.g. I’ve tried a list of acknowledged events with “problem” status, as the following GET request:
Yes, that makes three requests. We can then filter out a single event to be acknowledged by intersecting the three results: event_trigger_list, problem_list, and ack_list, as illustrated below:
Intersection of problem list, event list with specific trigger ID, and acknowledged event list
After we work through those intersections, we can take the events with problem status and choose only the latest one to acknowledge. Check the code on my GitHub.
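The filtering can be sketched with sorted ID lists and comm(1). Every event ID below is made up, standing in for the eventids returned by the three requests:

```shell
# Stand-ins for the three event.get results
printf '101\n105\n110\n112\n' > trigger_events.txt   # events for trigger 500
printf '105\n110\n112\n120\n' > problem_events.txt   # events in "problem" status
printf '101\n105\n'           > acked_events.txt     # already-acknowledged events

# events for the trigger that are currently in problem state
comm -12 trigger_events.txt problem_events.txt > problem_for_trigger.txt

# drop the ones already acknowledged, then keep only the latest
comm -23 problem_for_trigger.txt acked_events.txt | tail -n 1   # prints 112
```

The surviving eventid is the one to pass to event.acknowledge().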