
Archive for July, 2009

Generate PHP accessors

July 29th, 2009

I’m working on a PHP project right now, and I finally got tired of writing the same code over and over again (and got a bit nostalgic for Perl, my first love), so here is gen_php_accessors.

#!/usr/bin/perl
# Martijn van der Kwast 2009
# Use and distribute freely
use Getopt::Std;

$indent = '    ';
$indent2 = $indent x 2;

if ( ! getopts('csGatrh') || $opt_h ) {
	print STDERR <<HELP;
gen_php_accessors [-c] [-s] [-G] [-a] [-t] [-r] [-h] < variable_declarations.php

Generate PHP accessors from member variables.

 -c Also generate constructor
 -s Also generate setters
 -G Don't generate getters
 -a Generate getters, setters, static getters and constructor
 -t Generate static getters
 -r Generate static setters
 -h Display this help
HELP
	exit -1;
}

$opt_s = 1 if $opt_a;
$opt_c = 1 if $opt_a;
$opt_t = 1 if $opt_a;

$visibility = '(?:(?:var|public|private|protected)\s+)';
$identifier = '[a-z_]\w*';

while(<>) {
	if ( /^\s*(?:static\s+)${visibility}?\$($identifier)\s*[;=]/i ||
	     /^\s*${visibility}?(?:static\s+)\$($identifier)\s*[;=]/i ) {
		push @static, $1;
	}
	elsif ( /^\s*${visibility}?\$($identifier)\s*[;=]/i ) {
		push @fields, $1;
	}
}

if ( $opt_c ) {
	print "\n";
	if ( @fields ) {
		$args = join(', ', map { "\$$_" } @fields );
		print "${indent}public function __construct( $args ) {\n";
		print "${indent2}\$this->$_ = \$$_;\n"
			for @fields;
		print "${indent}}\n";
	}
	else {
		print "${indent}public function __construct() {  }\n";
	}
}

if ( @fields ) {
	if ( ! $opt_G ) {
		print "\n";
		print "${indent}public function $_() { return \$this->$_; }\n"
			for @fields;
	}
	if ( $opt_s ) {
		print "\n";
		print "${indent}public function set_$_( \$$_ ) { \$this->$_ = \$$_; }\n"
			for @fields;
	}
}

if ( @static ) {
	if ( $opt_t ) {
		print "\n";
		print "${indent}public static function $_() { return self::\$$_; }\n"
			for @static;
	}
	if ( $opt_r ) {
		print "\n";
		print "${indent}public static function set_$_( \$$_ ) { self::\$$_ = \$$_; }\n"
			for @static;
	}
}

Put it in a directory in your $PATH, chmod +x it, and use it as a filter on the member variables you just selected (in vim: '<,'>!gen_php_accessors -a ).

You may want to adapt it a little to match your PHP coding style (indentation, function names).

(modified to include constructors and static members)

Dynamic parser engine for Qt

July 20th, 2009

I’m still working on a long term project to write a music editor, something I will describe more in detail at another time. Suffice it to say that the vi editor was an important model for it, and that I want to be able to perform every operation using commands I can type on a command-line, and using keyboard shortcuts in a command-mode that generates sentences in the command language. I also want the language to be extensible by plugins and other commands. Using a command language as a base for the program’s operation also brings me the benefits of easy undo implementation, macros and save files with full history.
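The payoff of routing everything through one command stream can be shown in a few lines. This is a toy Python sketch with invented names, not the editor's actual code: once every edit is a line of text in a history log, a save file with full history is just that log, a macro is a slice of it, and undo falls out of replaying all but the last command.

```python
# Toy sketch (invented names): a command log makes undo, macros and
# history-preserving save files nearly free.
class Editor:
    def __init__(self):
        self.state = {}
        self.history = []          # full textual history -> save file

    def execute(self, command):
        """Apply one textual command and record it."""
        verb, _, rest = command.partition(" ")
        if verb == "set":
            key, value = rest.split()
            self.history.append(command)
            self.state[key] = value

    def replay(self, upto):
        """Undo by rebuilding state from a prefix of the history."""
        fresh = Editor()
        for cmd in self.history[:upto]:
            fresh.execute(cmd)
        return fresh.state

e = Editor()
e.execute("set tempo 120")
e.execute("set tempo 140")
print(e.replay(len(e.history) - 1))  # {'tempo': '120'}
```

A real editor would snapshot periodically instead of replaying from scratch, but the principle is the same.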

I could probably have used QtScript instead of defining my own command language, however I wanted the syntax to be close to vim’s, I wanted the syntax to be easy to use as a command language, not as a scripting language, and I needed a parser anyway to grok command-mode commands. I didn’t like any of the existing parser generators and parser engines I could find, so I spent some time writing a dynamic parser engine that is nearing completion now.

I tried hard to break free from the Yacc/Lex tradition, and used some concepts borrowed from NLP. The idea was not to create the most efficient parser (computers are fast anyway), but one that is easy to use and reuse (and probably abuse). It allows for optional words and ambiguous sentences. It is not token based, i.e. there is no separate tokenizer, and it can be specified per terminal whether it requires trailing whitespace or not. Commands are parsed as soon as the user starts to type, so feedback can be given on whether the syntax is valid, and completions can be shown. This also makes for a powerful context-help mechanism.
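The parse-as-you-type idea can be sketched in a few lines. This is not the engine itself (which is C++/Qt and unreleased) but a toy Python illustration with invented names: match the words typed so far against sentence patterns and report which terminals could come next, which gives both validity feedback and completions.

```python
# Toy sketch of parse-as-you-type (invented names, not the engine's API).
# Each sentence is a flat list of terminals; "<...>" marks a placeholder
# that accepts any single word.
SENTENCES = {
    "goto_cmd": ["goto", "<position>"],
    "move_cmd": ["move", "<object>", "<position>"],
    "quit_cmd": ["quit"],
}

def completions(typed):
    """Terminals that could extend what the user has typed so far."""
    words = typed.split()
    partial = "" if (not typed or typed.endswith(" ")) else words.pop()
    out = set()
    for pattern in SENTENCES.values():
        if len(words) >= len(pattern):
            continue                     # sentence already complete
        # every fully typed word must match its terminal (or a placeholder)
        if all(t.startswith("<") or t == w for w, t in zip(words, pattern)):
            nxt = pattern[len(words)]
            if nxt.startswith("<") or nxt.startswith(partial):
                out.add(nxt)
    return sorted(out)

print(completions("g"))      # ['goto']
print(completions("move "))  # ['<object>']
```

The real engine recurses through named rules and handles ambiguity, but the incremental matching principle is the same.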

The following piece of code shows how to create a basic grammar. A “sentence” is a toplevel command. The “verb”, “object” and “complement” functions specify named groups. The “cut” calls are borrowed from Prolog and are just optimizations: when one is encountered, they tell the parser to commit to the current parse tree up to the cut (no backtracking).

Grammar* g = new Grammar();
g->addSentence( "debug_cmd", 
    sentence( verb("debug"), object(r(GRAMMAR_TOP))) );
g->addSentence( "goto_cmd", 
    sentence( verb("goto"), cut(), complement("position", r("position_expr"))) );
g->addSentence( "move_cmd", 
    sentence( verb("move"), cut(), object(optional(r("object"))),
    complement("position", r("position_expr"))));
g->addSentence( "mark_cmd", 
    sentence( verb("mark"), cut(), object(r("mark"))) );
g->addSentence( "dm_cmd", 
    sentence( verb("dm"), cut(), object(oneormore(r("mark")))) );
g->addSentence( "quit_cmd", 
    sentence( verb("quit"), cut() ) );
g->addRule( "object", choice( w("cursor"), r("mark") ) );
g->addRule( "position_expr", 
    choice( r("time_expr"), r("metric_expr"), r("mark") ) );
g->addRule( "time_expr", 
    seq( complement("value", number()), complement("unit",r("time_unit")) ) );
g->addRule( "time_unit", wordchoice( "ms", "s", "m", "h" ) );
g->addRule( "metric_expr", seq( number(), r("metric_unit") ) );
g->addRule( "metric_unit", 
    wordchoice( "measure", "measures", "bar", "bars" ) );
g->addRule( "mark", choice( r("global_mark"), r("local_mark") ) );
g->addRule( "global_mark", rx( "\\b[A-Z]\\b" ) );
g->addRule( "local_mark", rx( "\\b[a-z]\\b" ) );

The next piece of code illustrates how to bind commands to the (transformed) parse trees. The ArgsSpec provided to registerCommand defines the mandatory named groups in a matched sentence. The command with the most specific matching ArgsSpec will be executed. This allows commands to be overloaded.

Executer* x = new Executer();
x->setContext( new Context() );
x->registerCommand( "debug_cmd", ArgsSpec(), &Context::debugCommand );
ArgsSpec goto_args;
goto_args.insert( "position", "position_expr" );
x->registerCommand( "goto_cmd", goto_args, &Context::gotoCommand );
ArgsSpec move_args;
move_args.insert( "object", "" );
move_args.insert( "position", "position_expr" );
x->registerCommand( "move_cmd", move_args, &Context::moveCommand );
ArgsSpec mark_args;
mark_args.insert( "object", "mark" );
x->registerCommand( "mark_cmd", mark_args, &Context::markCommand );
ArgsSpec dm_args;
dm_args.insert( "object", "mark", ArgsSpec::ArrayType );
x->registerCommand( "dm_cmd", dm_args, &Context::deleteMarksCommand );
x->registerCommand( "quit_cmd", ArgsSpec(), &Context::quitCommand );

And finally, a function that demonstrates how to implement a command function that can be called by the Executer.

void Context::gotoCommand( Args args ) {
    ConstituentNode* position = args[ "position" ];
    if ( position->isa( "time_expr" ) ) {
        log( "goto time " + position->value( "value" )
              + " " + position->value( "unit" ) );
    }
    else if ( position->isa( "metric_expr" ) ) {
        log( "goto metric " + position->value( 0 ) + " "
              + position->value( 1 ));
    }
    else if ( position->isa( "local_mark" ) ) {
        log( "goto local mark " + position->value() );
    }
    else if ( position->isa( "global_mark" ) ) {
        log( "goto global mark " + position->value() );
    }
}

Some random notes to finish this quick tour:

– This is unfinished work. The API will change. (ConstituentNode is a bad name, registerCommand
may take an extra instance argument, etc)
– A mechanism will be added to do automatic type conversion. For instance, convert a time_expr
(value + units) to a normalized time in seconds that is easier to use in a function, and easier to extend
(the move command doesn’t need to be modified if a time unit is added).
– Custom completers (for instance for filenames) are not shown here. (work in progress)
– The library includes a command-line widget with autocomplete functionality.
– A Context base class is provided with functions to access history and ease debugging.
– I don’t share the source at the moment, but it will be open source once it is in a more finished state.

Natural Language Processing Techniques in Prolog

July 10th, 2009

Currently reading: Natural Language Processing Techniques in Prolog, by Patrick Blackburn and Kristina Striegnitz. A course covering (many) parsing algorithms, with implementations in Prolog.

On Lisp

July 9th, 2009

Currently reading: On Lisp by Paul Graham, also available online as HTML. Shame on me for never having written any Lisp (except for some basic AutoCAD scripts at university) despite my interest in AI matters.

This book explains in great depth not how to write Lisp code (in fact, it assumes basic knowledge of the language), but rather the Lisp approach to programming, and what distinguishes it from other languages. The tone is often amusingly defensive, and the author hardly ever misses an opportunity to show the superiority of his pet language over others. Despite the subjectivity, it’s an enlightening read. Approaches to programming that are usually taken for granted are reversed: bottom-up instead of top-down, iterative construction instead of planning, functional instead of OO (although the latter is becoming popular again). Some advanced and hard-to-implement-in-other-languages concepts like continuations and non-deterministic algorithms (illustrated with a parser and a Prolog implementation) are treated as well.

The code examples are mainly in Common Lisp. After reading a large part of this book, I feel I still dislike the parenthesis jungle and the overall look of Lisp code (and emacs!), but the concepts are very refreshing (despite Lisp’s venerable age!) and deserve to be kept in mind. Hmmm. Javascript.

Gwaaaaaaah, this is brainfucked!

July 3rd, 2009

It’s hot. Very hot. So hot that the heat messed up the workings of my brain and reduced my thinking power to that of a below-average-sized stranded dead blue whale, or maybe a rabbit, and caused me to—instead of finishing the parser library that I was hoping to finish days ago—let my drifting thoughts guide my mouse arm into the darker places of the web, or at least a fairly darkish yellowish one, namely Blog Jaune.

What I found there could be an eternal source of horror to some, yet a source of extreme hilarity to others. To me, it was a source of shameful amusement: GWAAAAAAAH, a brainfuck-like language. Shame because I never took the time to play with brainfuck, which set me back from the role of überhacker to that of newbie brainfuck virgin. Amusement because the code sample looks like this:

GobidoaaahH !

This of course displays “Hello World!”. Amused as I was, I still felt my amusement level dwelling a little on the low side of the amusement scale, and suddenly it came to me: GWAAAAAAAH needed an interpreter written in Perl. And some time later I had one, nice and short as Perl programs can be.

Of course it needed some testing. To achieve this I looked for some brainfuck programs, translated them to GWAAAAAAAH, and tried to run them. Some bugs later, I had a nice working Perl interpreter.

die "syntax: $0 <file.gwa>\n"
    unless defined $ARGV[0] && -f $ARGV[0];
open F, '<', $ARGV[0] or die "cannot open $ARGV[0]: $!";
undef $/;                   # slurp the whole file, not just the first line
@CODE = split //, <F>;      # read-only code segment
$PC = 0;                    # program counter points to the byte after the last
                            # executed instruction in @CODE.
@M = ();                    # heap
$X = 0;                     # data register points to a byte in the heap.
@S = ();                    # looping stack

%inst = (
    G => sub { $X++ },
    W => sub { die '*___--- GAAAAAAA! ---___*' if --$X < 0 },
    A => sub { ++$M[$X]; $M[$X] %= 0xff },
    O => sub { --$M[$X]; $M[$X] %= 0xff },
    H => sub { print chr($M[$X]) },
    Z => sub { $M[$X]=getc; exit 0 unless defined $M[$X] },
    R => sub {
        if ( $M[$X] ) { push @S, $PC }
        else {
            # skip to the matching M, counting nested R/M pairs
            $sb = 1;
            while ( $PC < @CODE ) {
                $mb = $CODE[ $PC++ ];
                if    ( $mb eq 'R' ) { ++$sb }
                elsif ( $mb eq 'M' ) { last unless --$sb }
            }
        }
    },
    M => sub { $M[$X] ? $PC = $S[ $#S ] : pop @S },
);

while ( $PC < @CODE ) {
    $op = $CODE[$PC++];
    next unless exists $inst{$op};
    $inst{$op}->();
}
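The brainfuck-to-GWAAAAAAAH correspondence can be read straight off the %inst table above. As a sketch (Python purely for illustration, assuming that one-to-one mapping), the translation step reduces to a character map:

```python
# Character map read off the %inst table: '>'->G (right), '<'->W (left),
# '+'->A (increment), '-'->O (decrement), '.'->H (output), ','->Z (input),
# '['->R (loop open), ']'->M (loop close).
BF_TO_GWA = str.maketrans("><+-.,[]", "GWAOHZRM")

def bf_to_gwa(src):
    """Translate brainfuck to GWAAAAAAAH, dropping everything else."""
    return "".join(c for c in src if c in "><+-.,[]").translate(BF_TO_GWA)

print(bf_to_gwa(">+<[-]."))  # GAWROMH
```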

The 99 bottles of beer program was running very slowly though. Admittedly, it could count bottles of beer faster than any human could drink them, but it was not *instantaneous*, like a good computer would do it. Then it struck me: I needed a compiler! Interpreters are slow; GWAAAAAAAH needed to be compiled directly to assembler.

Some more hacking, and the interpreter was transformed into a compiler that generates nasm code. Some Makefile magic made it possible to transform a .bf file into .gwa, then into .asm, into .o, and finally into an executable.

I still had one last reason to be unsatisfied: the generated code was fairly verbose, matching the verbosity of the source. GWAAAAAAAH source files tend to contain many repeated characters, and it was easy to replace the generated increment/decrement instructions with add/sub instructions.
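That replacement is a classic run-length peephole. Here is a sketch of the idea in Python (the operand and register names are my own illustration, not the compiler's actual output):

```python
from itertools import groupby

# Peephole sketch: collapse runs of repeated instructions into single
# add/sub instructions. Register/operand choices are illustrative only.
def emit_collapsed(code):
    out = []
    for op, run in groupby(code):
        n = len(list(run))
        if op == 'A':                               # n cell increments
            out.append("add byte [rbx], %d" % n)
        elif op == 'O':                             # n cell decrements
            out.append("sub byte [rbx], %d" % n)
        elif op == 'G':                             # n moves right
            out.append("add rbx, %d" % n)
        elif op == 'W':                             # n moves left
            out.append("sub rbx, %d" % n)
        else:                                       # other ops: one by one
            out.extend("; op " + op for _ in range(n))
    return out

print(emit_collapsed("AAAAAG"))  # ['add byte [rbx], 5', 'add rbx, 1']
```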

Voilà, small linux executables from GWAAAAAAAH sources. Now I can go back to hacking on my universal programmer-friendly not-so-optimized parser design.


The Art of Unix Programming

July 3rd, 2009

Currently reading: the online version of The Art of Unix Programming by Eric S. Raymond, distributed under the terms of a Creative Commons license. It is not only interesting to programmers: there is a lot about Unix philosophy, hacker culture and history. The many anecdotes and the non-Unix OS bashing make it an entertaining read. The design principles mentioned here, and elevated to an art, are more or less the ones I adhere to. Long live the CLI.