
Tuesday, 15 February 2011

Dropbox config change from the CLI

I use Dropbox on my MacBook. It's neat. However, for some reason it doesn't autodetect my proxy, which is fully configured via a master proxy.pac file.

I already have a shell script that takes care of adjusting my SSH configuration and my custom proxy.pac depending on where I am, so I just extended it to change Dropbox's configuration and restart it. Here's the gist of it:
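(The gist embed didn't survive; below is a reconstruction of the idea, not the original script. Dropbox's config.db location, table and key names are undocumented assumptions on my part, and proxy.example.com is a made-up host.)

#!/bin/sh
# Sketch only: the SQLite schema and the proxy_server/proxy_port
# keys below are assumptions about Dropbox's undocumented config.
DB="$HOME/.dropbox/config.db"

case "$1" in
  work) HOST=proxy.example.com PORT=3128 ;;   # hypothetical proxy
  home) HOST='' PORT='' ;;                    # direct connection
  *)    echo "usage: $0 work|home" >&2; exit 1 ;;
esac

killall Dropbox 2>/dev/null                   # stop the client first
sqlite3 "$DB" "UPDATE config SET value='$HOST' WHERE key='proxy_server';"
sqlite3 "$DB" "UPDATE config SET value='$PORT' WHERE key='proxy_port';"
open -a Dropbox                               # and restart it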

Tuesday, 3 November 2009

Ubuntu upgrade woes

After I upgraded my laptop (a Dell Latitude D420) to Ubuntu Karmic Koala, it refused to boot. The normal boot process showed only the usual splash screen, which consists of nothing but the Ubuntu logo, then switched to a black screen where nothing was possible anymore -- no switching to a text console, nothing. Damn.


Here's how I fixed that (in the hope that it could be useful to someone). By booting without the splash screen, I found out that the root partition wasn't being mounted on boot, which was the cause of all the problems. For the record, to boot without the splash screen, you have to select the kernel you want in the grub menu, edit its command line, and remove the words "quiet splash" from it.


So I booted the rescue kernel (by selecting it in grub), which gave me a basic busybox shell in RAM. There, I manually mounted my root partition, /dev/sda1, on a newly created directory /grumpf. I moved /grumpf/etc/fstab out of the way and wrote a basic working fstab with the commands:

echo proc /proc proc defaults 0 0 > /grumpf/etc/fstab
echo /dev/sda1 / ext3 defaults,errors=remount-ro 0 1 >> /grumpf/etc/fstab
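For completeness, the mounting and backup steps that preceded those two echo commands were roughly as follows (the name of the backup file is my own choice):

mkdir /grumpf
mount /dev/sda1 /grumpf                       # mount the root partition by hand
mv /grumpf/etc/fstab /grumpf/etc/fstab.old    # keep the broken fstab around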

Then I rebooted. In the grub selection menu, I selected the regular kernel, but edited its command line: I replaced the part root=UUID=deafbeef... with root=/dev/sda1, telling grub to look up the device by its name instead of its UUID. At this point the computer booted successfully.
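Concretely, the kernel line in the grub menu went from something like this (the kernel version and the UUID are placeholders):

kernel /boot/vmlinuz-2.6.31-14-generic root=UUID=deafbeef-... ro quiet splash

to this:

kernel /boot/vmlinuz-2.6.31-14-generic root=/dev/sda1 ro quiet splash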


Once there, I could log in as root, edit /boot/grub/menu.lst to make my changes to the kernel command line permanent, and complete the fstab with appropriate lines for the swap, the cdrom and my /home partition. One last reboot, and voilà, the system was fully functional again.
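The completed fstab ended up looking something like this (the swap, cdrom and /home device names are illustrative -- adjust them to your partition layout):

proc       /proc         proc         defaults                    0 0
/dev/sda1  /             ext3         defaults,errors=remount-ro  0 1
/dev/sda5  none          swap         sw                          0 0
/dev/sda6  /home         ext3         defaults                    0 2
/dev/scd0  /media/cdrom0 udf,iso9660  user,noauto,exec,utf8       0 0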


This doesn't explain why device UUIDs aren't supported in the boot sequence on that hardware, though.

Tuesday, 14 October 2008

Git: on rebasing

(This is a follow-up to How remotes work).

We've seen how git manages to merge your local changes when you pull from a remote repository.

This approach has a small aesthetic downside: it creates a large number of merge commits, which makes the history more difficult to read. Wouldn't it be nice if git offered a way to simply re-apply your local changes on top of what you just pulled?

Rejoice, because that's what the git rebase command is for.

git rebase origin/master will take your local commits (those that are reachable from the master head, but not from origin/master), remove them from the commit tree, and re-apply them on top of origin/master, then move the master head to the tip of the new line of commits it just created. That way, the history stays linear.

Afterwards, you can simply push your new commits.
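In practice, the whole cycle looks like this:

git fetch origin            # update origin/master from the remote
git rebase origin/master    # replay your local commits on top of it
git push origin master      # publish the now-linear history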

Important warning: git rebase changes your commits. Because their place in the tree will be different, their SHA1 will be different as well; and the old ones will disappear. For that reason, you must not manipulate commits with rebase if you have already published them in a shared repository from which someone else might have fetched.

Rebasing is a powerful tool that will enable you to manipulate your branches, moving lines of commits from one location to another. The git-rebase man page has more examples.

Friday, 10 October 2008

Git: how remotes work

One of the difficult things for a git beginner to understand is how remote branches work.

Basically, as git is a distributed version control system, every developer has a full and independent repository. So, how can you pass changes around?

In the examples below, we'll consider a remote repository, which we'll call origin, and a local one (which we'll call local). The remote repository has one branch, called master, which has been cloned as origin/master in the local repository. Moreover, the local repository has one local branch, also called master (but it doesn't need to be), which is set up to track changes that happen on origin/master. Note that, as origin/master is a remote branch, it cannot be checked out -- only master can.
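Incidentally, this is exactly the setup that a plain clone gives you (the URL is illustrative):

git clone git://example.com/project.git local
cd local
git branch -a    # lists master, plus the remote branch origin/master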

The fetch operation (command git fetch) copies the latest commits from the master on origin to origin/master, and updates the HEAD of the origin/master branch:

git fetch origin
The circles on the schema represent commits, and the arrows are the parent->child relationships between commits. The labels indicate the various HEADs (or branches). Note that a branch is nothing more than a label following the HEAD of a series of commits.

At the end of this operation, origin/master matches the master branch on the origin, but master on the local repository is still behind. We need to use the git merge command to make master point at the same commit as origin/master:
git merge origin/master
This kind of merge is called a fast-forward because no actual merging of changes is involved. No new commit is created; we have just moved a HEAD forward in history. And that's fast.

Now, what happens if you have committed a change on your master on local? Nothing changes for the fetch: the two new commits from origin's master are still copied over, so origin/master matches it exactly:
git fetch origin
However, on local, master and origin/master have diverged. To reunite them, you'll need to use git merge, which will create another commit, and make master point to it:
git merge origin/master
The new commit (in orange) is a merge commit: it has two parents. (If conflicts happen, git will ask you to resolve them.)

Ah, but now your master has two more commits than the origin's master. And you surely want to share your changes with your fellow developers. That's where the git push command comes in:
git push origin

git push will start by copying your two commits to the origin, and ask it to update its master to point at the same location as yours. At the end of the operation, both commit trees should match exactly. Note that git push will refuse to push if your origin/master is not up to date.

The "origin" argument to git fetch and git push is optional; git will use all remotes if you don't specify one.

Finally, a note: as fetch and merge are often done together, a git command combines both: git pull. It's smarter than just running the two commands in sequence, because it looks up in your git config which remote branch your current local branch actually tracks, and merges from there -- so you don't even need to type the name of origin/master for the merge.
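In other words, on a tracking branch, this:

git pull

is roughly shorthand for:

git fetch origin
git merge origin/master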

Next time, we'll speak about rebasing.

Wednesday, 16 January 2008

Disk usage graphical presentation

I found out by accident about this GNOME tool, the "Disk Usage Analyzer". It has a surprisingly good UI, displaying subdirectories as concentric circular arcs. It makes visually obvious the spots where all the space is wasted. Or spent.

As an illustration, here's a screenshot of it displaying the disk usage taken by a fresh checkout of Perl 5's sources.


I wouldn't have thought that Encode was taking so much space. (This is, of course, due to all the files that describe the various encodings recognized by this module.)

Tuesday, 30 October 2007

Ubuntu, Dell laptop and hard disk power management

There has been some talk these days about laptop hard disk lifespan. See, for example, what Pascal says about it.

So, after some investigation, I saw that on my laptop (Dell Latitude D420) the BIOS doesn't handle an APM value of 255. By default the startup scripts execute hdparm -B 255 /dev/sda (or other devices) and that actually sets the APM value to 128 (as given by hdparm -I /dev/sda | grep Advanced). (I'm using Ubuntu 7.10 -- the script I'm talking about is /etc/acpi/power.sh.)

On the other hand, using -B 254 seems to disable APM. So here's how to set that by default, on every boot:

  • Add these lines to /etc/hdparm.conf:
    /dev/sda {
        apm = 254
    }

  • Make /etc/init.d/hdparm run at startup:
    ln -s /etc/init.d/hdparm /etc/rcS.d/S07hdparm
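To apply the new value immediately, without waiting for a reboot, run as root:

hdparm -B 254 /dev/sda
hdparm -I /dev/sda | grep -i advanced   # verify the APM level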

And now the load cycle count reported by SMART remains stable. Which means that hopefully my hard disk will live longer.

Addendum: if you don't use SMART, you should. Install the smartmontools package, enable SMART on your disks with smartctl -s on, and read the smartctl(8) manpage. Optionally, enable the smartd monitoring daemon (via /etc/default/smartmontools).
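For example (adjust the device name; the exact attribute name can vary between drives):

smartctl -s on /dev/sda                         # enable SMART on the disk
smartctl -A /dev/sda | grep Load_Cycle_Count    # watch the load cycle count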

Wednesday, 24 October 2007

urxvt + perl

I'm playing with urxvt (a.k.a. rxvt-unicode), a fully Unicode-aware terminal forked off the popular rxvt.

One of the good things with it is that it's fully scriptable in Perl. Here's my first attempt, a small plugin that adds an item in the popup menu (given by the standard Perl plugin selection-popup) to draw in bold font whatever matches the selection. It's not extremely useful, but that's a start.

our $selection_hilight_qr;

sub on_start {
    my ($self) = @_;
    # Add two entries to the selection popup menu: one to highlight
    # whatever matches the current selection, one to remove the highlight.
    $self->{term}{selection_popup_hook} ||= [];
    push @{ $self->{term}{selection_popup_hook} },
        sub { hilight => sub { $selection_hilight_qr = qr/\Q$_/ } },
        sub { 'remove hilight' => sub { undef $selection_hilight_qr } };
    ();
}

sub on_line_update {
    if (defined $selection_hilight_qr) {
        my ($self, $row) = @_;
        my $line = $self->line($row);
        my $text = $line->t;
        # For each match on the line, set the bold rendition bit
        # on the corresponding cells.
        while ($text =~ /$selection_hilight_qr/g) {
            my $rend = $line->r;
            for (@{$rend}[$-[0] .. $+[0] - 1]) {
                $_ |= urxvt::RS_Bold;
            }
            $line->r($rend);
        }
    }
    ();
}


As you can see, this code is pretty small (although maybe not very readable -- I dislike using @+ and @-, but my urxvt isn't compiled against Perl 5.10 :).

Well, I'm now looking for ideas. What would be cool for a terminal to do for you (that urxvt doesn't already provide)?

Monday, 6 August 2007

mysql command-line tricks

I use the MySQL shell a lot. A couple of tricks can make it more usable:

First, prompt customisation. I work with lots of different databases on different hosts. So, I customised my prompt via an environment variable:

export MYSQL_PS1="\\d@\\h> "

This makes mysql display the database name and the hostname instead of the fixed string "mysql":
$ mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 143955 to server version: 5.0.27-standard-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

test@counterfly>

Secondly, readline configuration. When editing long SQL statements, I prefer to use vi-like keybindings. That can be selected by adding the following lines to your ~/.inputrc file:
$if Mysql
set keymap vi
set editing-mode vi
$endif

You can then navigate history and edit lines (almost) like you would in vi. See your readline manual for more details.

Friday, 13 July 2007

$* replacement

So, the special variable $* has been removed from perl 5.10, after having been deprecated for years. For those who don't remember Perl4-style programming, $* was used to enable multi-line matching in regular expressions. In Perl 5, the preferred way to do it is to use the more flexible regexp flag /m.

I removed that poor old variable not because I like removing old things, but because it was standing in the way of a bug I wanted to fix (bug #22354). Anyway. Apparently there is still some old Perl code out there that fearlessly uses $* -- notably ghc's evil mangler, which broke with perl 5.9.5.

But Audrey Tang (who else?) found an elegant way to emulate $* = 1, in a characteristic perl-zen manner. Here's how: the current evil mangler contains this simple line of code, in a BEGIN block:

require overload; overload::constant( qr => sub { "(?m:$_[1])" } );

This installs a constant-overloading hook that rewrites every regexp literal compiled afterwards, wrapping it in (?m:...) -- effectively turning on multi-line matching globally, just like $* = 1 did.

Wednesday, 11 July 2007

Saving GNOME settings

Here's a small tip I got from GNOME expert Pascal Terjan, and that I'm copying here because I don't trust my memory:

Want to inspect the settings of some GNOME application? Use gconf-editor.

Want to copy the settings of some GNOME app (like, say, metacity) from one desktop to another? Use the command-line tool gconftool, specifically the options --dump and --load. (The paths you need to feed to gconftool can be retrieved via gconf-editor.)

Tuesday, 10 July 2007

Munin plugin for ping response time

My ADSL router (a freebox) shows a recent tendency to desynchronize itself. That's annoying, even if it only lasts a few minutes from time to time. Maybe a cable needs to be replaced (advice?). So, I quickly hacked this plugin for Munin, an excellent system monitoring package:


#!/usr/bin/perl -wT

use strict;

our @HOSTS = qw(free.fr);

if (@ARGV && $ARGV[0] eq 'config') {
    # Munin calls the plugin with "config" to get the graph definition
    print <<CONFIG;
graph_title Ping response time
graph_vlabel time (ms)
graph_args --base 1000 -l 0
graph_scale no
graph_category network
CONFIG
    for my $host (@HOSTS) {
        (my $name = $host) =~ tr/./_/;
        print "$name.label $host\n";
    }
}
else {
    # Normal run: print one value per host
    $ENV{PATH} = '/bin';    # sanitize PATH for taint mode
    for my $host (@HOSTS) {
        (my $name = $host) =~ tr/./_/;
        my @ping  = qx(/bin/ping -nc1 $host 2>/dev/null);
        my $times = @ping ? $ping[-1] : '';
        my $val   = '';
        if ($times =~ m{^rtt min/avg/max/mdev = ([\d.]+)}) {
            $val = $1;      # the "min" round-trip time, in ms
        }
        print "$name.value $val\n";
    }
}

Feel free to adapt/improve. This code is of course released under whatever license Munin is released under (I didn't bother to check.)

Monday, 9 July 2007

Perl 5.9.5

I released Perl 5.9.5 on Saturday. Get it while it's hot. Read the official announcement as well.

Wednesday, 4 July 2007

Sub::Current

I've released another new heavily magic Perl module on CPAN, this time scratching an itch of Yves Orton. It's called Sub::Current and allows you to get a reference to the currently executing subroutine.

At first I wanted to use a tied scalar instead of a function to get this reference; however, due to the way parameter passing is implemented in Perl, that's not easily possible: the tied variable (let's call it ${^ROUTINE}) is FETCHed at the innermost scope, so this won't work:


sub foo {
    # here ${^ROUTINE} points to bar(), not to foo()!
    bar( ${^ROUTINE} );
}

I think that could be done by tweaking some of the ck_ functions that are used to optimize Perl's internal optree during the compilation phase. I know that some people are not afraid to do this, but I feel that this would be a rather fragile solution, for only a little bit of syntactic sugar.

Wednesday, 20 June 2007

encoding::source

I'm happy to announce to the unsuspecting world that I've released to the CPAN a new Perl module, encoding::source. As I say in the docs, this is like the encoding pragma, but done right. In other words, it allows you to change, on a per-file or per-block basis, the encoding of the string literals in your programs.

That's probably some of the scariest Perl code I've written. Note that it won't run on any released perl; you'll need bleadperl (or the upcoming 5.9.5), because it uses the new support for user-defined lexical pragmas.

Monday, 28 May 2007

Vanity projects

These days the correct vanity project is yet another useless ORM.
-- Matt S. Trout on the london.pm mailing list

Vanity projects used to be templating systems, remember? That was the obligatory small project a beginner ought to write for himself. It seems we've moved on to higher abstractions.

However, I still think that there's room for a good open source OODBMS. That would be an interesting project. Maybe an interesting vanity project, even!

Thursday, 24 May 2007

MySQL annoyance

I got bitten by a bit of insanity in MySQL 5.0.26. Imagine you have a bogus query, SELECT poo FROM SomeTable, that looks correct, except that there is no "poo" column in said table. (You misspelled "foo". So much for your brain.) MySQL will correctly return an error, Unknown column or some such, when you try to run it.

Except in a subquery. Like, for example, in:

DELETE FROM SomeOtherTable WHERE id IN (SELECT poo FROM SomeTable)
which is then exactly equivalent to a simple, unadorned DELETE FROM SomeOtherTable. And you lose your data.

Friday, 4 May 2007

Musings on diff -u

diff(1) and patch(1) are wonderful tools, but there might still be room for improvement. As someone who deals with a large number of patches, I find that patches that just move code around contain too much redundant information, and are thus difficult to read.

I'd like an addition to the unified diff format. Instead of showing a large chunk being deleted and then added again later, it would factor the chunk out, for example like this:


non modified text
-first line of moved text
[-... block number #1 ...-]
-last line of moved text
continuing...
+first line of moved text
[+... block number #1 ... +]
+last line of moved text
rest of the context goes here

For extra points, that should work across files. That could be first implemented as a post-processor to diff(1).

For extra extra bonus points, some clever version control system would use this for an enhanced version of the annotate/blame/praise command, so it could show history even for code that was moved around.