I posted the following on Google+, but it is important enough to be reproduced on Planet. I'm editing it a bit, as it is a followup to my previous post.
While improving the packaging of MongoDB, one thing caught my attention: Ubuntu had already done some of the work on the embedded/convenience libraries, but they had not pushed that work to Debian.
Of course, I discovered this only after I had started working on the improvements to the package.
What gives, Ubuntu people?
Another thing that I saw is that they have patches enabling mongodb to build on armhf. Again, they did not push those to Debian.
Why this lack of cooperation?
Why not push this work and avoid duplication of work?
Being a good downstream, I intend to push some of the patches to MongoDB upstream (if they want them), so that we (Debian) carry a smaller delta. This will also benefit you, Ubuntu folks. Why not join forces and help build a world-class set of packages?
Please, be good netizens and share the work that you have. I firmly believe that the armhf people will be happy to have one of the fancy "cloud" programs available on ARM, especially given the prospect of ARM machines in datacenters.
Oh, just for the record, the kFreeBSD people have sent their contribution and I would love to see (if possible) this running on the HURD.
I have been occasionally working on some Debian-related tasks.
One of those was to get chrony into slightly better shape by using, at the very least, a patch system (indeed, I "modernized" its packaging with the "3.0 (quilt)" format) and putting it in a git repository; I would like to receive some comments on what I have so far.
Debian bug #694690 contains a very brief description of my intentions and of the problems that I see in the current package. IMVHO, chrony is a very nice NTP client and server, and it could even become the default for Debian once it gets into shape. At least one other high-profile distribution, namely Fedora, has switched to chrony as its default NTP software. We can certainly take a look at what they are doing and join forces here.
Another package where I spent some time was mongodb: MongoDB is a tricky package that is only built for two architectures: amd64 and i386. The version in unstable for i386 was 2.0.x (roughly the same as in wheezy), while the version in unstable for amd64 was 2.4.1, which has many features that 2.0 lacks.
The packaging is a bit tricky, since the source tree has a bazillion embedded/convenience libraries (e.g., Google's v8, Mozilla's SpiderMonkey, Boost, Google's Snappy, PCRE 3, etc.). Up to version 2.4.1-2, it used all these convenience copies, which is, of course, a problem for a distribution like Debian.
I changed part of the build process to use the libraries that we already have in Debian and, when Antonin Kral uploaded this newer version, the binary packages were, unsurprisingly, smaller (especially if you take into account that a handful of the libraries may already be installed on the system).
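For the curious, with a SCons-based tree like MongoDB's, switching to system libraries is mostly a matter of passing the right build options. This is a hypothetical sketch, not the actual Debian packaging: the --use-system-* flag names are assumptions based on the options MongoDB's SCons setup exposed around the 2.4 era.

```
# Hypothetical: build against Debian's packaged libraries instead of the
# embedded convenience copies.
scons --use-system-boost --use-system-pcre \
      --use-system-snappy --use-system-v8 all
```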
A few hours later, Antonin uploaded a new upstream version, which means that we now have better MongoDB packages to play with. I am, in fact, really playing with MongoDB as my first NoSQL database, since 10gen is giving an introductory course on how it works and my motivation was to get what we have in Debian in shape for the course.
You can say that I am a firm believer of the "eat your own dogfood" principle.
Regarding MongoDB only being built for i386 and amd64, the BTS has a patch to enable building on kFreeBSD, but the patch is for the 2.0 series and the code has changed so drastically in the 2.4 series that there is no hope of it applying. It would be super nice to have MongoDB working on kFreeBSD and on the HURD too, though.
There is a very nice command line program called nocache, packaged by Dmitry Smirnov (and just approved by the FTP masters!), whose packaging I briefly reviewed at Dmitry's request. It is an amazing utility whose purpose is to bypass/minimize file system caching for a program.
This is especially useful when you are making backups (reading lots of files that would, otherwise, fill the filesystem cache, even if they are not used frequently) or if you are just streaming one file (possibly larger than the system's RAM) to another computer and you have no need to use the file immediately after that.
It performs its job by using the LD_PRELOAD mechanism and calling posix_fadvise() with POSIX_FADV_DONTNEED for the files that the wrapped program touches.
Oh, just one aside: for the readers of Debian Planet and other aggregation services which are not Debian Developers/Maintainers, I contributed to these packages without being the maintainer of them, just scratching some itches and contributing back what I produced.
Let's suppose that you went to a show of your favourite band some time ago and you were able to sneak in a camera (well, in those times, cell phones weren't able to record much more than 176x144 pixels at 12fps).
But then you suddenly found that some people uploaded (short) fragments of that very same show to YouTube and, by collecting those, you might be able to create a "multi-camera" version of the video to keep as a memento of your memorable concert.
Multi-camera, in the sense above, is not the same as multiple angles (as on some DVDs), but something like a TV broadcast, where the stage is filmed by cameras positioned in various places and the broadcast image is switched from time to time according to who is singing, playing, etc.
So, Dear Lazyweb, do we happen to have any Free Software (preferably already packaged in Debian) that is able to help with the task of "aligning" (in time) videos from various (different) sources so as to produce one multi-camera video?
Any comments are welcome, thanks, and if I am successful, I will upload the final video.
I had some spare time a few days ago and I took one of those stupid tests that are so popular on the Internet. Well, here are the results, saying that I am, indeed, a Geek, which does not surprise me that much.
I did expect my Math score to be higher than everything else (I'm only in the 90th percentile with respect to this), while I am high on the Computer side of things (in the 97th percentile).
I guess that I should devote myself to more Mathematics and writing less code.
Continuing the ZFS evaluation journey (which I will summarize here with the things that I learned), I was able to fit about 2.5TB of data on a single 2TB drive, with deduplication enabled.
Unfortunately, even moving files around in ZFS (which you would think would be a cheap operation) takes ages. Removing files also takes ages.
On a completely unloaded Phenom II X4 910 system running Linux 3.5.2 (actually, from Debian's experimental) and zfsonlinux version 0.6.0.71-0ubuntu1~precise1 (recompiled from source to work with my Debian system), I tried to remove a subdirectory that had about 4000 files. According to time, it took:

real    42m51.487s
user    0m0.188s
sys     0m16.769s

which is a bit too slow.
I just (read: "yesterday") upgraded this machine from 4GB to 6GB of RAM (well, I would have upgraded it to 8GB, as that is what I ordered, but one of the RAM sticks arrived dead and I simply used one of the new 4GB sticks together with one of the older 2GB sticks).
But, honestly, I don't see any change from upgrading the RAM from 4GB to 6GB and, unless something magic happens with 2GB more, I wouldn't expect a whole different story with 8GB of RAM. Oh, no swap is being used (well, it is an unloaded machine, after all), and the Linux kernel swappiness knob is at its default of 60.
What I do see, though, is that even for a single file removal the disk is thrashing a lot. I mean, like crazy, and that is, of course, the prime suspect for the bad performance. So, there must be some really heavy metadata churning going on here, and I guess that the people from LLNL would like to know about their modules producing this behavior.
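One plausible explanation for the thrashing: with deduplication on, every removal has to update the dedup table (DDT), and when the DDT does not fit in RAM each update turns into random disk I/O. A back-of-the-envelope sketch, assuming the commonly cited figure of roughly 320 bytes of DDT per unique 128 KiB record (both numbers are rules of thumb, not measurements from my pool):

```shell
# Rough DDT size estimate for the ~2.5TB stored above.
data_bytes=$((2500 * 1024 * 1024 * 1024))   # ~2.5 TB of data
recordsize=$((128 * 1024))                  # default ZFS recordsize
ddt_mib=$((data_bytes / recordsize * 320 / 1024 / 1024))
echo "estimated DDT size: ${ddt_mib} MiB"
```

By that estimate the DDT alone wants over 6 GB, more than this machine has, which would be consistent with the upgrade from 4GB to 6GB making no visible difference.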
Stay tuned for some future impressions.
Well, that was a short-lived experience. On the 5th of September, I received the following e-mail, which, sincerely, is as uninformative as it could get:
Date: Wed, 05 Sep 2012 16:52:22 -0000
From: firstname.lastname@example.org
To: email@example.com
Subject: Google AdSense Support

Hello,

As mentioned in our welcome email, we conduct a second review of your AdSense application once AdSense code is placed on your site(s). As a result of this review, we have disapproved your account for the following violation(s):

Issues:
- Site does not comply with Google policies

Further detail:

Site does not comply with Google policies: We're unable to approve your AdSense application at this time because we feel that your site does not comply with Google AdSense policies or webmaster quality guidelines. It's our goal to provide our advertisers sites that offer rich and meaningful content, receive organic traffic, and allow us to serve well-targeted ads to users. We believe that currently your site does not fulfill this criteria. For more details, please read the webmaster quality guidelines at http://www.google.com/support/webmasters/bin/answer.py?answer=35769 and the AdSense program policies at http://support.google.com/adsense/bin/answer.py?answer=48182, https://support.google.com/adsense/bin/topic.py?topic=1271507 (...)
OK, I went on and read all the documentation on program policies, etc. I investigated each point:
- Invalid clicks and impressions: No violation here.
- Encouraging clicks: No violation here.
- Content guidelines: No violation here.
- Copyrighted material: No violation here.
- Webmaster guidelines: No violation here.
- Traffic sources: No violation here.
- Ad behavior: No violation here.
- Ad placement: No violation here.
- Site behavior: No violation here.
- Competitive ads and services: No violation here.
- Google advertising cookies: No violation here.
To cut a long story short, I RTFM'd and did everything correctly. So, dear lazyweb, what would my options be to fund a tiny bit of my software contributions?
I plan on blogging more frequently with the results of the experiments, but it will take me a few days, as I want to offer people only the "meat" of what I actually did, pointing out some pitfalls that made me waste a lot of time.
Anyway, back to AdSense: one interesting thing about trying it is that I applied to be part of the program three or four times late in August and, for unspecified reasons, my application was rejected for "not complying with their policies". I read the policies from side to side, top to bottom, and nowhere did I see anything that my blog infringed.
This morning, I tried once again and I'm now in a sort of "probation period" for AdSense, which is progress I think.
So far, I have only put one advertisement at the top of my pages, but I am curious to know how "targeted" those ads will be on a blog that has articles written more than a decade ago, with posts in at least two languages.
I guess that this is a weaker version of what Raphaël Hertzog is trying to do with his funding campaigns, except that I only expect to cover the costs of my hardware (even if I am skeptical that I will actually manage to reach that point of self-sufficiency).
This is the result of an old meme, but it is still quite interesting:
What American accent do you have?
Your Result: The Northeast
(runner-up: The Inland North)
Quiz Created on GoToQuiz
Due to some real life impediments (including a beautiful one, just over 2 months old, called Daniel), I will not be able to go to this year's DebConf, even though it seems that this DebConf will exceed many expectations with the support that the government of Nicaragua has just offered.
Things that I would like to have worked on
Unfortunately, this means that I will not be working in person with some fellows that may be there.
In particular, as I am working on some packages that are not that well suited to distributions (especially HandBrake), it would have been highly productive to have a hacking session or two with other people interested in having this popular and easy-to-use video ripper/transcoder in Debian.
There are many other things that I would love to work on, besides helping a bit with some of the still pending transitions under the Debian Fonts Task Force umbrella, meeting friends, and other social stuff.
Oh, well. I guess that working at home will be the solution for some time.
My last cellphone was a dumb phone. I seem to remember that it was called something like Samsung Voicer. It didn't have those hip things like SIM cards.
OK. Fast forward to December of 2010, and after a long hiatus, I decided to get a new phone. It was this shiny Nokia N900. The biggest thing that made me want this one was that some people at DebConf 10 were praising it for having a Debian-like distribution in it, and that it worked very well.
I saw that Phil Hands was enthusiastically talking about the bargain of getting one in New York, since it would have cost him many pounds in the UK, and so he was happy with it.
Otávio Salvador also got one in NY and, during one of the nights of that DebConf, together with Tiago Vaz, Daniel Baumann, and Chris Lamb, I asked him if I could call my parents here in Brazil and he let me use his N900. I spent some more time with it, played a little with the command line, and opened the stock media player to watch a beautiful trailer of 9 (the 2009 film), which left me with a very good impression.
I bought one, and, indeed, it is a very good machine that can even make phone calls. The ability to run Linux on it (even if it contains some non-free pieces of software) was decisive in me getting it. It runs this distribution called Maemo 5 which is loosely based on Debian.
But now Maemo is no longer the "cool thing" to run on these portable devices; MeeGo is. The advent of many Internet forums, though, has brought many people writing long, convoluted howto documents for things that would be better served by, say, providing a script or, better yet, preparing a package.
In the case of getting MeeGo running on the N900, only three things are required:
- Install uboot-pr13 while in Maemo.
- Uncompress the MeeGo 1.2 Community Edition for Nokia N900 image to a µSD card.
- Turn the phone off, plug in the µSD card, and PUT THE BACK COVER on the phone.
The last bit of the third step is crucial, as, otherwise, the SD card won't be detected and you will get kernel panics with the device trying to mount the root filesystem from a device that is not there.
As an aside, the official documentation tells us one should uncompress the available pre-made images and write them with dd to the device. In my experience, using dd is completely unnecessary and, in fact, about 6 times slower than using a simple shell redirection. That is, instead of:
bzip2 -d < mg-handset-armv7nhl-n900-whateverwhatever-mmcblk0p.raw.bz2 | dd bs=4096 of=/dev/mmcblk0
you can get better results with the simpler:
bzip2 -d < mg-handset-armv7nhl-n900-whateverwhatever-mmcblk0p.raw.bz2 > /dev/mmcblk0
I actually used lbzip2 instead of bzip2, but that shouldn't matter. The small block size passed to dd is probably the culprit (a larger one, say bs=4M, would likely close the gap), but I don't see the need to write the uncompressed data in chunks this small at all. If there is a problem with that, I would love to be informed.
As trying it with uboot and a µSD card doesn't mess with your "safe" Maemo installation, this is a good way to play with the successor of Maemo. Perhaps, if we give good feedback to the project, we can influence the direction that it is taking.
It will surely be nice to learn about this new "consumer-oriented" distribution.