Maja can be used in any environment, not only web browsers.
Hello, in this post I'd like to point you to a very nice search engine: DuckDuckGo. I was pointed to it by a post here on why both Google and Yahoo suck (and why Google search sucks more). In older posts of mine you can find some examples of how a Google search can fail miserably. The main problem lies in the two approaches: Google (and Bing too) sorts by popularity, Yahoo by match. So what can happen is that Google finds something completely unrelated to what you need, while Yahoo is much more sensitive to the keywords you type.
DuckDuckGo instead looks at your search keywords and asks you for the meaning of the ambiguous ones (the Yahoo phase), and then sorts the results by popularity (the Google phase). Especially with web 2.0 and advertising, the popularity of a web site matters far less than the actual relationship between the meaning of the search and the page itself.
So, as usual, the conclusion is: don't use only one search engine, use several, because implementation matters; and don't tell people "Google is your friend", because it can come across as offensive. Say instead "use your favourite search engine (YFSE)".
Hello, today is Debian's birthday. For the occasion, a Debian Appreciation Day has been set up to thank the whole Debian organization: all the contributors, developers, teams and everyone involved in the universal operating system. If you want to thank Debian and make the developers feel loved :P here's the page: thanks.debian.net.
Hello, Mash has recently been released. It is a library for reading models in PLY format and creating Clutter actors from them. Clutter is a 3D canvas and animation toolkit, while Blender is a 3D modelling suite; since Blender can export to PLY, you can draw your models in Blender and use Clutter as the rendering engine.
Hello, I've written a MIPS32 (Release 2) disassembler for ELF files. It is not a simple disassembler: it is mostly meant for reverse engineering proprietary boxes for educational purposes. It has been successfully tested on the Vodafone Station, which contains Broadcom binaries. These boxes don't have a section table, so normal disassemblers don't work. Mipsdis instead guesses the bounds of those sections (the most important being TEXT, and RODATA for strings).
This console program outputs friendly assembly code in which each instruction is commented (comments copied directly from the MIPS specification). It also features labels for branches, and symbol resolution for strings, global variables and functions.
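I don't know mipsdis's exact heuristics, but one plausible way to recover section bounds from a binary with no section table is to fall back on the program headers, which the loader needs and which are therefore always present: the executable PT_LOAD segment approximates TEXT, and a read-only non-executable one approximates RODATA. A minimal sketch of that idea, using the host's elf.h and 64-bit structures for illustration (mipsdis itself would use the Elf32_* counterparts for MIPS32):

```c
/* Sketch: classify the loadable segments of an ELF binary using only
 * the program headers, which survive even when the section table has
 * been stripped.  Uses Elf64_* for the demo; a MIPS32 target would
 * use the Elf32_* structures instead. */
#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Returns the number of PT_LOAD segments found (printing a rough
 * text/rodata/data classification for each), or -1 on error. */
int scan_load_segments(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1 ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) {
        fclose(f);
        return -1;
    }

    int loads = 0;
    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        fseek(f, (long)(ehdr.e_phoff + (Elf64_Off)i * ehdr.e_phentsize),
              SEEK_SET);
        if (fread(&phdr, sizeof phdr, 1, f) != 1)
            break;
        if (phdr.p_type != PT_LOAD)
            continue;
        loads++;
        const char *guess = (phdr.p_flags & PF_X) ? "text-like"
                          : (phdr.p_flags & PF_W) ? "data-like"
                          : "rodata-like";
        printf("segment at 0x%lx, size 0x%lx: %s\n",
               (unsigned long)phdr.p_vaddr,
               (unsigned long)phdr.p_filesz, guess);
    }
    fclose(f);
    return loads;
}
```

Within the executable segment you would then still need heuristics (e.g. following branch targets) to separate code from inline data, which is where the real work of a tool like mipsdis lies.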
Hello, I'd like to point you to the Firefox extension HTTPS Everywhere (released June 17th, 2010), and to SSLGuard, which is also a Firefox extension (first released Oct 14th, 2009).
The code of the former extension is a lot more complicated, yet the result is not always quite the same as SSLGuard's. In fact, while HTTPS Everywhere supports secure cookies and per-website custom rules, SSLGuard lets you add custom websites to be secured directly from a friendly graphical dialog.
You could even install both of them; apparently they don't conflict.
It's the second time decrew's ideas have been duplicated. This happened some time ago with SSLtoHTML (an ettercap plugin) and sslstrip (a standalone application), but they released their code before us. Funny, isn't it?
I'm not complaining about anything (I'm not saying "copied", I'm saying "duplicated"), just clearing things up. Of course, it's better to have more choice and more work out there.
Hello, Google Reader has just wiped out my subscription list, which made me scream like a crazy monkey (besides cleaning up the list). Now I'm using Bloglines, which looks pretty good, except there's nothing like "read all items": you must click on each feed to see its items (any alternative to Bloglines that supports this?).
But as far as I can see, I'm not the only one who lost their feeds; the difference is that I had just opened the reader, without doing anything else.
I've also heard that My Yahoo! is a good aggregator; the only problem is that Epiphany/WebKit is broken with Yahoo (no CSS). Is anybody else experiencing this?
I think it's far from perfect, and I actually haven't tried integrating it with autotools, though it shouldn't be that hard. Unfortunately, you have to somehow break the gtk-doc rule "do not run it manually", because valagtkdoc sits in the middle between gtkdoc-scan and gtkdoc-mkdb.
If anybody has a better solution, please tell me :)
Hello, lately I've received some feedback, thanks for this.
1) Is it compatible with apt? Can I go back to plain dpkg after using tdpkg? The answer is... yes! You can use whichever you want, in whatever order you want, and use tdpkg whenever you want. Just keep in mind that if you use dpkg (or apt) without tdpkg and then run tdpkg again, the cache will be rebuilt for consistency.
2) It's not working here (Ubuntu, another distro...): it doesn't create the cache. First of all, you have to be root the first time you run tdpkg, in order to create the cache. If this doesn't solve the problem, you are probably on an untested platform. Debian uses eglibc, and tdpkg has been tested on i386 and amd64. Since tdpkg wraps glibc calls, it may happen that it doesn't wrap the right functions. If you want tdpkg ported to your platform, please comment here with the output of these commands:
objdump -T /usr/bin/dpkg | grep open
objdump -T /usr/bin/dpkg | grep stat
objdump -T libtdpkg.so | grep open
objdump -T libtdpkg.so | grep stat
3) Should I add the alias for apt-get and aptitude too? Yes, you have to. aptitude and apt-get invoke dpkg without going through the shell, so an alias for dpkg alone won't work.
Another thing I'd like to mention is that dpkg/experimental has a patch that speeds up database reading a lot by asking the kernel to cache the .list files, i.e. dpkg avoids the cold start. This brings the time down from 14 seconds to about 3 seconds! In total, with tokyocabinet you get 1 second; and consider that a cache built into dpkg itself would mean even less than 1 second.
Hello, you may have noticed that dpkg takes a long time reading its database the first time you run it (e.g. through apt). This is because of the huge number of /var/lib/dpkg/info/*.list files (1700+ on my desktop machine). Installing or removing a single package can take 14 seconds and more at cold start. Back in 2007 a first proposal (by Sean Finney) to use sqlite as a cache was posted to the dpkg mailing list, and a couple of weeks ago I proposed it again. No reply from the maintainers since then.
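The cost comes from opening and reading every one of those small files individually at cold start. A minimal sketch of how you can see the scale of the problem yourself, counting the .list files the same way dpkg has to touch them (the helper name is mine, only the /var/lib/dpkg/info path is dpkg's):

```c
/* Sketch: count files with a given suffix in a directory, e.g. the
 * *.list files under /var/lib/dpkg/info that dpkg must read one by
 * one at cold start. */
#include <dirent.h>
#include <string.h>

/* Returns the number of entries in `dir` whose names end in `suffix`,
 * or -1 if the directory cannot be opened. */
int count_files_with_suffix(const char *dir, const char *suffix)
{
    DIR *d = opendir(dir);
    if (!d)
        return -1;

    int count = 0;
    size_t slen = strlen(suffix);
    struct dirent *ent;
    while ((ent = readdir(d)) != NULL) {
        size_t nlen = strlen(ent->d_name);
        if (nlen >= slen &&
            strcmp(ent->d_name + nlen - slen, suffix) == 0)
            count++;
    }
    closedir(d);
    return count;
}
```

Calling `count_files_with_suffix("/var/lib/dpkg/info", ".list")` on a typical desktop install should return well over a thousand, which is exactly why a single consolidated cache file wins so much time.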
My first idea was to fork dpkg and only change the part that reads the list files. That would have meant installing another dpkg version, and I didn't do it for two main reasons: most people wouldn't have replaced dpkg, and it would have been too hard to maintain. The solution is tdpkg, a shared library that wraps the glibc function calls made by dpkg. The README tells you to back up your /var/lib/dpkg/info, but tdpkg is robust enough not to mess it up.
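Wrapping another program's glibc calls from a shared library is presumably done here with the standard LD_PRELOAD mechanism: the preloaded library defines a function with the same name as the libc one, and fetches the real implementation via dlsym(RTLD_NEXT, ...). A minimal sketch of the technique, with the actual cache lookup elided (the path check and library name are illustrative, not tdpkg's real code):

```c
/* Sketch of the LD_PRELOAD interception technique: when this library
 * is preloaded, its fopen() is resolved before glibc's, so it can
 * intercept dpkg's reads of .list files and serve them from a cache
 * instead.  The cache itself is elided here. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

FILE *fopen(const char *path, const char *mode)
{
    /* Look up the next fopen in the link chain, i.e. glibc's. */
    static FILE *(*real_fopen)(const char *, const char *);
    if (!real_fopen)
        real_fopen = (FILE *(*)(const char *, const char *))
                     dlsym(RTLD_NEXT, "fopen");

    if (strstr(path, "/var/lib/dpkg/info/") &&
        strstr(path, ".list")) {
        /* A real implementation would return cached contents here. */
    }

    /* Fall through to the real glibc fopen. */
    return real_fopen(path, mode);
}
```

Built with something like `gcc -shared -fPIC -o libwrap.so wrap.c -ldl` and run as `LD_PRELOAD=./libwrap.so dpkg -l`, every fopen dpkg makes goes through the wrapper first, which is why no modified dpkg needs to be installed.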
Tdpkg comes with tokyocabinet (faster) and sqlite (better at handling concurrency) cache backends. I've managed to bring the cold startup time down from about 14 seconds to about 2 seconds. I will definitely have fun installing and removing applications again.
Hello, it's often useful for Vala hackers to have a graphical representation of the code tree and of the control flow blocks. Therefore I've created a simple application called Valag, which uses Graphviz to generate four types of diagram for each stage of the Vala compiler.
This is the first release and there are many things to fix and enhance (like command line options), but it already works quite well as a support tool when hacking on Vala.
Hello, some time ago I posted twice about Google search returning wrong results. Now, after a few months, it's happening again (it's as if Google breaks its page weights every 4-5 months). The search is "debian dpkg list": I really expect the mailing list info page, but even after 3-4 pages of Google results (and Bing's too, this time) I couldn't find it. With Yahoo search it's in first position (lists.debian.org/debian-dpkg).
As in the previous posts, the lesson is that using different search engines matters, and helps you find what you actually need.
Hello, I'm writing about the work done during the bug-triage weekend of the Debian GNOME team, which started on 27th Feb and ended on 28th Feb. The result is great: 167 bugs have been closed, and many more have been triaged and forwarded upstream.
Thanks to everybody who contributed, especially those who did it for the first time (well, you can still continue working on the remaining bugs ;) ).