Why EOMA68 will advance both free software and free hardware

If you're not familiar with EOMA68, it is an open electronic interface standard specifically designed to support the development of small computing devices built from free hardware and free software. It is mostly known for its involvement in the third attempt at creating a KDE tablet, known first as the Spark tablet and later as the Vivaldi tablet. That project showed that the continuity of hardware specifications from Asian electronics vendors cannot be relied upon: if your goal is to develop a software stack, targeting constantly changing hardware will consume most of the development resources, rendering the project useless. So it became clear that control of the hardware is very important in the fast-paced world of embedded and mobile computing. The EOMA68 standard is an important stepping stone in this regard, because it defines a strict interface between the processing board, which carries the main components and their drivers, and the board it is inserted into, which provides all the interfaces needed for the final use case. This means the processing boards can be produced in sufficient volume to enable the desired control over the internal components, and thus the free software support. The devices interfacing with the processing boards may still be subject to electronic changes, but thanks to the EOMA68 abstraction such changes will at the very least not affect the basic working of the operating system.
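
To make the abstraction concrete, here is a purely illustrative Python sketch (the class and method names are invented for this analogy and are not part of the EOMA68 specification): because the operating system only ever talks to the processing board's fixed interface, any housing that implements that interface can run the same software unchanged.

    class ProcessingBoard:
        """Hypothetical stand-in for an EOMA68 processing board: the main
        components and their drivers live here, behind the fixed interface."""
        def boot(self):
            print("same kernel and drivers, whatever it is plugged into")

    class Housing:
        """Hypothetical end-user device (laptop, tablet, router, ...) that
        adapts the fixed interface to its own peripherals."""
        def __init__(self, name):
            self.name = name

        def run(self, board):
            board.boot()  # the software stack is untouched by the housing
            print(self.name + " wires up its own screen, ports and battery")

    board = ProcessingBoard()
    for housing in (Housing("laptop"), Housing("tablet"), Housing("router")):
        housing.run(board)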

So in this way EOMA68 enables the development of free software for this kind of hardware, but it also increases the ability to design free hardware. If a more free option for a chip becomes available, the only step needed to free the end-user devices is to develop and build a new processing board. This is a far easier task than incorporating all the device-specific interfaces (like screen drivers), and the production count can be higher since the board is more widely applicable. Also, while developing a new processing board it can be tested on existing EOMA68 platforms without having to develop specific setups. For instance, new processing boards can be beta-tested by swapping cards around between people who own EOMA68-compatible devices. Likewise, new EOMA68 platforms can be developed and tested by comparing their behaviour across different processing boards. So if a driver is functional on a common 64-bit architecture, the driver on another architecture can be tested to produce the same results, all without creating specific setups for each hardware component.

In addition, the standard brings the advantage of upgradeable and even shared hardware to the table. The PCMCIA-based boards can be handled by consumers without risking ESD issues, and the interface allows repeated plugging and unplugging without wearing out the contacts. So if your laptop gets slower, you just buy a new board for it. And by switching your boards around like a game of dominoes you can consequently upgrade your netbook, tablet, router or even your smartphone as well. You can leave the now spare processing board on the shelf as a back-up, or buy an additional platform to fill another need. This type of upgrading reduces cost and e-waste. Another option would be true continuity: carrying a single processing board and changing the device it is plugged into depending on the need. You could even switch to a device with another screen type if you would like to work out in the sun, or use the built-in connectors of the processing board to watch your holiday pictures at a friend's place.

So how can you get on board with this? Well, there is a crowdfunding campaign about to launch in order to bootstrap this new paradigm. And, just as the standard is designed to enable, a new and more free processing board is already in development.

IEEE Open Source Software Task Force

Sometimes an open initiative just 'clicks', because it fills a growing need and does so in the right way. Great non-software examples I have come across in recent history are Wikipedia, OpenStreetMap, RepRap, DIY Book Scanner, WikiHouse, OpenDesk and EOMA68. Just yesterday I experienced another such 'click' initiative: the IEEE Task Force on Open Source Software for Power Systems. This initiative has a clear mission of encouraging free software adoption in this rather conservative field:

This Task Force explores the potential for open source software (OSS) in the Power Engineering Society (PES). The mission of the Task Force is twofold:

  1. diffuse the philosophy of OSS in the power systems community
  2. promote OSS for the benefit of the PES ranging all the way from simple pedagogical OSS to commercial-grade OSS.

— IEEE Open Source Software Task Force

Having a power systems background, ever since I became aware of free software I have wondered why so little free software is being developed and used in the field of power systems. This concerns software for calculations and simulations, but also operational systems like SCADA, which could certainly benefit from having more eyes on the code. The calculation and simulation software is also entering the operational domain, now that the increased number of measurements and the available computation power allow for real-time grid analysis.
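
As a rough impression of the kind of calculation such grid-analysis software performs, below is a minimal DC power flow sketch in Python; the three-bus network and its numbers are made up for illustration and are not taken from any project mentioned here.

    import numpy as np

    # Lines given as (from_bus, to_bus, reactance in p.u.); a made-up 3-bus grid
    lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
    n_bus = 3
    slack = 0  # bus 0 is the slack (reference) bus

    # Build the nodal susceptance matrix B
    B = np.zeros((n_bus, n_bus))
    for f, t, x in lines:
        b = 1.0 / x
        B[f, f] += b
        B[t, t] += b
        B[f, t] -= b
        B[t, f] -= b

    # Net active power injections in p.u. (generation minus load);
    # the slack bus balances the rest.
    P = np.array([0.0, 0.9, -0.9])

    # Solve the reduced system B' * theta = P for the non-slack buses
    keep = [i for i in range(n_bus) if i != slack]
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])

    # Resulting line flows in p.u.
    for f, t, x in lines:
        print(f"flow {f}->{t}: {(theta[f] - theta[t]) / x:.3f}")

Real-time grid analysis essentially repeats this kind of computation (and its AC equivalents) continuously as new measurements arrive.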

In any case, power system software is becoming an ever more important part of the core business of power system development and management. Some vendor independence and collaboration in development therefore seems both important and sensible. Current practice, however, indicates a low level of adoption by the industry, probably because free software has only recently come to the industry's attention, and because of the lack of companies offering support. The latter model has proven to work in the software industry, with Red Hat as a great example.

Two presentations listed at the 2009 panel sessions stood out to me, because they show the task force cares about software integration. One discussed an interchangeable data format that expands upon existing standards to better allow software programs to tie in with each other. The other discussed GIS integration, an important development in bridging the gap between the real world and the simulation model, since the scope of a power system is greater than its individual components.

The software list published by the task force certainly contains some projects I'll look further into. I hope the efforts of this task force and the listed projects will contribute to a bright power system future.

The fun of free software

Despite running Linux for over 6 years now, I only recently converted my machine to Debian Testing. My initial reason for running Testing was to obtain newer versions of packages I value, like the GNOME desktop environment, the LaTeXila editor, and the Scilab simulation software. So right out of the box it was very satisfying to experience the progress that had been made since the last Debian Stable release. In Scilab, however, I ran into a problem with graphs not displaying as they should. Of course I filed a bug report, and with a workaround (exporting SVG images) I was able to continue business as usual. What I wasn't expecting is the level of excitement that comes from having a bug that was bothering you finally solved. When I upgraded my packages today I found that this specific bug had been fixed, and even though it was a minor issue, it is amazing that all the people involved in patching the software and releasing it cared about my issue, and that so many other users will benefit from this patch as well. Seeing not just the larger updates but especially the smaller improvements sheds a different light on software development in the free software community, and I'd like to think it is very addictive, especially for the more technical users, to be continually supplied with small improvements.

Circumventing Google on mobile

Nowadays there are many ways to circumvent Google's services on mobile, which is especially important to Android users who would like to take the next step in freeing their Android. There are other email providers, other PIM syncing services and other application distributors. However, I would assume that sometimes a couple of non-free applications hold users back from freeing their Android, for instance because no free alternative is available or because their friends are tied into a non-free environment. Luckily the Linux Action Show made me aware of GooglePlayDownloader, a project which enables the user to download .apk files from the Google Play Store while circumventing the logging and syncing required by Google. This is of course a cat-and-mouse game, with the associated projects reverse-engineering the APIs and store navigation to keep track of this moving target. With most software creators targeting just the Google Play Store for Android applications, this is a valuable addition to the set of tools that aid in freeing mobile users.

Truly user-centered design

Federico Mena Quintero just published an extensive write-up about why the Linux desktop (GNOME) should focus on user security and user safety. Federico was in turn inspired by the talk by Matthew Garrett at GUADEC 2014, as featured by Linux Weekly News. Using the parallel of city safety, Federico describes how the whole environment (desktop or city) benefits from the established level of security and the achieved level of safety. I'd like to think that security is about the hard limits, whilst safety is about the soft limits, both of which can be crossed depending on the experience of the user. In a sense, serving a secure and safe freedom-oriented system would make it impossible for users to compromise their own safety, security and privacy unless specific additional features are enabled. Of course the details of these features should be made very clear to the user, in order to avoid users unknowingly endangering themselves. The small bits which can be worked on at GNOME are listed in the meeting documents of the GNOME safety team.

Great insights from Flock 2014

This month the Fedora Flock conference was held in Prague. Even though I haven't used Fedora in a while now, the conference was interesting to me because of the other topics discussed. I already reported on the Novena presentation, but below I've listed the other presentations.

Free And Open Source Software In Europe: Policies And Implementations – Gijs Hillenius

This presentation gives a nice overview of various initiatives around free software and of how well organizations transition towards free software. The statement about the mayor of Munich has unfortunately been amplified by the Linux press, but judging from this presentation the transition is properly locked into processes and there won't be a change of plans any time soon. Gijs also gave other great examples of free software being used, of which the Gendarmerie struck me by its scale and determination. Of course the main issue in Europe related to this topic is the reluctance of the European Commission to even consider free software, which Gijs covers as well.

Building an application installer from the ground up – Richard Hughes

The presentation gives a nice overview of the process of solving the well-known problem of making legacy systems compatible with a new system. Basic considerations were how to deal with local and remote information storage, and how to deal with fonts, plugins and terminal commands. I believe the team did a great job by keeping a local focus (including search), incorporating development effort into the ranking, refraining from including all terminal commands in the software center (which would totally clutter the interface) and supplying content for the premium applications. This will help make the software center a premium tool which will not only aid casual users, but will be powerful for power users as well.

Better Presentation of fonts in Fedora – Pravin Satpute

Overall I didn't find this presentation a strong one. It did make me aware of a new fact, namely that developers are able to choose their own fonts, regardless of the fonts included in the distribution or supplied by the user. However, I'm not quite sure whether Pravin perhaps meant that developers aren't able to develop for a specific set of fonts, because that is decided later on by the selected theme and the font settings. Halfway through the presentation there was a small discussion about the font feature in the new software center, where the main questions were about grouping fonts and how to deal with example texts. These questions remained unanswered, however. Pravin provided a link to his font portal, which seems to be aimed at providing additional features like comments and character-support views on top of a concept like the Open Font Library. The key point I took away from this presentation is that work is needed on a generic overview covering font characteristics, character support, license information, readability, and possibly user reviews.

GNOME: a content application update – Debarshi Ray

This presentation gives a great overview of GNOME's effort to come up with a set of applications to manage content, much in the same way Adobe Bridge introduced that concept a while ago for the Adobe Creative Suite. It is not about viewing or editing and it is not about the files; it is about managing content from various sources. One of the powerful concepts explicitly highlighted is the 'reversible delete': rather than explicitly asking for confirmation, you can undo an accidental deletion. Furthermore, the secondary click (right click) has been removed to better suit touchscreen controls. Debarshi also gives a hint of things to come concerning sharing via various sharing points, managed in the settings dialog. The mock-up also shows regular applications like GIMP and Inkscape being covered by this concept of sharing points, which seems odd but would help to unify the management concept.
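
The reversible-delete pattern is easy to sketch outside GNOME as well; the following Python snippet is a generic illustration (not GNOME code) in which a deleted item disappears from view immediately but is only purged for real once a short undo window has passed.

    import threading

    class Collection:
        def __init__(self, items):
            self.items = set(items)
            self._pending = {}

        def delete(self, item, undo_seconds=10):
            # The item disappears from the visible collection right away...
            self.items.discard(item)
            # ...but is only purged for real once the undo window expires.
            timer = threading.Timer(undo_seconds, self._purge, args=(item,))
            self._pending[item] = timer
            timer.start()

        def undo(self, item):
            timer = self._pending.pop(item, None)
            if timer is not None:  # still within the undo window
                timer.cancel()
                self.items.add(item)

        def _purge(self, item):
            self._pending.pop(item, None)
            print(repr(item) + " permanently removed")

    photos = Collection({"beach.jpg", "city.jpg"})
    photos.delete("beach.jpg")  # gone from view immediately
    photos.undo("beach.jpg")    # restored; nothing was lost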

How Is the Fedora Kernel Different – Levente Kurusa

This presentation was beyond my state of knowledge about kernels, and the Linux kernel in particular. It did however highlight how the Linux kernel can be tweaked to meet different needs and how different distributions make different decisions on these settings. In general, however, I believe most users would never be able to distinguish these kernels, just as I wouldn't. I'd be more struck by decisions on a higher level, like the default desktop environment and the package manager.

Procrastination makes you better – Life of a remotee – Flavio Percoco

This presentation gave a brief and humorous overview of the struggles of working remotely, covering some tips on improving your working life. Its strength is that it is a very personal story that many remote workers will relate to, although it offers only limited pointers to other material on dealing with remote work.

UEFI – The Great Satan and you – Adam Williamson

This was a very explanatory presentation covering both the technology of UEFI and Secure Boot and the practical implications. Since I have no experience with a machine featuring UEFI, I had no idea how much of a pain dealing with UEFI and Secure Boot would be. It seems this very much depends on the machine being used, although best practices exist. It also clarified the controversy around Secure Boot: keys other than Microsoft's could basically have been included, but unfortunately no other party was willing to take on the job. Surely a presentation worth recommending.

UX 101 – Practical usability methods that everyone can use – Karen T.

I found this presentation to be a great one, coming clearly from the design side rather than the development side. It gives a concise overview of how to achieve a great interface, and it is well worth watching again before taking on a new project involving design. I believe anyone involved in user interfaces can learn from this overview.

Yubikeys – Nick Bebout

This presentation covers the Yubikeys by Yubico, which can be used for two-factor authentication. The newer model, the Yubikey NEO, also offers hardware-based PGP. The presentation covered some aspects specifically targeted at Fedora users, but it gave a decent overview of the features of the Yubikey and even of smart cards. Including a demo, this presentation offers plenty of pointers for delving into the various aspects of key management and two-factor authentication.

Richard Stallman reformatted

This year Richard Stallman gave a presentation at TEDxGeneva, which is now available on video. Having seen my fair share of Stallman presentations, it is quite noticeable how Stallman is forced to keep his presentation concise and keep the content aligned with the presented illustrations. Despite this struggle, the presentation gives a good summary of the many aspects of free software, and the iconic illustrations make it very lively and understandable. Surely a video to recommend to others. (Despite Stallman's explicit note to refrain from using the term 'open source', I will classify it this way, mainly because the term 'free software' is easily taken to include freeware, which is even more harmful than open-source software.)

Open furniture

Over the last decades a slight change has occurred in the field of furniture: the rise of modular furniture. Modular furniture has great benefits, since it can often be rearranged to fit changing needs, allowing people to hold on to their furniture much longer. Examples of such systems are Ikea Pax and Besta, Lundia, and my personal favorite, Vitsœ. I do however believe that the current situation is somewhat unfortunate, since the differing interfaces result in a system lock-in which limits the available components and the flexibility of each system. This leaves room for improvement, which gives rise to my idea.

By getting the implicit interfaces of furniture and the compatibility of the various components documented in a wiki format, people might make a more conscious decision for a particular system, which would contribute to the overall time of use. I imagine this covering, for instance, the panel sizes, the measurements between the various screw holes, and the similarity between systems. Furthermore, I hope such information will assist companies and individuals in creating additional furniture components that are compatible with existing systems, in order to help consumers get the most out of their system. It might even give rise to converters that bridge the gap between systems. Exemplary for working with existing standards are the 3D-printable universal construction kit adapters.
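
To give an idea of what such wiki documentation could capture, here is a small sketch of structured interface data together with a crude compatibility check; the system names and dimensions are invented for illustration and do not reflect the actual measurements of Ikea, Lundia or Vitsœ products.

    # Invented example data: panel widths and shelf-pin hole pitch per system
    SYSTEMS = {
        "SystemA": {"panel_widths_mm": [400, 600, 800], "hole_pitch_mm": 32},
        "SystemB": {"panel_widths_mm": [400, 600], "hole_pitch_mm": 32},
        "SystemC": {"panel_widths_mm": [450, 900], "hole_pitch_mm": 25},
    }

    def compatible(a, b):
        """Crude compatibility check: shared panel widths and the same
        shelf-pin hole pitch."""
        shared_widths = (set(SYSTEMS[a]["panel_widths_mm"])
                         & set(SYSTEMS[b]["panel_widths_mm"]))
        same_pitch = SYSTEMS[a]["hole_pitch_mm"] == SYSTEMS[b]["hole_pitch_mm"]
        return bool(shared_widths) and same_pitch

    print(compatible("SystemA", "SystemB"))  # True: overlapping widths, same pitch
    print(compatible("SystemA", "SystemC"))  # False: different hole pitch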

Down the path with Emacs

Just about a year ago I started using Emacs, and I've now come to the conclusion that it is about time to get out of the Emacs world.

I used to rely on Zim and GTG for my notes and tasks, but as I was using ever more shortcuts I was keen on employing more advanced tools, preferably with plenty of conversion formats in order to liberate my content. After an extensive search I started using Emacs Org mode. I must admit that Org mode is brilliant and very powerful. Adopting Org mode as a non-Emacs user meant I had to learn the most common Emacs shortcuts and get a sense of the considerations underlying Emacs. Having got up to speed, it is a brilliant way of managing notes and tasks intertwined, living in flat text files. Being able to create overviews of all the different tasks allowed me to generate advanced agendas and to test various management styles like GTD and Kanban. By far the most powerful example of using Org mode came about half a year in: I had only a couple of days at an external company to work out a project outline. Being able to keep notes and tasks at blinding speed was already incredible, but being able to export a draft outline to both a neatly styled LaTeX report and a LaTeX presentation was a great time-saver which made quite an impression.

Spending a lot of time in Emacs already, it seemed only logical to point more activities towards Emacs, which is the eventual consequence of such a tightly integrated editor with all its versatility. It didn't take long for me to use Emacs exclusively for my writing, coding, news reading and even browsing. Emacs really became my operating system, just the way the community jokes about.

Emacs however isn't an operating system. And Emacs isn't a window manager either. Emacs is just a legacy editor with many powerful modes which can be tailored to suit a lot of use cases. Having adopted Emacs as my main tool, I became quite aware of its limitations; limitations which aren't there when using other programs for the job. The integration of Emacs with other programs wasn't very good either: copying content from Emacs to other programs often required another editor like gedit to bridge the gap.

Now I’m steadily moving my activities and content back to my favorite GUI applications, which have a large user base and are dedicated to a particular set of tasks.

In retrospect I would describe Emacs as a Swiss army knife combined with a pile of wood: you'll be able to achieve a lot with the tool alone, and by carving your own set of tools from the wood you can achieve even more. Nowadays, however, there are more tailored tools for the various jobs, and making everything yourself just seems pointless.