
Ubuntu calling freedom

So far the flash sales of the first Ubuntu phone by Bq have sold out, and certainly not without reason: the Ubuntu phone holds great promise for both users and the development community. On the FSFE Discussion mailing list I already gave a quick and general overview, mostly based on a recent Linux Unplugged podcast, so in this post I'd like to revisit my comments with a focus on freedom, as this angle is lacking in other articles. One word of caution though: I haven't yet read formal documents or code, so all information here is second-hand.

First off, embedded devices are difficult, and phones in particular are hard, as Fairphone for instance has come to find. The problem with phone hardware in general is that a build is needed for each specific phone, since the auto-discovery of peripherals we know from regular computers is missing. Add to that the fact that the electronics are developed more rapidly than free drivers can be written, as was the case for the Vivaldi tablet. So unless you have a say in the electronics, and can run a non-signed bootloader, it is very hard and especially time-consuming to develop this lowest layer as free software, which is also why projects like Neo900 and GTA04 exist. One of the added benefits the GTA04 offers is that the modem is physically separated from the other processors, as the modem implementation is locked down by law. This is about as supportive of free software as a free hardware design can get, but this freedom comes at a cost in performance and money, thus requiring plenty of commitment to become a reality.

So in order to actually ship a product, using non-free designs and chips will be the default option, as Ubuntu did in this instance. To get a kernel running, the device-specific board support package offers the prerequisites needed to boot the Linux kernel. But rather than modifying the Linux kernel and building a tightly integrated software stack for a particular device, as is the case for Android, Ubuntu Phone separates the software stack into two layers: a device-specific part and an Ubuntu part. This separation is ingenious and brings great benefits.

By keeping the Ubuntu part separate, it can be updated in the future without revising the device-specific part, allowing all models to stay up to date with the newest Ubuntu and thus avoiding both the version fragmentation of Android and the limited number of firmware updates as on iOS. Users can therefore keep getting security fixes and the features newer applications might rely on. It would also be possible to run a different top layer for a specific mobile operator, or a different interface altogether, on top of this separation layer. I haven't looked into this layer, but ideally it should be clean and stable in order to allow others to adopt it.

Likewise the bottom part can be swapped. For instance an Ubuntu Phone port was made to the Nexus 5 by building the necessary but limited hardware support and offering the separation layer. Thanks to the separation, this port will be able to keep up with firmware updates, so all additional development effort can go towards improving the device-specific part rather than chasing firmware versions. Depending on the required complexity of this device-specific layer, porting additional devices is relatively easy and particularly fruitful, as it can remain nearly a one-time effort.
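
To make this split more tangible, here is a small Python sketch of the idea. It is purely illustrative and not Canonical's actual image tooling; the layer names and versions are my own shorthand. The point is simply that the system image consists of two independently versioned layers, and an over-the-air update only ever replaces the Ubuntu layer.

    # Illustrative model of the two-layer image split described above.
    # This is NOT Canonical's actual image tooling; all names are made up.

    from dataclasses import dataclass


    @dataclass(frozen=True)
    class Layer:
        name: str      # e.g. "device-bq" or "ubuntu-rootfs"
        version: int   # each layer is versioned independently


    @dataclass
    class SystemImage:
        device_part: Layer   # kernel, drivers and other board-support bits
        ubuntu_part: Layer   # Unity, the Qt/QML stack, apps and scopes

        def ota_update(self, new_ubuntu: Layer) -> "SystemImage":
            """Ship a newer Ubuntu layer without touching the device layer."""
            return SystemImage(self.device_part, new_ubuntu)


    if __name__ == "__main__":
        phone = SystemImage(Layer("device-bq", 1), Layer("ubuntu-rootfs", 20))
        phone = phone.ota_update(Layer("ubuntu-rootfs", 21))
        # The device-specific layer is untouched, so every ported device
        # can follow the same Ubuntu releases.
        print(phone.device_part.version, phone.ubuntu_part.version)  # 1 21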

I'm not aware how free the Ubuntu part is, although I assume it is in line with other Ubuntu distributions, which mostly adhere to your needs for freedom. The interface is based on Qt5 and is very supportive of HTML5 applications. In this way mobile applications would be able to run on the Ubuntu desktop in the same manner, offering a great convergence solution. It also supports the efforts being made to put forward HTML5 applications as a run-everywhere solution. There is no policy requiring applications to be free, so you can install all kinds of applications, of which a long list is already available. Users are able to sideload applications, avoiding dependence on an app store, which is probably the reason why Ubuntu hasn't launched an app store just yet. Of no less importance, it seems to be well designed and offers great usability.

One somewhat overlooked part is the availability of scopes. They aren't so much overlooked in functionality as in philosophy. Android and iOS have only recently realized that apps can be complementary and that it is up to the firmware to provide the integration. This can be news and weather, but more recently health and home automation seem relevant as well. The fact that a scope can work either with local data or on the internet, but not both, respects the capabilities of the device and prevents unwanted data transmission. More importantly, by offering aggregated scopes you can create a locally generated view. This adheres to the vision of a web which is decentralized rather than centralized, and in which each computer has many outgoing connections.
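
As a rough illustration of the aggregation idea (a hypothetical Python sketch, not the actual Unity scopes API), a scope can be modelled as a result source that is declared either local or networked, with an aggregator building one locally generated view out of several of them:

    # Hypothetical sketch of scope aggregation; not the real scopes API.

    from typing import Iterable, List


    class Scope:
        """A result source that is declared either local or networked."""

        def __init__(self, name: str, local: bool):
            self.name = name
            self.local = local

        def search(self, query: str) -> List[str]:
            # A real scope would query a local index or a web service here.
            return [f"{self.name}: result for '{query}'"]


    class AggregatorScope:
        """Builds one view on the device out of several child scopes."""

        def __init__(self, children: Iterable[Scope]):
            self.children = list(children)

        def search(self, query: str, offline: bool = False) -> List[str]:
            results = []
            for child in self.children:
                if offline and not child.local:
                    continue  # respect the device's connectivity and privacy
                results.extend(child.search(query))
            return results


    today = AggregatorScope([Scope("Calendar", local=True),
                             Scope("Weather", local=False)])
    print(today.search("monday", offline=True))  # only local data is used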

Of course the elephant in the room is that the phone ties into the Ubuntu ecosystem, so convergence would be best between the Ubuntu phone and the Ubuntu desktop, and likewise it would give a boost to the Ubuntu store, Ubuntu Snappy Core and presumably to cloud services. So what if Ubuntu were to become the next big platform? Well, it would bring a very free firmware which is very friendly to device porting, it would encourage development in HTML5 and Qt, it would encourage more decentralized applications, it would enable development of the Ubuntu phone itself, and it would put a great alternative next to the Google-ized Android and other systems.

Either way, I nearly bought one but just missed out on the flash sale. I'd strongly consider ordering one, because I believe this stack is much more freedom-respecting than Android. More frustratingly, my perfectly fine phone is still on Android 2.2, with a lack of application support and a whole load of known bugs. I haven't looked deeply enough into Jolla or Tizen to judge them. There are many known improvements still to be adopted, in hardware, in firmware and in the available applications. For now, however, this seems to be a great phone with a great software platform, and another stepping stone in the right direction.

Software isn’t magic

Last month the news landed that the recent Microsoft Outlook app for Android and iOS was leaking login credentials. Because of this leak the European Parliament and some universities have blocked the use of this app. Although Microsoft promises double encryption of the credentials, that claim is an optimistic representation of the actual practice:

What I saw was breathtaking. A frequent scanning from an AWS IP to my mail account. Means Microsoft stores my personal credentials and server data (luckily I’ve used my private test account and not my company account) somewhere in the cloud! They haven’t asked me. They just scan. So they have in theory full access to my PIM data.

— Rene Winkelmeyer

From an engineering perspective this seems a straightforward way of offering push messages when the original synchronization interface wasn't suitable for it. But something is of course totally off in the interface of the app: asking whether or not you'd like to receive push messages only covers part of the deal. The real result of switching on push messages can be read in the privacy statement:

We provide a service that indexes and accelerates delivery of your email to your device. That means that our service retrieves your incoming and outgoing email messages and securely pushes them to the app on your device. Similarly, the service retrieves the calendar data and address book contacts associated with your email account and securely pushes those to the app on your device. Those messages, calendar events, and contacts, along with their associated metadata, may be temporarily stored and indexed securely both in our servers and locally on the app on your device. If your emails have attachments and you request to open them in our app, the service retrieves them from the mail server, securely stores them temporarily on our servers, and delivers them to the app.

— Microsoft
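
To see why this architecture surprises users, compare, in a simplified Python sketch, a mail app that speaks IMAP to the server directly with one that hands the account credentials to an intermediate push service. Only imaplib is a real standard-library module here; the relay endpoint is hypothetical and the whole thing is a conceptual contrast, not Microsoft's actual implementation.

    # Simplified contrast between direct mail access and a cloud relay.
    import imaplib

    USER, PASSWORD, MAIL_HOST = "alice@example.org", "secret", "imap.example.org"


    def fetch_direct():
        """The credentials never leave the device: the app speaks IMAP itself."""
        conn = imaplib.IMAP4_SSL(MAIL_HOST)
        conn.login(USER, PASSWORD)
        conn.select("INBOX")
        typ, data = conn.search(None, "UNSEEN")
        conn.logout()
        return data


    def fetch_via_relay():
        """A push-style relay (hypothetical endpoint) needs the credentials
        server-side, so a third party can poll the mailbox on your behalf."""
        import requests  # assumption: the app ships an HTTP client
        return requests.post("https://relay.example.com/register",
                             json={"user": USER, "password": PASSWORD,
                                   "host": MAIL_HOST})

The second variant is essentially what the privacy statement above describes: convenient push delivery in exchange for storing your account data on somebody else's servers.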

It is an unfortunate combination of a lack of security with an unclear presentation to the user. Likewise I'm curious who actually knows that Google stores the WiFi credentials of all users who have enabled the 'backup' option. In fact, these misconceptions about the inner workings aren't an exception; they are the usual case. Arne Padmos spoke at the last CCC and referred to research into the public perception of email. The over-simplistic drawings on page 15 clearly show people's lack of understanding of the parties involved. Likewise 29% of U.S. citizens believe the cloud has something to do with the weather, and 95% are using cloud services while thinking they aren't.

Software isn't magic, but unfortunately it isn't easy to understand for most people either. I'm certain we can, and should, do a better job of educating the general public on these topics. It feels like a big secret waiting to come out that so many parties and services are involved in making a service work. A secret we'd rather not bother a customer with, because the engineers have taken care of it and weighed the pros and cons for the customer. But wouldn't the customer be better off knowing what decisions underlie a system, so that they can make an educated choice?

In the Netherlands we have standardized, obligatory layouts for energy bills so that customers have a better chance of understanding the product. Likewise there is a standard specification describing more complex financial products with a similar goal. In this light it seems odd that digital services, which are often highly complex, can get away with obfuscating instead of explaining. If more people knew their emails are like postcards, and knew how many parties handle those emails, I'm certain the demand for encryption would increase.

Optional rights

Our societies are built on rights which correspond to social norms; fundamental rights correspond to fundamental norms and local rights correspond to local norms. These rights can either be written down as laws, or merely be the practical manifestation of an informal norm. This collection of rights is the product of many, many years of progress, but that unfortunately doesn't mean we can take them for granted. Every single day our rights are subject to discussion and shifting norms.

In recent history it seems that our established rights are no longer taken for granted but are repeatedly being offered as an option: the choice between keeping your rights and gaining some convenience or financial benefit. Whilst this does not attack our rights directly, it still does so by shifting our norms. If a majority of people aren't aware of this 'trap' and consequently give up their rights, the decreased level of rights becomes the new norm. In these cases offering a choice is hurtful to society, unlike choice in the marketplace. Yet that marketplace analogy is used as an argument to justify making rights optional.

Recently in the Netherlands the right to choose your own doctor was the subject of debate in parliament, as the liberal party wanted to offer it as a choice rather than as a right. Giving up this right would yield a financial benefit to health insurers resulting from their improved negotiation position. In principle consumers should still have this right available to them, but this market principle only holds if some insurers actually offer this freedom of choice, and if consumers are aware of the trade-off and care enough to defend the right. Founding a new insurance company that adheres to these norms would be the way of the market, but that is easier said than done. The market principle thereby undermines the stack of rights we have built up over the years as a society via our democratic process.

This grim future of unavailable rights is already a fact in the Dutch educational system, as explained on this Dutch page. Whilst the Dutch parliament has agreed on the right for people to use strictly open standards and free software during their education, there is not a single Dutch school offering such a programme. The reason is that in practice schools choose their own IT systems, and the market of students demanding education that respects open standards and free software is apparently too small or too dispersed. So despite our democratic parliament agreeing on this right, in practice it is subject to what the market offers, and as a consequence this right isn't defended anywhere.

Another example is the infamous Facebook, which uses its social lock-in to push users into accepting new terms that violate social norms on privacy, intellectual property and copyright. Rather than offering any benefit in return, it leaves not using the service as the only alternative. In order to defend our established rights, we must stand against this violation both as users and as a society. In this regard we can be glad the Dutch Data Protection Authority is at least investigating Facebook's new terms.

Considering established levels of privacy, security, freedom or any other kind of right as a marketable feature is hurtful to society, because it erodes our values, our norms and therefore our rights.

This insight was triggered partially by the presentation on Privacy in Context by Helen Nissenbaum and the presentation by Richard Stallman at 31c3.

Why engineering students need to be taught free software

At a power systems symposium today I met some of my former classmates from the technical university, now in the starting phase of their engineering careers. My view on the need for free software in education was once again confirmed. Whilst at the university many advanced software packages are provided to students at negligible cost, at work these same tools are hard to obtain. In practice these packages are too expensive to be used on just a couple of cases, let alone to 'try out' to find a use case. This basically leaves the choice between misusing unsuitable packages or not taking on the task in the first place, both of which are generally undesirable.

As I have learned, and my classmates are learning as well, as an engineering professional you are in need of software with no strings attached: free software. Engineers are taught to overcome many hurdles by grasping the problem and coming up with the right approach for solving the problem at hand. Restricting the set of possible approaches by restricting the software selection ultimately leaves engineering potential unmet, making this practice hurtful to the end result.

As each engineer needs the software for a different use case, software packages generally cover a larger set of features in order to target a larger market, resulting in relatively overpriced software. Apart from the cost of the package itself, there are the costs of maintaining yet another software installation and of recurring licence fees per year or per version. A way to diminish this barrier is to offer subscriptions to hosted solutions, as many software vendors have started doing. Whilst this reduces the upfront cost, there is more to free software than cost alone.

The freedom to modify the code enables integrating a software package into a solution such as an automated tool chain. Better still, by modifying the underlying code or even working with upstream developers, engineers can customize and improve each tool in their tool set. Since it is free software, no party will be able to take it from you, and you are able to fork the software if you disagree with the direction development is heading in. In this way an engineer is able to achieve far greater independence.

Whilst it may seem a good idea to teach students the professional software packages used in the workplace, this approach presumes that those packages will be available to them on the job after graduation. If that isn't the case, these engineers face unmet potential. By teaching free software, all students are able to exercise their potential, even though some will encounter a non-free package on the job. If the latter is the case, it is presumably because of specific features, which wouldn't have been taught at university in the first place.

Furthermore, students need to be taught to evaluate software offerings in order to select a package suited to the task at hand, rather than having a package selected for them which is then often misused or underused. And free software should be taught just as academic practice is taught, since both value sharing information and checking the work of others.

Why EOMA68 will advance both free software and free hardware

If you're not familiar with EOMA68, it's an open electronic interface standard specifically designed to support the development of small computing devices built up of free hardware and free software. It is mostly known for its involvement in the third attempt at creating the KDE tablet, known as the Spark tablet and later the Vivaldi tablet. In that project it was found that it is impossible to rely on the continuity of hardware specifications from Asian electronics vendors. If your goal is to develop a software stack, targeting changing hardware will consume most of the development resources, rendering the project useless. So it became clear that control over the hardware is very important in the fast-paced world of embedded and mobile computing. The EOMA68 standard is an important stepping stone in this regard, because it defines a strict interface between the processing board, which includes the main component drivers, and the board it is inserted into, which provides all the interfaces needed for the final use case. This means that the processing boards can be produced in sufficient volumes to enable the desired control over the internal components and thus the free software support. The devices interfacing with the processing boards might be subject to electronic changes, but thanks to the EOMA68 abstraction, such changes will at least not affect the basic working of the operating system.
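
The effect of such a strict interface can be sketched in a few lines of Python. This is a conceptual model of my own, not anything taken from the actual specification, and the listed interface signals are merely examples: the housing only ever talks to the processing board through the fixed interface, so swapping boards never forces changes on the housing side, and vice versa.

    # Conceptual model of the EOMA68 split; not derived from the actual spec.

    class ProcessorBoard:
        """CPU, RAM, storage and their drivers live on the removable board."""

        def __init__(self, soc: str):
            self.soc = soc

        # The board only exposes the fixed interface signals (examples below);
        # everything SoC-specific stays behind this boundary.
        def interface(self) -> dict:
            return {"video": "RGB/TTL", "usb": 2, "gpio": 8, "sata": 1}


    class Housing:
        """Laptop, tablet or router shell that accepts any compliant board."""

        REQUIRED = {"video", "usb"}

        def accepts(self, board: ProcessorBoard) -> bool:
            return self.REQUIRED <= board.interface().keys()


    laptop = Housing()
    old_board = ProcessorBoard("current SoC")
    new_board = ProcessorBoard("hypothetical freer SoC")
    assert laptop.accepts(old_board) and laptop.accepts(new_board)
    # Upgrading means swapping boards; the housing and the software
    # targeting the fixed interface never need to change.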

So in this way EOMA68 enables the development of free software for this kind of hardware, but it also increases the ability to design free hardware. If a freer option for chips becomes available, the only step involved in freeing the end-user devices is to develop and build new processing boards. This is a far easier task than incorporating all the interfaces (like screen drivers), and the production count can be higher since the boards are more widely applicable. Also, while developing a new processing board, it can be tested on existing EOMA68 platforms without having to develop specific test setups. For instance, new processing boards can be beta-tested by passing them around between people who own EOMA68-compatible devices. Likewise new EOMA68 platforms can be developed and tested by comparing the behaviour of different processing boards: say a driver is functional on a common 64-bit architecture, then the driver on another architecture can be tested to produce the same results, all without creating specific setups for each hardware component.

In addition, the standard brings upgradeable and even shared hardware to the table. The PCMCIA-based boards can be handled by consumers without risking ESD damage, and the interface allows repeated plugging and unplugging without wearing out the contacts. So if your laptop gets slower, you just buy a new board for it. And by switching your boards around like dominoes you can consequently upgrade your netbook, tablet, router or even your smartphone as well. You can leave the now spare processing board on the shelf as a back-up, or buy an additional platform to fill another need. This type of upgrading reduces cost and e-waste. Another option would be true continuity: carrying one processing board around and changing its surroundings depending on the need. You could switch to a device with another screen type to work out in the sun, or use the built-in connectors of the processing board to watch your holiday pictures at a friend's place.

So how can you get on board with this? Well, a crowdfunding campaign is about to launch in order to bootstrap this new paradigm. And just as the system is meant to enable, a new and freer processing board is already in development.

IEEE Open Source Software Task Force

Sometimes an open initiative just 'clicks', because it fills a growing need and does so in the right way. Great non-software examples I have come across in recent history are Wikipedia, OpenStreetMap, RepRap, DIY Book Scanner, WikiHouse, OpenDesk and EOMA68. Just yesterday I experienced another such 'click' initiative: the IEEE Task Force on Open Source Software for Power Systems. This initiative has a clear mission of encouraging free software adoption in this rather conservative field:

This Task Force explores the potential for open source software (OSS) in the Power Engineering Society (PES). The mission of the Task Force is twofold:

  1. diffuse the philosophy of OSS in the power systems community
  2. promote OSS for the benefit of the PES ranging all the way from simple pedagogical OSS to commercial-grade OSS.

— IEEE Open Source Software Task Force

Having a power systems background, ever since I became aware of free software I have wondered why so little free software is developed and used in the field of power systems. This concerns software for calculations and simulations, but also operational systems like SCADA, which could certainly benefit from having more eyes on the code. Moreover, calculation and simulation software is entering the operational domain now that the increased number of measurements and the available computation power allow for real-time grid analysis.
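
To give a small taste of what free calculation software in this field boils down to, here is a minimal DC power flow for a hypothetical three-bus network in plain Python with NumPy. It is the textbook approximation (lossless lines, flat voltage profile) and not taken from any of the packages on the task force's list.

    # Minimal DC power flow for a hypothetical 3-bus example network.
    # Textbook approximation: lossless lines, |V| = 1 p.u., small angles.
    import numpy as np

    # Lines as (from_bus, to_bus, reactance in p.u.); bus 0 is the slack bus.
    lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
    n_bus = 3

    # Net injected power per bus in p.u. (generation positive, load negative).
    P = np.array([0.0, -0.6, -0.4])  # the slack injection follows from the rest

    # Build the nodal susceptance matrix B.
    B = np.zeros((n_bus, n_bus))
    for i, j, x in lines:
        b = 1.0 / x
        B[i, i] += b
        B[j, j] += b
        B[i, j] -= b
        B[j, i] -= b

    # Solve the reduced system B' * theta = P for the non-slack buses
    # (theta of the slack bus is fixed at 0).
    theta = np.zeros(n_bus)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

    # Resulting active power flow on every line.
    for i, j, x in lines:
        print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")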

In any case, power system software is becoming an ever more important part of the core business of power system development and management. Some vendor independence and collaboration in development therefore seem important and sensible. Current practice, however, indicates a low level of adoption by the industry, probably because free software has only recently come to the industry's attention, and because of the lack of companies offering support. The latter model has proven to work in the software industry, with Red Hat as a great example.

Two presentations listed from the 2009 panel sessions stood out to me, because they show the task force cares about software integration. An interchangeable data format was discussed which expands upon existing standards to let software programs tie in with each other more easily. Likewise GIS integration has been discussed, which is an important development in bridging the gap between the real world and the simulation model, since the scope of a power system is greater than its individual components.

The software list published by the task force certainly includes some projects I'll look into further. I hope the efforts of this task force and the listed projects will contribute to a bright power system future.

The fun of free software

Despite running Linux for over six years now, I only recently converted my machine to Debian Testing. My initial reason for running Testing was to obtain newer versions of packages I value, like the GNOME desktop environment, the LaTeXila editor and the Scilab simulation software. Right out of the box it was very satisfying to experience the progress that had been made since the last Debian Stable release. In Scilab, however, I ran into a problem with graphs not displaying as they should. Of course I filed a bug report, and with a workaround (writing SVG images) I was able to continue business as usual.

What I wasn't expecting is the level of excitement that comes from having a bug that was bothering you finally get solved. When I upgraded my packages today I found that this specific bug has been fixed, and even though it was a minor issue, it is amazing that all the people involved in patching the software and releasing it cared about my issue, and that so many other users will benefit from the patch as well. Seeing not just the larger updates but especially the smaller improvements sheds a different light on software development in the free software community, and I'd like to think it is very addictive, especially for more technical users, to be continually supplied with small improvements.

Circumventing Google on mobile

Nowadays there are many ways to circumvent Google's services on mobile, which is especially important to Android users who would like to take the next step in freeing their Android. There are other email providers, other PIM syncing services and other application distributors. However, I assume that sometimes a couple of non-free applications might be holding users back from freeing their Android, for instance because no free alternative is available or because their friends are tied into a non-free environment. Luckily the Linux Action Show made me aware of GooglePlayDownloader, a project which enables the user to download .apk files from the Google Play Store whilst circumventing the logging and syncing required by Google. This is of course a cat-and-mouse game, with the associated projects reverse-engineering the APIs and store navigation to keep track of this moving target. With most software creators targeting just the Google Play Store for Android applications, this is a valuable addition to the set of tools that aid in freeing mobile users.

Scalability of higher laws

In his book Walden, Henry David Thoreau writes about 'higher laws'. The specific examples he gives concern hunting and eating. Even though his statements intuitively seem truthful, the arguments don't scale very well to modern society. Specifically, he shares his view that in order for a person to mentally 'grow up', he'd best practise the less 'high' activities (like hunting) and learn by experience that the practice isn't 'high' enough. Taken this way, everybody should first make a lot of 'mistakes' in order to develop into a better self. However, society is built upon its social values and norms and on its technology. People can start from the lessons of other people, even those who lived decades ago. In society the scientific method, for example, is regarded as normal, and so is taking care of the environment. As for technology, people wouldn't buy inefficient (although maybe powerful) cars now that fuel-efficient engines are available. The amount of 'growing up' required for the more basic aspects of life isn't what it used to be. That said, this growing up still seems possible and desirable, although its starting point is the current set of practices of society and technology.

A concrete life purpose

At TEDxMalibu, Adam Leipzig gave a talk on defining your life purpose by referring to more concrete aspects. If you have a sense of your life purpose but aren't able to make it concrete, consider clarifying the five aspects that make it up:

  1. Who are you?
  2. What do you do?
  3. Who do you do it for?
  4. What do those people want or need?
  5. How will they change as a result?