
2014

The smallest valid Wikipedia article

Lately I’ve often wondered whether I should contribute my newly learned knowledge to Wikipedia by creating a dedicated article. Creating decent articles is, however, a lengthy and quite time-consuming process. So how about setting a mental note by creating the planned article page and contributing only the most basic information, with the intent of improving it later on? This raises the question of whether such a strategy can be considered good practice. Fortunately I found a Wikipedia page going into further detail on the definition of a ‘stub’, which basically is a short article whose value can be questioned. A key point for me to take away from this article is the fact that Wikipedia is not a dictionary. This comes down to the key distinction that a dictionary is about the word(s), whilst Wikipedia is about the subject. So in order to start a new article the right way, some decent work will have to be done in advance.

The lack of lens mount standards

Ever since I started looking into buying my current mirrorless camera, I have been amazed at how many different lens mounts are available. A reason for deciding on my current PEN E-P1 is the fact that it uses the Micro Four Thirds system, which is currently supported by a multitude of vendors, in contrast to most other lens mounts. In addition, the short flange focal distance allows many other types of lenses to be mounted by using adapters. By contrast, the lens mounts of well-established brands like Nikon and Canon are only available on their own camera bodies, although they do accept lenses from third-party manufacturers like Sigma and Tamron, which make their lenses available for most types of mounts. The fact that these different mounts exist is, however, not without reason.

As a matter of fact, the topic of lens mounts is a very interesting one. The interface can be defined in two dimensions, namely the physical dimension and the electronic dimension covering the communication protocol. The physical dimension is defined by the flange focal distance (partly determined by the option of using a mirror), the locking mechanism, the ring diameter, the film or sensor size, the electronic contact positioning and the optional lens motor gear. The electronic dimension is defined by the power and information that need to be exchanged between the lens and the camera. This in turn depends on whether various camera features are built into the body or into the lens: optical image stabilization can be implemented in either, the type of motor can vary, the focusing actuation depends on the type of autofocus used (contrast or phase detection), not all lenses are able to zoom, focusing might be done via the camera by focus-by-wire rather than by a direct manual focus ring on the lens, and some lenses might feature no motor at all (fully manual). Building a system which suits all use cases would however be very cumbersome and would involve many compromises. Digital Photography Review sheds light on this aspect in their review of the Olympus OM-D E-M1, which was the first camera in the Micro Four Thirds system to feature phase detection:

The key difference between contrast-detection autofocus (as generally used in compacts and mirrorless cameras), and phase detection (as traditionally used in DSLRs) is that phase detection is able to assess how out-of-focus the image is, and determine directly how far and in what direction the lens needs to move its focus group to achieve a sharp image. Contrast detection has to scan through at least part of its focus range to find the point of optimal focus.

This difference totally changes the way lenses need to be designed – those optimised for phase detection need to be able to race to a specified location very quickly, whereas contrast detection lenses need to be able to scan back and forth very quickly. Traditionally, very few lenses designed for phase detection have coped very well with the subtle, scanning motion required for contrast detection. Those designed for Four Thirds SLRs could autofocus on previous Micro Four Thirds cameras, but only slowly and hesitantly.

— Digital Photography Review, Olympus OM-D E-M1 review

Since phase detection requires a different lens construction, it is only of use with lenses designed for it. So in order to facilitate both types of autofocus and the corresponding types of lenses, a hybrid sensor should be used. Optical image stabilization can likewise sit in either the body or the lens: within the Micro Four Thirds system, Olympus has placed stabilization in the camera body, whilst Panasonic has placed it in the lens. This implies that combining a Panasonic body with an Olympus lens would give you no optical stabilization, whilst an Olympus body with a Panasonic lens would give you two stabilization systems, requiring the photographer to disable one of the two. Then there are also the differences in sensor size and the consequent lens diameter needed to let in plenty of light.
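To make this concrete, below is a minimal sketch in Python of a mount modelled along the two dimensions described above, and of deciding which stabilization system to enable for a given body and lens combination. The class names, fields and numbers are illustrative assumptions of mine, not any vendor’s actual specification or protocol.

    # Illustrative model only: a mount as a physical plus an electronic dimension,
    # and a rule for picking one active stabilization system per combination.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PhysicalMount:
        name: str
        flange_focal_distance_mm: float   # approximate value, for illustration

    @dataclass
    class Lens:
        mount: PhysicalMount
        has_stabilization: bool           # e.g. typical Panasonic MFT lenses
        focus_by_wire: bool

    @dataclass
    class Body:
        mount: PhysicalMount
        has_stabilization: bool           # e.g. Olympus in-body stabilization

    def stabilization_source(body: Body, lens: Lens) -> str:
        """Prefer a single active system so the two never fight each other."""
        if body.mount != lens.mount:
            raise ValueError("incompatible mounts")
        if body.has_stabilization and lens.has_stabilization:
            return "lens (disable the in-body system)"
        if body.has_stabilization:
            return "body"
        if lens.has_stabilization:
            return "lens"
        return "none"

    mft = PhysicalMount("Micro Four Thirds", 19.25)
    print(stabilization_source(Body(mft, True), Lens(mft, False, True)))   # body
    print(stabilization_source(Body(mft, False), Lens(mft, False, True)))  # none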

An area yet untouched is the legacy of lenses. Nikon, for example, is famous for withstanding the temptation of changing the mount when electronics were introduced, in contrast to Canon. This leaves the F-mount as a well-supported format, although of course some variants exist due to the various electronic systems that came into being over time. Basically Nikon showed the clear distinction between the mechanical and electronic dimensions described earlier, by making the electronics adapt to the physical system rather than redefining both. With their move to the EF mount, Canon destined a large set of lenses to be abandoned over time. Dealing with legacy features is a whole different debate.

So in conclusion, even apart from all the financial motivations for creating a lock-in with a certain lens mount or adopting new mounts, there are many dependencies tied to a lens mount. I believe that at present Micro Four Thirds has a leg up by setting a broad standard and making it relatively easy to fit lenses with different mounts. Then again, it is yet another new standard, leading to further diversification rather than convergence. Either way, I have set my mind on supporting the Micro Four Thirds crusade, hoping that the standard might expand to suit other types of electronic operation (since the mechanical design is set) in order to become the unified standard I believe is needed.

Truly user-centered design

Federico Mena Quintero just published an extensive write-up about the reasons for having the Linux desktop (GNOME) focus on user security and user safety. Federico in turn was inspired by the talk by Matthew Garrett at GUADEC 2014, as featured by Linux Weekly News. Using the parallel of city safety, Federico attempts to describe the way in which the total (desktop/city) environment benefits from the established level of security and the achieved level of safety. I’d like to think that security is about the hard limits, whilst safety is about the soft limits, both of which can be crossed depending on the experience of the user. In a sense, serving a secure and safe freedom-oriented system would make it impossible for users to compromise their own safety, security and privacy unless specific additional features are enabled. Of course the details of these features should be made very clear to the user, in order to avoid users unknowingly endangering themselves. The small bits which can be worked on at GNOME are listed in the meeting documents of the GNOME safety team.

API maturity model

I was tipped off about this great blog post by Martin Fowler. You might refrain from using the third level due to performance and bandwidth concerns, but from an API perspective it surely is very flexible and, above all, self-documenting.
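For reference, the post describes the Richardson Maturity Model, whose third level adds hypermedia controls: every response carries links describing what the client may do next. Below is a hedged sketch of what such a level-three response could look like; the resource and link names are made up for illustration.

    # Hypothetical level-3 (hypermedia) response: besides the data itself it
    # lists the follow-up actions, which is what makes the API self-documenting
    # at the cost of a larger payload.
    import json

    appointment = {
        "id": 42,
        "doctor": "mjones",
        "start": "2014-09-04T14:00:00Z",
        "_links": {
            "self":       {"href": "/slots/42"},
            "cancel":     {"href": "/slots/42/cancellation", "method": "POST"},
            "reschedule": {"href": "/slots/42/reschedule", "method": "POST"},
        },
    }

    print(json.dumps(appointment, indent=2))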

Great insights from Flock 2014

This month the Fedora Flock conference was held in Prague. Even though I haven’t used Fedora in a while now, the conference was interesting to me because of the other topics discussed. I already reported on the Novena presentation, but below I’ve listed the other presentations.

Free And Open Source Software In Europe: Policies And Implementations – Gijs Hillenius

This presentation gives a nice overview of various initiatives around free software and how well organizations transition towards free software. The statement about the mayor of Munich has unfortunately been amplified by the Linux press, but judging from this presentation it seems that the transition is properly locked into processes and there won’t be a change of plans any time soon. Gijs also gave other great examples of free software being used, of which the Gendarmerie struck me by its scale and determination. Of course the main issue in Europe related to this topic is the reluctance of the European Commission to even consider free software, which Gijs covers as well.

Building an application installer from the ground up – Richard Hughes

The presentation gives a nice overview of the process of solving the known problem of making legacy systems compatible with the new one. Basic considerations were how to deal with local and remote information storage and how to deal with fonts, plugins and terminal commands. I believe the team did a great job by keeping a local focus (including search), incorporating development efforts in the ranking, refraining from including all terminal commands in the software center (which would totally clutter the interface) and supplying content for the premium applications. This will help make the software center a premium tool which will not only aid casual users, but will also be powerful for power users.
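As a purely illustrative sketch of that kind of ranking (the fields and weights here are assumptions of mine, not GNOME Software’s actual algorithm), a search result could be scored with a preference for local matches and a boost for recent development activity:

    # Toy ranking in the spirit described above: local metadata first,
    # plus a small bonus for applications with recent releases.
    from dataclasses import dataclass

    @dataclass
    class AppResult:
        name: str
        text_match: float        # 0..1 relevance of the search match
        is_local: bool           # metadata already available locally
        months_since_release: int

    def rank(result: AppResult) -> float:
        score = result.text_match
        if result.is_local:
            score += 0.3                                              # local focus
        score += max(0.0, 0.2 - 0.01 * result.months_since_release)  # active development
        return score

    hits = [
        AppResult("gimp",      0.9, True,  2),
        AppResult("abandoned", 0.9, False, 60),
    ]
    for app in sorted(hits, key=rank, reverse=True):
        print(app.name, round(rank(app), 2))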

Better Presentation of fonts in Fedora – Pravin Satpute

Overall I didn’t find this presentation a strong one. It made me aware of a new fact, namely that developers are able to choose their own fonts, regardless of the fonts included in the distribution or supplied by the user. However, I’m not quite sure whether Pravin perhaps meant that developers aren’t able to develop for a specific set of fonts, because that is decided later on by the selected theme and the font settings. Halfway through the presentation there was a small discussion about the font feature in the new software center, where the main questions concerned grouping fonts and how to deal with example texts. These questions however remain unanswered. Pravin provided a link to his font portal, which seems to be aimed at providing additional features like comments and character support views on top of a concept like the Open Font Library. The key point I took away from this presentation is that work is needed on creating a generic overview covering font characteristics, character support, license information, readability, and possibly user reviews.

GNOME: a content application update – Debarshi Ray

This presentation gives a great overview of GNOME’s effort to come up with a set of applications to manage content, much in the same way Adobe Bridge introduced the concept a while ago for the Adobe Creative Suite. It is not about viewing or editing, and it is not about the files; it is about the content from various sources and managing it. One of the powerful concepts explicitly highlighted is the ‘reversible delete’: rather than explicitly asking for confirmation, the application lets you undo an accidental deletion. Furthermore, the secondary click (right click) has been removed to better suit touchscreen controls. Debarshi also gives a hint of things to come concerning sharing via various sharing points, managed in the settings dialog. The mock-up also shows regular applications like GIMP and Inkscape being covered by this concept of sharing points, which seems odd but would help unify the management concept.
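A minimal, framework-agnostic sketch of that reversible-delete idea is given below; the class, grace period and output are illustrative, not GNOME’s actual implementation.

    # Instead of asking for confirmation up front, the item is hidden immediately
    # and only removed for real once a grace period passes without an undo.
    import threading

    class ReversibleDelete:
        def __init__(self, items, grace_seconds=5.0):
            self.items = items
            self.grace_seconds = grace_seconds
            self._pending = {}   # item -> Timer for its scheduled removal

        def delete(self, item):
            """Mark the item as deleted and schedule the real removal."""
            timer = threading.Timer(self.grace_seconds, self._commit, args=(item,))
            self._pending[item] = timer
            timer.start()
            print(f"'{item}' deleted (undo available for {self.grace_seconds:.0f}s)")

        def undo(self, item):
            """Cancel a pending removal, as triggered by an 'Undo' button."""
            timer = self._pending.pop(item, None)
            if timer:
                timer.cancel()
                print(f"'{item}' restored")

        def _commit(self, item):
            self._pending.pop(item, None)
            self.items.remove(item)
            print(f"'{item}' permanently removed")

    photos = ["holiday.jpg", "receipt.png"]
    trash = ReversibleDelete(photos, grace_seconds=1.0)
    trash.delete("receipt.png")
    trash.undo("receipt.png")   # the file stays in place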

How Is the Fedora Kernel Different – Levente Kurusa

This presentation was beyond my state of knowledge about kernels and the Linux kernel in particular. It did however highlight how the Linux kernel can be tweaked to meet different needs and how different distributions make different decisions on these settings. In general, however, I believe most users would never be able to distinguish these kernels, just like I wouldn’t. I’d be more struck by decisions on a higher level, like the default desktop environment and the package manager.

Procrastination makes you better – Life of a remotee – Flavio Percoco

This presentation gave a brief and humorous overview of the struggles of working remotely, covering some tips on improving your working life. It is strong in the sense that it is a very personal story that many remote workers will relate to, although it offers only limited pointers to other material on working remotely.

UEFI – The Great Satan and you – Adam Williamson

This was a very explanatory presentation covering both the technology of UEFI and Secure Boot and the practical implications. Since I have no experience with a machine featuring UEFI, I had no idea how much of a pain dealing with UEFI and Secure Boot would be. It seems this very much depends on the machine being used, although best practices exist. It also clarified the controversy around Secure Boot: keys other than Microsoft’s could basically have been included, but unfortunately no other party was willing to take on the job. Surely a presentation worth recommending.

UX 101 – Practical usability methods that everyone can use – Karen T.

I found this presentation to be a great one, clearly coming from a design side rather than a development side. The presentation gives a concise overview of how to achieve a great interface, which is great to watch again before taking on a new project involving design. I believe anyone involved in user interfaces can learn from this overview.

Yubikeys – Nick Bebout

This presentation covers the YubiKeys by Yubico, which can be used for two-factor authentication. The newer model, called the YubiKey NEO, also offers hardware-based PGP. The presentation covered some aspects specifically targeted at Fedora users, but it did a decent job of covering the features of the YubiKey and even of smart cards. Including a demo, this presentation offers plenty of pointers to delve into the various aspects of key management and two-factor authentication.
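As a small illustration of one of those aspects: the YubiKey also supports an HMAC-SHA1 challenge-response mode, and the sketch below simulates both sides of that exchange with Python’s standard library. The shared secret here is made up; on real hardware it never leaves the token.

    # Minimal challenge-response sketch: the 'device' side is simulated in
    # software, the 'server' side recomputes the HMAC and compares safely.
    import hmac, hashlib, os

    SHARED_SECRET = bytes.fromhex("303132333435363738393a3b3c3d3e3f40414243")  # illustrative

    def device_response(challenge: bytes) -> bytes:
        """What the token computes internally when it receives a challenge."""
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha1).digest()

    def verify(challenge: bytes, response: bytes) -> bool:
        """Server-side check: recompute the HMAC and compare in constant time."""
        expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha1).digest()
        return hmac.compare_digest(expected, response)

    challenge = os.urandom(32)                              # fresh, unpredictable challenge
    print(verify(challenge, device_response(challenge)))    # True
    print(verify(challenge, b"\x00" * 20))                  # False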

Richard Stallman reformatted

This year Richard Stallman gave a presentation at TEDxGeneva, which is now available on video. Having seen my fair share of Stallman presentations, it is quite noticeable how Stallman is forced to keep his presentation concise and keep the content aligned with the presented illustrations. Despite this struggle, the presentation gives a good summary of the many aspects of free software, and the iconic illustrations make it very lively and understandable. Surely a video to recommend to others. (Despite Stallman’s explicit note to refrain from using the term ‘open source’ I will classify it this way, mainly because the term ‘free software’ is often taken to include freeware, which is even more harmful than open-source software.)

Keys all over again

I just updated my GnuPG setup by generating a new key pair from scratch. Contrary to last time, I took care to keep my main key private and to explicitly use subkeys for signing and decrypting. Even though a common practice has been established, it is quite a challenge to understand the different options and the ways in which different configurations might be better or worse. I took some advice by looking at the GNU Privacy Handbook, a recent post by Simon Josefsson, a Riseup article on best practices, a list of instructions on strictly working with a live OS, and an outdated manual for keysigning parties. Strictly signing offline feels like a hassle, but I’m sure I will get by.
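For what it’s worth, the core of that setup can be captured in a short sketch: export a full backup of the secret key for offline storage, and export only the secret subkeys for the machine used day to day. The key ID below is a placeholder, and the exact procedure should be checked against the GNU Privacy Handbook before relying on it.

    # Hedged sketch of the offline-primary-key workflow, driving the gpg
    # command line from Python. The flags used (--armor, --output,
    # --export-secret-keys, --export-secret-subkeys) are standard GnuPG options.
    import subprocess

    KEY_ID = "0xDEADBEEF"  # placeholder for the real key ID

    def export(args, outfile):
        subprocess.run(["gpg", "--armor", "--output", outfile] + args, check=True)

    # Full secret key (primary + subkeys): keep this only on offline storage.
    export(["--export-secret-keys", KEY_ID], "primary-offline-backup.asc")

    # Secret subkeys only: this is what goes onto the day-to-day machine,
    # so signing and decrypting work while the primary key stays offline.
    export(["--export-secret-subkeys", KEY_ID], "subkeys-daily-use.asc")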

One of the intended improvements I wasn’t able to work out was using different passphrases for my subkeys. I found an email on the GnuPG user mailing list, but those instructions didn’t make it happen, so this remains to be worked out. The article by Simon Josefsson also triggered some thoughts on more advanced configurations, such as adding a picture and refraining from using 64-bit based key sizes. So there are still ways of improving the quality of the configuration, although at the very least this change was a step in the right direction.

Open furniture

During the last decades a slight change has occurred in the field of furniture, with the rise of modular furniture. Modular furniture has great benefits, since it can often be rearranged to fit changing needs, allowing people to hold on to their furniture much longer. Examples of such systems are Ikea’s Pax and Besta, Lundia, and my personal favorite, Vitsœ. I do however believe that the current situation is somewhat unfortunate, since the different interfaces result in a system lock-in which limits the available components and the flexibility of the system. This therefore leaves room for improvement, giving rise to my idea.

By getting the implicit interfaces of furniture and the compatibility of the various components documented in a wiki format, people might make a more conscious decision for a particular system, which would contribute to its overall time of use. I imagine this covering, for instance, the panel sizes, the measurements between the various screw holes and the similarity between systems. Furthermore, I’m hoping that such information will assist companies and individuals in creating additional furniture components compatible with existing systems, in order to aid consumers in utilizing their system. It might even give rise to converters that bridge the gap between systems. Exemplary for working with existing standards are the 3D-printable universal construction kit adapters.
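As a sketch of the kind of structured record such a wiki could hold (all names and numbers below are placeholders, not measured values), each system could be described by its interface dimensions, after which a naive compatibility check becomes possible:

    # Placeholder data model for a shelving-system interface, plus a rough
    # interchangeability check between two systems.
    from dataclasses import dataclass

    @dataclass
    class ShelfInterface:
        system: str
        panel_width_mm: int
        hole_pitch_mm: int        # vertical distance between mounting holes
        hole_diameter_mm: float

    def shelves_interchangeable(a: ShelfInterface, b: ShelfInterface) -> bool:
        """Very rough check: same hole pitch and diameter, same panel width."""
        return (a.hole_pitch_mm == b.hole_pitch_mm
                and a.hole_diameter_mm == b.hole_diameter_mm
                and a.panel_width_mm == b.panel_width_mm)

    system_a = ShelfInterface("System A", panel_width_mm=600, hole_pitch_mm=32, hole_diameter_mm=5.0)
    system_b = ShelfInterface("System B", panel_width_mm=600, hole_pitch_mm=25, hole_diameter_mm=5.0)
    print(shelves_interchangeable(system_a, system_b))   # False: different hole pitch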

Bridging the smart grid gap

In the industry and academia around power systems there is a lot of buzz around the smart grid. As a matter of fact, the smart grid has become the norm on which to base predictions and proposals. And despite all the marketing buzz, it is truly a great cause for engineers to pay attention to. The smart grid paradigm unleashes extensive engineering efforts, along with the supporting creativity and financial funds. The underlying motivation is however rarely questioned. Why even have a smart grid? Does it have any significance?

The mechanics of the grid operators are, on the other hand, mostly unmoved by all the buzz. They also hardly need to be, since the core of the electrical grid still needs maintenance and expansion. Methods change, and some additional measurement and control systems might have to be installed. Generally, however, not much has changed.

So there is a gap, and during my graduation project I often cross it. A gap which is totally logical once you see where both sides are coming from. Over the years the mechanics have established a way of building a very reliable and quite optimal electrical grid. There wouldn’t be any direct harm in continuing in this fashion, making some minor adjustments to planning and management if needed. The other side of the spectrum is, however, looking at the frightening trend of distributed generation and the nearly unlimited possibilities of IT systems.

Even though these worlds are closing in on each other, they speak different languages. One side considers a lifetime of 10 years a maximum, whilst the other makes exploitation plans for 50 years or more. One side sets out to build a highly reliable system which requires a minimum of management, whilst the other side would like to automate all possible management tasks. One side doesn’t get scared of loads of wires, whilst the other side is cautious of adding even a single unnecessary conductor near a power system. One side would like to analyze information down to the microsecond, whilst the other side would only like to receive an indication if real action is needed. One side is concerned with the power systems in place, whilst the other is concerned with the procedures and management around them. Of course these examples are somewhat exaggerated, but as a matter of fact the backgrounds of both sides are very different.

The real threat to smart grid adoption is when the mechanics are overrun by distributed generation and when management and academics come up with impractical solutions to non-existent problems. It is the problem of not talking and not having the smart grid discussion.

So at last there seems to be a reason not only for the smart grid, but more importantly for all its buzz.

The EU copyright consultation

A while back a report was published on the responses given to the EU copyright consultation. Despite the length of the document (101 pages) it is very readable, and as a matter of fact it gives a decent overview of the different viewpoints involved in this issue. I’ve given some highlights below.

Institutional users on the terms of protection, making the case that in most cases the economic value is exhausted before the end of the copyright term:

Institutional users generally believe that the current terms are inappropriate and should be shortened. … They point out that in many cases, the costs of the digitisation of copyright protected works that are no longer commercially exploited exceeds the potential economic value of these works.

Some of the authors and performers reacting to the same issue apparently don’t seem to get that copyright is defined to extend for a set period after the death of the author:

The vast majority of authors and performers consider that the term of protection currently set out in EU law is appropriate and should not be shortened. However, some respondents in these categories favour a longer term of protection, which, they say, would better reflect longer life expectancy.

I was glad to see disabilities mentioned in the section on copyright exceptions, although I would assume the real questions arise when third-party service providers aid in transforming content to digital or audible form. Furthermore, it struck me that there seems to be a lack of agreement amongst member states on nearly all of the issues. This would therefore further complicate the process of copyright reform and unification, resulting in a continuation of the status quo. An issue I wasn’t quite aware of is that even though an exception exists for educational institutions, this often results in problems when courses are made available to an outside audience, which hinders the adoption of new ways of teaching. At the very least a clear stance should be taken on such cases. As a student it is painful to see end users argue for access to scientific articles without having to go through all the paywalls that have been put up by the various journals. Staying on top of recent developments is important to all professionals in academia, probably to anyone studying, and therefore also to society as a whole. Limiting the flow of the information our modern society has been built upon can therefore be considered very coercive.

After reading through the document, I would summarize that on the one hand the authors, management organizations and publishers are quite satisfied with the way the system is set up, whilst the end users desire more freedom and increased clarity. A large part of these end users, however, seem to acknowledge that copyright should be kept in order to keep the system going. So does this mean that copyright in its current form isn’t serving society the way it was originally intended?