
It’s just technology

Of course I also observed the live coverage of the recent Apple event. But despite the various improvements to their product range and the addition of the watch, it simply wasn’t the magic I was hoping for. That evening it struck me that technology is only about the way of doing things: it’s a tool, it just helps. Melvin Kranzberg already knew that technology is neither good, nor bad, nor neutral, thereby making it subject to its use. Marketing technology as a life-changer the way Apple does therefore seems a bit naive. Sure, insights into your health will help grow awareness, and being given corrective tips will help maintain habits. But the bottom line is that you can only change your life yourself. Without intention, technology will hardly be any good. You might as well achieve the same results without that technology, which might even be better since there would be no technology dependence. From this viewpoint, the world all of a sudden seems to make a lot of buzz about the mere way we do things. I found the bookbook commercial by IKEA interesting in this regard.

Educational economics

Thinking about the brave new world of free education based on freely available information and educational programs, it became clear to me that there is a specific kind of economics to education. The basic, time-tested principle of education is a chronological process: starting a suitable education, passing a cycle of learning and testing, passing the final test, getting the degree and finally enjoying the benefits it brings, while in some cases a renewal process has to be followed indefinitely. In practice this scheme is strongly reinforced by public opinion, which generally favors a degree over loose education, and by the fact that most educational programs are highly structured and regulated by various authorities. To put it in economic terms, this scheme leads people from an educational mortgage to educational rental.

This scheme is contrary to how most people finance their houses: most start out renting due to a lack of money and a need for flexibility. Once life has settled, a mortgage is the more logical step since it reduces unnecessary expenses in the long term. So how does this map onto education? The bulk of the learning starts out as a mortgage: a commitment is given to complete a certain educational program, and only if you deliver on that promise is the corresponding degree granted, which holds the ultimate value like a house would. If you fail to deliver, however, you will still have enjoyed the education (the living) but you will not end up owning the degree (the house). After obtaining a degree, upkeep is needed to keep knowledge up to date, and it might even be mandatory in the form of regular tests or programs. So having gained the degree would leave you learning (paying rent) many times over.

Considering that we can only learn so much and that our time spent on education is rather limited, we have to either limit our educational commitments or rethink this paradigm. Let’s start with the first option: what if we only learned what we really needed? That is contrary to our current process: the young brain is fed a large amount of generic knowledge, and since this knowledge predetermines the available options for work, the main decisions are already made in some regard. We could however ‘start with the end in mind’ and focus our education from the start in the direction we desire, prioritizing depth over breadth in order to reduce the level of upkeep. This again is contrary to current practice. In addition, shifts in our preferences would have to be translated instantly into shifts in education.

This leads us to the second possibility: rethinking the paradigm. What if our culture strongly valued gained knowledge over gained degrees and had people learn by doing, in order to speed up the cycle in which preferences in work translate into demand for educational topics? This would imply education in small, incremental steps covering complete topics, including the additional information necessary to understand the main material rather than putting that information into a different course. It should also be clear how topics relate to each other, in order to make thoughtful decisions on the next educational steps.

This flexible way of educating would require a different student mindset, a different kind of study material and a different kind of degree valuation, but I’d like to think that with the aid of technology this is not only possible, but will also become reality.

Putting your knowledge upstream

The concept of upstream commits is well known in the world of free software, since it is a very effective way of having others expand on your personal contributions, keep them up to date and improve their quality. I propose applying the same principle to general knowledge as well. Consider just how many books and notebooks each of us keeps in order to track our personally gained knowledge. In most cases this knowledge is very specific since it concerns our professional occupation, which is only one of many diverse occupations. This way of working is actually rather wasteful: information is lost once its owners dismiss it, it is only ever available in those private forms, making it hard for others to gather the same information, and the process of gaining that information requires others to basically start from scratch.

Taking the generic part of your personal information and putting it upstream will help break this cycle of waste and build a freely available body of information which can easily be improved upon. If nobody can add to it, it will remain static, but if people can, your information can be put into context and improved upon. Taking this to a radical level would basically mean that sites like Wikipedia become your notepad. Of course this would require a different kind of note-taking with regard to proper phrasing and referencing. Also, it wouldn’t be as easy to relate information to your other information, unless those ties are generally relevant as well. The only valid exception to this strategy I can think of is confidential information.

The smallest valid Wikipedia article

Lately I’ve often wondered whether I should contribute my newly learned knowledge to Wikipedia by creating a dedicated article. Creating a decent article is, however, a lengthy and quite time-consuming process. So how about leaving a mental note by creating the planned article page and contributing only the most basic information, with the intent of improving it later on? This raises the question of whether such a strategy can be considered good practice. Fortunately I found a Wikipedia page going into further detail on the definition of a ‘stub’, which is basically a short article whose value can be questioned. A key point for me to take away from this page is the fact that Wikipedia is not a dictionary. This comes down to the distinction that a dictionary is about the word(s), whilst Wikipedia is about the subject. So in order to start a new article properly, some decent work will have to be done in advance.

The lack of lens mount standards

Ever since I started orienting myself on buying my current mirrorless camera, I have been amazed at how many different lens mounts are available. One reason for deciding on my current PEN E-P1 is the fact that it uses the Micro Four Thirds system, which is currently supported by a multitude of vendors, contrary to most other lens mounts. In addition, the short flange focal distance allows many other types of lenses to be mounted using adapters. By contrast, the lens mounts of well-established brands like Nikon and Canon are only available on their own camera bodies, although they do accept lenses from third-party manufacturers like Sigma and Tamron, which make their lenses available for most types of mounts. The fact that these different mounts exist is however not without reason.

As a matter of fact, the topic of lens mounts is a very interesting one. The interface can be defined in two dimensions, namely the physical dimension and the electronic dimension covering the communication protocol. The physical dimension is defined by the flange focal distance (partly determined by the option of using a mirror), the locking mechanism, the ring diameter, the film or sensor size, the electronic contact positioning and the optional lens motor gear. The electronic dimension is defined by the power and information that needs to be exchanged between the lens and the camera. This in turn depends on which camera features are built into the camera body and which into the lens. Optical image stabilization can be implemented in the camera body or in the lens, the type of motor can vary, the focusing actuation depends on the type of autofocus used (contrast or phase detection), not all lenses are able to zoom, focusing might be done via the camera using focus-by-wire rather than direct manual focus on the lens, and some lenses might feature no motor at all (fully manual). Building a system which suits all use cases would however be very cumbersome and would involve many compromises. Digital Photography Review sheds light on this aspect in their review of the Olympus OM-D E-M1, which was the first camera in the Micro Four Thirds system to feature phase detection:

The key difference between contrast-detection autofocus (as generally used in compacts and mirrorless cameras), and phase detection (as traditionally used in DSLRs) is that phase detection is able to assess how out-of-focus the image is, and determine directly how far and in what direction the lens needs to move its focus group to achieve a sharp image. Contrast detection has to scan through at least part of its focus range to find the point of optimal focus.

This difference totally changes the way lenses need to be designed – those optimised for phase detection need to be able to race to a specified location very quickly, whereas contrast detection lenses need to be able to scan back and forth very quickly. Traditionally, very few lenses designed for phase detection have coped very well with the subtle, scanning motion required for contrast detection. Those designed for Four Thirds SLRs could autofocus on previous Micro Four Thirds cameras, but only slowly and hesitantly.

— Wikipedia. Autofocus

Since phase detection requires a different lens construction, it is only of use when using phase-detection lenses. So in order to facilitate both types of autofocus and the corresponding types of lenses, a hybrid sensor should be used. Optical image stabilization could be optional on both the body and the lens: in the Micro Four Thirds system, Olympus has placed stabilization in the camera body whilst Panasonic has placed it in the lens. This however implies that combining a Panasonic body with an Olympus lens gives you no optical stabilization, whilst an Olympus body with a Panasonic lens gives you two forms of stabilization, requiring the photographer to disable one of the two. Then there are also the differences in sensor size and the consequent lens diameter needed to let in plenty of light.
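To make that last combination problem concrete, here is a minimal sketch in Python; the Body, Lens and stabilization_options names are purely my own illustration, not part of the Micro Four Thirds specification or any vendor API.

```python
# Illustrative only: these names are my own, not from the Micro Four Thirds
# specification or any vendor documentation.
from dataclasses import dataclass

@dataclass
class Body:
    vendor: str
    in_body_stabilization: bool   # Olympus places stabilization in the body

@dataclass
class Lens:
    vendor: str
    in_lens_stabilization: bool   # Panasonic places stabilization in the lens

def stabilization_options(body: Body, lens: Lens) -> list:
    """List the stabilization systems available for a given body/lens combination."""
    options = []
    if body.in_body_stabilization:
        options.append("in-body")
    if lens.in_lens_stabilization:
        options.append("in-lens")
    return options

# Olympus body + Panasonic lens: two systems, one of which must be disabled.
print(stabilization_options(Body("Olympus", True), Lens("Panasonic", True)))
# Panasonic body + Olympus lens: no stabilization at all.
print(stabilization_options(Body("Panasonic", False), Lens("Olympus", False)))
```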

An area yet untouched is the legacy of lenses. Nikon, for example, is famous for withstanding the temptation of changing its mount when electronics were introduced, in contrast to Canon. This leaves the F-mount as a well-supported format, although of course some variants exist due to the various electronic systems which came into being over time. Basically Nikon demonstrated the clear distinction between the mechanical and electronic dimensions described earlier, by making the electronics adapt to the physical system rather than redefining both. By moving to the EF-mount, Canon destined a large set of lenses to be abandoned in time. Dealing with legacy features is a whole different debate.

So in conclusion, even apart from all the financial motivations for creating lock-in with a certain lens mount or adopting new mounts, there are many dependencies bound up in a lens mount. I believe that at present Micro Four Thirds has a leg up by setting a broad standard and making it relatively easy to fit lenses with different mounts. Then again, it is yet another new standard, leading to further diversification rather than convergence. Either way, I have set my mind on supporting the Micro Four Thirds crusade, hoping that the standard might expand to suit other types of electronic operation (since the mechanical design is set) in order to become the unified standard I believe is needed.

Truly user-centered design

Federico Mena Quintero just published an extensive write-up about the reasons for having the Linux desktop (GNOME) focus on user security and user safety. Federico in turn was inspired by the talk by Matthew Garrett at GUADEC 2014, as featured by Linux Weekly News. Using the parallel of city safety, Federico describes the way in which the total environment (desktop or city) benefits from the established level of security and the achieved level of safety. I’d like to think that security is about the hard limits, whilst safety is about the soft limits, both of which can be crossed depending on the experience of the user. In a sense, serving a secure and safe freedom-oriented system would make it impossible for users to compromise their own safety, security and privacy unless specific additional features are enabled. Of course the details of these features should be made very clear to the user, in order to avoid users unknowingly endangering themselves. The small bits which can be worked on at GNOME are listed in the meeting documents of the GNOME safety team.

API maturity model

I was tipped off about this great blog post by Martin Fowler. You might refrain from using the third level due to performance and bandwidth concerns, but from an API perspective it surely is very flexible and, above all, self-documenting.
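Assuming the post in question is Fowler’s write-up of the Richardson Maturity Model, the third level is about hypermedia controls. A minimal sketch of the difference, with resource names, URIs and link relations made up purely for illustration:

```python
# Minimal sketch of a "level 2" versus "level 3" (hypermedia) response.
# The resource names, URIs and link relations are invented for illustration.

def appointment_plain(slot_id: str) -> dict:
    # Level 2: the client must know out-of-band which URIs to call next.
    return {"appointment": {"slot": slot_id, "doctor": "mjones"}}

def appointment_hypermedia(slot_id: str) -> dict:
    # Level 3: the response itself advertises the possible next actions,
    # which is what makes it self-documenting, at the cost of extra bytes.
    return {
        "appointment": {"slot": slot_id, "doctor": "mjones"},
        "links": [
            {"rel": "self", "uri": f"/slots/{slot_id}"},
            {"rel": "cancel", "uri": f"/slots/{slot_id}/cancellation"},
            {"rel": "changeTime", "uri": "/doctors/mjones/slots?status=open"},
        ],
    }

if __name__ == "__main__":
    print(appointment_hypermedia("1234"))
```

The client only needs to know the entry point; every further step is discovered from the links, which is where both the flexibility and the extra bytes come from.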

Great insights from Flock 2014

This month the Fedora Flock conference was held in Prague. Even though I haven’t used Fedora in a while now, the conference was interesting to me because of the other topics discussed. I already reported on the Novena presentation, but below I’ve listed the other presentations.

Free And Open Source Software In Europe: Policies And Implementations – Gijs Hillenius

This presentation gives a nice overview of various initiatives around free software and of how well organizations transition towards free software. The statement about the mayor of Munich has unfortunately been amplified by the Linux press, but judging from this presentation the transition is properly locked into processes and there won’t be a change of plans any time soon. Gijs also gave other great examples of free software being used, of which the Gendarmerie struck me by its scale and determination. Of course the main issue in Europe related to this topic is the reluctance of the European Commission to even consider free software, which Gijs covers as well.

Building an application installer from the ground up – Richard Hughes

The presentation gives a nice overview of the process of solving the known problem of making legacy systems compatible with the new system. Basic considerations were how to deal with local and remote information storage and how to deal with fonts, plugins and terminal commands. I believe the team did a great job by keeping a local focus (including search), incorporating development efforts in the ranking, refraining from including all terminal commands in the software center (which would totally clutter the interface) and supplying content for the premium applications. This will help make the software center a premium tool which will not only aid casual users but will be powerful for power users as well.

Better Presentation of fonts in Fedora – Pravin Satpute

Overall I didn’t find this presentation a strong one. It made me aware of a new fact, namely that developers are able to choose their own fonts, regardless of the fonts included in the distribution or supplied by the user. However, I’m not quite sure whether Pravin perhaps meant that developers aren’t able to develop for a specific set of fonts, because that is decided later on by the selected theme and the font settings. Halfway through the presentation there was a small discussion about the font feature in the new software center, where the main questions were how to group fonts and how to deal with example texts. These questions, however, remain unanswered. Pravin provided a link to his font portal, which seems to be aimed at providing additional features like comments and character-support views on top of a concept like the Open Font Library. The key point I took away from this presentation is that work is needed on creating a generic overview covering font characteristics, character support, license information, readability and possibly user reviews.

GNOME: a content application update – Debarshi Ray

This presentation gives a great overview of GNOME’s effort to come up with a set of applications to manage content, much in the same way Adobe Bridge introduced the concept a while ago for the Adobe Creative Suite. It is not about viewing or editing, and it is not about the files; it is about the content from various sources and managing it. One of the powerful concepts explicitly highlighted is the ‘reversible delete’: rather than explicitly asking for confirmation, you can undo an accidental deletion. Furthermore, secondary click (right click) has been removed to better suit touchscreen controls. Debarshi also gives a hint of things to come concerning sharing via various sharing points, managed in the settings dialog. The mock-up shown also covers regular applications like GIMP and Inkscape under this concept of sharing points, which seems odd but would help to unify the management concept.

How Is the Fedora Kernel Different – Levente Kurusa

This presentation was beyond my state of knowledge about kernels, and the Linux kernel in particular. It did however highlight how the Linux kernel can be tweaked to meet different needs and how different distributions make different decisions on these settings. In general, however, I believe most users would never be able to distinguish these kernels, just like I wouldn’t. I’d be more struck by decisions on a higher level, like the default desktop environment and the package manager.

Procrastination makes you better – Life of a remotee – Flavio Percoco

This presentation gave a brief and humorous overview of the struggles of working remotely, covering some tips on improving your working life. It is strong in the sense that it is a very personal story that many remote workers will relate to, although it offers only limited pointers to other material on dealing with working remotely.

UEFI – The Great Satan and you – Adam Williamson

This was a very explanatory presentation covering both the technology of UEFI and Secure Boot and the practical implications. Since I have no experience with a machine featuring UEFI, I had no idea how much of a pain dealing with UEFI and Secure Boot would be. It seems this very much depends on the machine being used, although best practices exist. It also clarified the controversy around Secure Boot: keys other than Microsoft’s could have been included, but unfortunately no other party was willing to take on the job. Surely a presentation worth recommending.

UX 101 – Practical usability methods that everyone can use – Karen T.

I found this presentation to be a great one, coming clearly from a design side rather than a development side. It gives a concise overview of how to achieve a great interface, and is well worth watching again before taking on a new project involving design. I believe anyone involved in user interfaces can learn from this overview.

Yubikeys – Nick Bebout

This presentation covers the YubiKeys by Yubico, which can be used for two-factor authentication. The newer model, the YubiKey Neo, also offers hardware-based PGP. The presentation covered some aspects specifically targeted at Fedora users, but it did a decent job of covering the features of the YubiKey and even of smart cards in general. Including a demo, this presentation offers plenty of pointers to delve into the various aspects of key management and two-factor authentication.

Richard Stallman reformatted

This year Richard Stallman gave a presentation at TEDxGeneva, which is now available on video. Having seen my fair share of Stallman presentations, I found it quite noticeable how Stallman is forced to keep this presentation concise and the content aligned with the presented illustrations. Despite this struggle, the presentation gives a good summary of the many aspects of free software, and the iconic illustrations make it very lively and understandable. Surely a video to recommend to others. (Despite the explicit note by Stallman to refrain from using the term ‘open-source’, I will classify it this way, mainly because free software includes freeware, which is even more harmful than open-source software.)