
Initial Developer Documentation for the Librem 5 Phone Platform

At Purism, we are just as excited as you are about the development boards that will be distributed this summer. Once a person receives their development board, their first thought will be “This is great! Now, what do I do with it?” In anticipation of the technical guidance that will be needed, the developer documentation effort has begun. You can already see the current state of the documentation at developer.shop.puri.sm

Goal of the Docs

The developer documentation is there as a guide to get a new developer set up and ready to start having fun! It will include plenty of examples to help you along toward whatever your goal with the development board may be.

There will be technical step-by-step instructions suitable for newbies and experienced Debian developers alike. The goal of the docs is to openly welcome you and light your path along the way with examples and links to external documentation. These examples will guide you from unpacking your development board to building and deploying flatpak applications to it—and eventually to including your package in PureOS. You can expect examples of how to use tools like flatpak, the IDEs used to build flatpak applications, and UI tools to help you design apps. The design of the Librem 5 phone interface will also be outlined in detail to provide insight into the human interface guidelines that the core applications will follow. Use the design section to learn about the gestures you can expect on the phone. Apps you design or port to the board can use these gestures too!

Please note that the docs are not a complete tutorial on how to use all of the development tools required. Existing documentation is available for each specific tool, so there is no need to reinvent the wheel. Instead, you will be directed to those resources online so you can research a specific tool further.

We welcome all test and development efforts that volunteers have to give, so there will also be information on volunteering and how to become a Purism community member in general.

Work in progress

The documentation is in a constant state of flux. Content is being added daily and reorganization still occurs from time to time. If you no longer see a page there, just search for it, because chances are it has been moved elsewhere within the site rather than removed. The aim is to write documentation that is helpful and intuitive, so it is important that an intuitive path is laid out. This developer documentation is still pretty new but is filling out quickly so that you are ready to hit the ground running with your new development board in June!

There will be a separate announcement in the next few weeks on this same blog to call for volunteers so get ready!

YouTube streaming with fewer interruptions and more privacy

In this short tutorial, I will show you how to watch your favorite YouTube videos without being annoyed by ads or random visuals popping up (like “annotations”). It will also improve your privacy, since your browser will not store history and cookies from watching those videos.

As a filmmaker, I think that displaying any kind of visual artifact (ads, comments/annotations…) on top of a video degrades the artwork. It is like going to a museum and seeing Post-its and stickers pasted all over the sculptures and paintings. How could a museum justify such a business model? Of course, YouTube is not a museum and I don’t want to discuss ethics or business models here (maybe in another post?). YouTube is also a great source of inspiration and learning for me—I simply want a better viewing experience.

The solution to improving your watching experience is GNOME MPV. It is a video player that lets you watch any video from your computer as well as remote videos like the ones from YouTube.

GNOME MPV is a front-end for the mpv player, which is built on FFmpeg and can read almost any video format. It has a very simple interface and it is very fast. It has become my main video player.

Install it

I don’t think that GNOME MPV is currently the default video player in PureOS, so you may need to install it. It is very easy: open the GNOME software center (“Software”) and search for “GNOME MPV”. From there, click on the “Install” button. When done, just launch it.

Watching a YouTube video

In GNOME MPV, click the “+” button at the top left of the window and select “Open Location”. A small dialog will appear.

In the text field, paste your YouTube video link and click “Open”. You can try with this example (a song from the Free Music Archive): youtube.com/watch?v=4M9Puanhdac

Of course, I cannot guarantee that it will always work. Be aware that YouTube remains in control of its videos and can decide what level of restrictions to apply to them. Also make sure that your system is up to date when problems occur: new versions with fixes may be available.

Play an entire YouTube playlist

You can also play an entire playlist. This time, just paste a YouTube playlist URL.

Note that for it to work, I had to remove the video ID from the URL and leave only the “list” attribute.
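That URL-trimming step can be sketched in the shell; this sed one-liner is just an illustration (the video ID here is a made-up placeholder):

```shell
# Strip the "v=<video id>" parameter, keeping only the "list" parameter
url="https://youtube.com/watch?v=VIDEO_ID&list=PLzCxunOM5WFJ3B0F5AnUCwMBTlyq64vKP"
echo "$url" | sed -E 's/watch[?]v=[^&]+&/watch?/'
```

Of course, you can just as easily edit the URL by hand before pasting it.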

You can test with this example: youtube.com/watch?list=PLzCxunOM5WFJ3B0F5AnUCwMBTlyq64vKP

From there, you may go to the menu button at the top right of the window (the 3 horizontal lines) and select “Toggle Playlist”.

I use YouTube as the example in this tutorial because it is the streaming service I use the most, but GNOME MPV also works with Vimeo and many other online streaming services. Just give them a try!

Your own music studio with JACK, Ardour and Yoshimi

Last week, after flashing coreboot on my Librem 13 (as a beta tester of the new coreboot install script), I ran into a few problems with my heavily tweaked PureOS install, so I decided to do a full, fresh install of PureOS 3.0 beta so that my environment would be much closer to what a new user would expect.

While re-installing my creative environment, I decided to write a quick tutorial on installing and using JACK, as it is not straightforward and there are not many tutorials about it on the Internet.

What is JACK?

JACK stands for “JACK Audio Connection Kit”. It is free software that lets you route audio input and output between different applications.

You can see it as a set of audio jacks that you will be able to plug between different programs.

For example, you can use it to connect a software synthesizer (Yoshimi, ZynAddSubFX) to a multitrack sequencer (Ardour, LMMS).
You can use it to connect audio editing software (Audacity) to video editing software (Blender).

Many applications have JACK support. Here is a list from the JACK website.

As an example for this tutorial, I will show you how to use Yoshimi with Ardour.

Install the applications

First of all, we need to install all the required applications:

sudo apt install qjackctl ardour yoshimi

Enable real time scheduling

Real time scheduling is a feature of Linux based operating systems that enables an application to meet timing deadlines more reliably. It is also considered a potential source of system lock-ups if your hardware resources are insufficient, so most of the time it is not enabled by default.

As mentioned on the JACK website, JACK requires real time scheduling privileges for reliable, dropout-free operation.

There is a well-detailed tutorial from the JACK team that describes how to enable real time scheduling on your system. I will go through the main steps here. They work for me on PureOS but should also work without problems on many other GNU/Linux distributions.

First of all, create a group called “realtime” and add your user to this group (replace USERNAME with your current login):

sudo groupadd realtime
sudo usermod -a -G realtime USERNAME

You can check that “realtime” is now part of the user’s groups by running the following command:

groups USERNAME
Also, make sure that the user is part of the “audio” group. If not, just add it:

sudo usermod -a -G audio USERNAME

On PureOS (and Debian), you should have a folder called /etc/security/limits.d. If so, just create and edit the file /etc/security/limits.d/99-realtime.conf with your favorite editor. (If you don’t see this folder, you need to edit /etc/security/limits.conf).

sudo vi /etc/security/limits.d/99-realtime.conf

Add the following lines and save the file:

@realtime   -  rtprio     99
@realtime   -  memlock    unlimited

You need to logout and login again for the changes to take effect.
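Once you have logged back in, you can check from a terminal that the new limits are active; the expected values below assume the limits.d entry from this tutorial:

```shell
# Both are shell builtins reporting the current session's limits
ulimit -r   # real-time priority limit; should be 99 with the rtprio line active
ulimit -l   # locked memory limit; should be "unlimited" with the memlock line active
```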

WARNING: You should only add users to the “realtime” group if an application that they use (like JACK) requires it. By doing so, you give them fairly high privileges to interact with process priorities, and this may affect the usability of the whole computer.


Set up and start JACK

Before being able to connect anything with JACK, we need to set it up and start its daemon. For that, we will use QJackCtl, a graphical application that controls JACK’s inputs and outputs.

We will first make sure that JACK is setup correctly. Press the “Setup…” button.

I am not an expert in audio hardware and configuration, but this setup works perfectly on my Librem:

  • Driver: alsa
  • Realtime: yes
  • Interface: hw:PCH
  • Sample Rate: 44100
  • Frames/Period: 128
  • Periods/Buffer: 2



Save your settings and, in the main QJackCtl window, press the “Start” button. After a few seconds, you should see the “Connections” window pop up. This is where all the connections take place.

Connect Yoshimi to Ardour

Now, we are ready to connect our virtual jacks. It is time to open Ardour and create a new session. You should now see a lot more connections in the JACK connections window. It shows how Ardour interacts with the system’s audio inputs and outputs.

Let’s add a new track in Ardour. Click the menu “Track”->”Add Track, Bus or VCA…”. Call your new track “Drums” and set it to stereo.

Now you see 2 more Ardour inputs in the JACK connections window. They show the name of the audio track that we just created, and they are currently connected to the default system capture device (the microphone). That is not what we want, so we will disconnect them.

Right click on one of them (Drums/audio_in 1) and choose “Disconnect”. It will disconnect the audio capture device. We will now connect our track to Yoshimi.

Open Yoshimi and wait for it to fully load. You should now see Yoshimi’s output appear in the JACK connections window. In order to connect Yoshimi’s output to Ardour’s input, just drag one on top of the other (make sure to respect the vertical order).


You are now ready to enjoy your fully operational free software powered professional music studio! 🙂

Please feel free to comment on this post or ask any questions in our forums.

Have fun! 😉

Digital media transcoding – Part 3 – Free Formats in Post Production

This is the third and last part of my articles about media manipulation with free software. Here is part 1, and here is part 2.


Working with video files all day long makes me realize that formats are everywhere, and my need to manipulate them freely is constant. That is why I think that, in terms of multimedia creation and publishing, free formats are as important as free software. Free software will always support free formats more readily anyway. On the other hand, proprietary software may decide to drop support for any free or proprietary format as it wishes.

In that regard, my post production workflow tends to rely on free formats as much as possible. In the world of freedom, we are very lucky to have top quality free formats for multimedia production, and I would like to share with you the main formats that are part of my post production workflow.

Note that this is my personal workflow; there may be better workflows and better formats, especially when working on sophisticated projects with a big team. This is just a basic reference that works for most of my projects.


Capturing

The format you will need to deal with when capturing video footage depends on the camera you use. Most commercial cameras record using proprietary formats and, as of today, I don’t know of any camera capable of recording using free formats. The Axiom by Apertus is a camera based on a free hardware design, but it is still under development and I have never had the chance to test one.
Usually, I have no control over this part, but that is fine. The most important thing here is that my footage has the quality that I expect.

I just make sure that the formats generated by the camera are readable by my free software. Anyway, FFmpeg can read so many formats…


Storing

At this point, I may choose to keep my footage as it is or to convert it to a free format for storage purposes. This is very useful when capturing in a proprietary format, as I get full control over my footage straight away.

The storage format should be lossless, which means that there is no data loss during the conversion. This is the top quality footage that my final rendering will be based on.

When performing this task, I use one of the following:

  • Matroska (MKV) format – Huffyuv (video) / FLAC (audio)
  • Matroska (MKV) format – FFv1 (video) / FLAC (audio)

As both formats are lossless, there should not be any quality difference. However, while FFv1 generates a smaller file, it is, in my experience, slower to decode and may affect the comfort of my workflow at some point. Usually, I prefer using Huffyuv.


Editing

When editing, the use of proxies can make your workflow much faster by requiring fewer hardware resources. A proxy is a low resolution, lightweight version of the original footage.

Proxy files are temporary and the final rendering doesn’t depend on them. In that regard, you may use whichever format is best adapted to your hardware speed. Kdenlive has an integrated proxy engine that lets you choose between MPEG-1 and Xvid by default. These are not fully free, so I would suggest using one of the following at a 640px wide output:

  • WebM format – VP8 (video) / Vorbis (audio)
  • Ogg format – Theora (video) / Vorbis (audio)

I have always found that VP8 decodes faster and feels lighter than Theora, so my choice goes to VP8 here.

Compositing and color grading

I usually do all my editing with Kdenlive. When I need to do some advanced compositing, and color grading, I use Blender.

At this stage, I only care about the picture and put the audio aside. I generate image sequences based on my top quality footage and load them into Blender.

For color grading and full picture visual effects:

  • PNG

For animation compositing:

  • PNG with transparent background
  • Multi-layered OpenEXR (Very useful to avoid having too many rendered files)

There is one free image format that is still very young but very promising: FLIF. It is a lossless image format that achieves better quality at a smaller file size than PNG. At the time of writing, it is only implemented in ImageMagick, but I hope it will be adopted by many more free software projects in the future.

Audio editing

To be honest, I am not an expert in audio editing and my skills in this area are pretty basic. I mainly use two formats to manipulate audio files:

  • Vorbis (compressed for quick preview and editing)
  • FLAC (lossless for full quality, final rendering)


Sharing

What would creative freedom be without non-restrictive sharing?

As of today, the most common format on the web is, by far, MP4 (H.264). It is a proprietary format and it is quite difficult to avoid. I don’t want to impose any restrictions on my audience, so I sometimes use H.264 when I have no other alternative, but in any case I always use the following free formats, which are compatible with many web browsers and many platforms, and are perfect for streaming purposes:

  • WebM format – VP8 (video) / Vorbis (audio)
  • Ogg format – Theora (video) / Vorbis (audio)

Ultra-high definition (4k)

Producing 4K videos is also possible with a free formats workflow. More and more cameras are able to shoot in 4K, and the Librem 15 will give you the horsepower to comfortably work with these large files. The workflow that I have presented here is adapted to any resolution. Just make sure that you do your final rendering with a format that can handle the highest resolutions:

  • WebM format – VP9 (video) / Vorbis (audio)


If you need more information about free formats, you may check the full list on Wikipedia.


I hope this series on media files manipulation has been useful to you and as the Blender Foundation would say, “Creative freedom starts here!”

Happy freedom! 🙂



Digital media transcoding – Part 2 – FFmpeg GUI with Kdenlive

This is the second part of my articles about media conversion with free software. Here is part 1.

This time, I will talk about transcoding your files with a nice and flexible graphical user interface.

I have been looking for a free and open source front-end to FFmpeg that specializes in media transcoding, but I haven’t been able to find one so far. Instead, I use the video editing software Kdenlive, which is built on top of FFmpeg and integrates a very good interface for media encoding.

PureOS, Kdenlive and my Librem 13 are a perfect combination for free, libre video editing. Gosh, I don’t regret my old Apple/Adobe workstation!


Installing Kdenlive in PureOS or Debian

Kdenlive is available at its latest version in PureOS. If it is not already installed, open up a terminal and type:

sudo apt install kdenlive

You are ready to go!

Single file transcoding

There are several ways to transcode your files with Kdenlive. This first one lets you trim your file or add any effect to it. It is good for transcoding a single file to be streamed on the web, for example, or for transcoding several files manually, one by one.

First of all, you need to open Kdenlive and drop the file you want to transcode into the Project Bin. You may just drag the file from your file manager into it. Kdenlive should display a message asking you to switch the project’s settings to match your file’s size and framerate. Just click “Switch”.


Then, place the file on your timeline. Make sure that the clip is positioned at the beginning of the timeline. At this point, you may trim your clip or add any effect to it.

Click the “Render” button that is positioned on the top toolbar. You may also go to Project->Render.


The rendering window will appear.


The easiest way to encode is to:

  1. choose a destination and name for your output file;
  2. choose a predefined output format;
  3. make sure that “Full project” is selected;
  4. click “Render to File”.

You may also access more advanced settings by checking “More options”.


There you can specify your own FFmpeg settings as well as rescale the output. Note that the parameters in this window are based on MLT, a multimedia framework built on top of FFmpeg, so the syntax may differ a bit from FFmpeg’s. Here is the MLT documentation.

If you wish to add your own encoding profile to the list, just click the “Create new profile” icon. A new dialog will appear with the settings of the currently selected profile. Just update it and save it. You will then be able to select it from the list at any time.


Multiple files transcoding

This time, we will look at an easy way to transcode multiple files at once. It also works for a single file, but it won’t give you the ability to trim it.

Move all the files you wish to transcode into the Kdenlive Project Bin.


Select them all and right click on one of them. Select “Transcode” and choose the desired format.

A popup will appear, giving you some options. Click OK. The transcoded files should be created in the same folder as the original ones.


If you wish to add your own profile to the list, go to “Settings”->”Configure Kdenlive…”. There, select the “Transcode” tab.

Selecting an existing profile will pre-load the fields. Modify the properties, give it a name and click “Add Profile”. You will then find your new profile in the “Transcode” submenu.



You may have noticed that the profile lists differ between the first and the second method. This is part of a traditional post production workflow.

Generally, the first method, rendering your timeline, is used for the final rendering of your project. This final rendering may be sent for additional processing like VFX or color grading. It may also just be the final output, ready for delivery (broadcast, web streaming, etc.).

The second method is used to store your footage in high quality (lossless) video files, to be used as the reference, highest quality video files during editing. While the editing itself may be done using much smaller, lighter “proxy” files, the final rendering should be based on these high quality files.

I will describe in detail the formats (all free) that I use at every step of my post production workflow. Stay tuned! 🙂


Digital media transcoding – Part 1 – FFmpeg

Still ready to switch to Free Software for your multimedia creations?
Let’s start by understanding the files and formats that we are going to manipulate in our workflow.

Codecs and Formats

When it comes to multimedia creation and publishing, using the right format, the right codec, converting, scaling, compressing, can be a real pain. Thankfully, in the world of Freedom, we have some of the best tools to help us manipulate media files and avoid a lot of frustration.

By the way, what’s a Format? And what’s a Codec?

You may wonder what the difference is between a format and a codec. See the format as the container of the entire media file’s data, representing both audio and video, and the codec as the way this data is encoded and decoded. The same format can hold data described by different codecs, and a codec can be used with different formats.

As an example, the Matroska (.mkv) format can store either H.264 or Theora encoded video with Opus or FLAC audio. The Ogg format can also hold Theora video and Opus or Vorbis audio… Is this making sense?

These are just examples, but I have to admit that there are so many different formats and codecs that it is sometimes very difficult to see clearly. What I suggest is to use only a few formats and codecs: the ones we really need. I will come back to this point in a future article.


FFmpeg

The software that I use for digital media transcoding is FFmpeg.

FFmpeg is a command line program that manipulates formats and codecs. Much free software already relies on FFmpeg, so you may never need to use the commands directly, but if you are comfortable with the terminal, FFmpeg can be very useful for quick conversions.

I will cover the basic usage of FFmpeg in this post.

If you don’t like using the terminal or don’t easily remember commands (just like me), don’t worry: I will cover media conversion with a clean GUI in a future post.

Installing FFmpeg

FFmpeg, being very popular, is pretty easy to install and should be directly available in the most common GNU/Linux distributions’ repositories.

When using PureOS, FFmpeg should be installed by default, but if it is not, just use the following command in a terminal:

sudo apt install ffmpeg

Using FFmpeg

In order to get a list of formats supported by FFmpeg, open up a terminal window and type the following command:

ffmpeg -formats

And for a list of supported codecs, type the following command:

ffmpeg -codecs

As you can see, the list is quite impressive! FFmpeg can manipulate the most common free and proprietary digital audio and visual formats.

Converting a video can be achieved with a simple command line:

ffmpeg -i input.mov output.webm

This command converts a QuickTime .mov file to the .webm format with the default encoders (keeping the same scale, framerate and bitrate).

If you need to use a specific encoder for the chosen format, just specify it with -vcodec (for video) and -acodec (for audio):

# Outputs the video in an OGG format using Theora codec for video and Vorbis codec for audio
ffmpeg -i input.mov -vcodec libtheora -acodec libvorbis output.ogg

# to get a list of all encoders
ffmpeg -encoders

In order to convert a video to an image sequence, you may do the following:

# I create a directory to store the files from my image sequence
mkdir sequence
ffmpeg -i input.mov sequence/output_%05d.png
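The %05d part of the output name is a printf-style pattern: FFmpeg replaces it with the frame number, zero-padded to five digits, so the files sort correctly. You can preview the naming scheme with printf:

```shell
printf 'output_%05d.png\n' 1 2 10
# output_00001.png
# output_00002.png
# output_00010.png
```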

If you wish to rescale the picture of the video, you can use the -vf option (video filter) and set the “scale” value:

# Rescales width and height
ffmpeg -i input.mov -vf scale=320:240 output.webm

# Rescales width keeping aspect ratio
ffmpeg -i input.mov -vf scale=320:-1 output.webm

# Doubles the width (the height is kept, so the aspect ratio changes)
ffmpeg -i input.mov -vf scale=iw*2:ih output.webm

# Forces the image to fit into a 320×240 box
ffmpeg -i input.mov -vf scale=w=320:h=240:force_original_aspect_ratio=decrease output.webm

If you wish to force a constant bitrate for your video, use the -b option (-b:v for the video, -b:a for the audio):

# 8Mbit/s for video and 128kbit/s for the audio
ffmpeg -i input.mov -b:v 8000k -b:a 128k output.webm

You may also use -minrate and -maxrate to control the minimum and maximum bitrate tolerance (in bits/s).
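As a quick sanity check before encoding, you can estimate the output size from those bitrates; for the 8000 kbit/s video + 128 kbit/s audio example above, one minute of footage comes to roughly:

```shell
# (video + audio) kbit/s * seconds / 8 bits-per-byte = size in kB
echo $(( (8000 + 128) * 60 / 8 ))   # 60960 kB, i.e. about 61 MB per minute
```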

These are the conversions that I use the most, and they represent only a small part of FFmpeg’s real potential. There are many more options and filters, so if you want to know more, I suggest browsing the FFmpeg documentation.

Coming next will be a tutorial on media conversion using the great GUI from Kdenlive. To complete this series of articles about media file manipulation, I will share with you the different free formats, codecs and parameters that I use in my entire video production workflow.

Stay tuned! 😉

One last thing…

As you were patient enough to read this article to the end, here is a little present: a script that I use to quickly convert short video animations to animated GIFs.
This script generates a good quality GIF that is scaled down to 640px wide and ready to be embedded in any webpage. I found it in this excellent tutorial.




#!/bin/sh
palette="/tmp/palette.png"
filters="fps=15,scale=640:-1:flags=lanczos"
ffmpeg -v warning -i "$1" -vf "$filters,palettegen" -y "$palette"
ffmpeg -v warning -i "$1" -i "$palette" -lavfi "$filters [x]; [x][1:v] paletteuse" -y "$2"

To call this script, save it as gifenc.sh, make it executable (chmod +x gifenc.sh), and use:

./gifenc.sh input.mkv output.gif
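If you have a whole folder of clips to convert, a small loop does the trick; this sketch assumes the script was saved as gifenc.sh in the current directory:

```shell
# Convert every .mkv in the current directory to a .gif with the same base name
for f in *.mkv; do
  ./gifenc.sh "$f" "${f%.mkv}.gif"   # e.g. clip.mkv -> clip.gif
done
```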

Happy free transcoding! 🙂

Thoughts? Send them to feedback(at)shop.puri.sm