Corsac.net - Echoes
Saturday 19 December 2020 (1 post)

As a followup to the previous post, here's an update on the iOS 14 USB tethering problem on Linux. After some investigation, Matti Vuorela found that reducing the USB packet size by two bytes would actually fix the issue. A small patch was later committed to the Linux kernel and found its way into the Linux stable releases and the distributions. On Debian stable you'll need to upgrade to Buster 10.7 to get the fix.
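In practice, once the point release is out, getting the fix should just be a matter of a regular upgrade and a reboot; a quick sketch (the metapackage name assumes an amd64 install, adjust to your architecture):

apt update && apt full-upgrade   # pulls the updated kernel, among other things
uname -r                         # after the reboot, check which kernel is running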

Yves-Alexis@11:14:08

Friday 16 October 2020 (1 post)

It's a bit of a long shot, but maybe someone on Planet Debian or elsewhere can help us reach the right people at Apple.

Starting with iOS 14, something apparently changed in the way USB tethering (also called Personal Hotspot) is set up, which broke it for people using Linux. The driver in use is ipheth, developed in 2009 and included in the Linux kernel in 2010.

The kernel driver negotiates over USB with the iOS device in order to set up the link. The protocol used by both parties to communicate doesn't really seem to be documented publicly, it seems to have evolved over time and across iOS versions, and the Linux driver hasn't been kept up to date. On macOS and Windows the driver apparently comes with iTunes, and Apple engineers obviously know how to communicate with iOS devices, so iOS 14 is supported just fine.

There's an open bug on libimobiledevice (the set of userland tools used to communicate with iOS devices, although the update should be done in the kernel), with some debugging and communication logs between Windows and an iOS device, but so far no real progress has been made. The link is enabled, the host gets an IP from the device, can ping the device IP and can even resolve names using the device's DNS resolver, but IP forwarding seems disabled: no packet goes farther than the device itself.
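To make the symptoms concrete, this is roughly what the broken state looks like from the host (interface name and addresses are illustrative; 172.20.10.1 is the usual iOS Personal Hotspot gateway):

ip addr show eth1              # the tethering interface does get an address
ping -c 3 172.20.10.1          # the device itself answers
dig @172.20.10.1 debian.org    # its DNS resolver answers too
ping -c 3 8.8.8.8              # but nothing is forwarded past the device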

That means a lot of people upgrading to iOS 14 will suddenly lose USB tethering. While Wi-Fi and Bluetooth connection sharing still work, it's still suboptimal, so it'd be nice to fix the kernel driver and support the latest protocol used in iOS 14.

If someone knows the right contact (or the right way to contact them) at Apple so we can have access to some kind of documentation on the protocol and the state machine to use, please reach out (either on the libimobiledevice bug or at my email address below).

Thanks!

Yves-Alexis@14:36:41

Friday 09 October 2020 (1 post)

So, a bit more than 18 months ago, I started a new adventure. After a few flights with a friend of mine in Robin DR400 and Jodel aircraft, I joined a local flying club at the Lognes airfield (LFPL) and started training for a Private Pilot License. A PPL is an international flight license for non-commercial operations. Associated with a qualification like the SEP (Single Engine Piston), it enables you to fly basically anywhere in the world (or at least anywhere where French is spoken by the air traffic controllers) with passengers, under Visual Flight Rules (VFR).



A bit like with cars, the training has two parts, theoretical and practical, each validated by a test. You don't have to pass the theoretical test before starting the practical training, and it's actually recommended to do both in parallel, especially since nowadays most of the theoretical training is done online (you still have to do 10 hours of in-person courses before taking the test).


So in March 2019 I started both trainings. Theoretical training is divided into various domains, like regulations, flight mechanics, meteorology, human factors etc., and you can obviously study them in parallel. The practical side is more sequential and starts with basic flight training (turns, climbs, descents), then take-off, then landing configuration, then landing itself. All of that obviously with a flight instructor sitting next to you (you're in the left seat but the FI is the “pilot in command”). You then start doing circuit patterns, meaning you take off, fly a circuit around the airfield, then land on the runway you just took off from. Usually you don't actually do a complete landing but rather a touch and go, and do it again in order to get more and more landing practice.

Once you know how to take off, fly a pattern and land when everything is OK, you start practicing (still with your flight instructor aboard) various failures: especially engine failures at take-off, but also flap failures and things like that, all while still flying patterns and practicing landings. At one point, the flight instructor deems you ready: he exits the plane, and you start your first solo flight: engine tests, take-off, one pattern, landing.

For me practical training was done in an Aquila AT-01/A210, which is a small two-seater. It's really light (it can actually be used as an ultralight): empty weight is a bit above 500 kg and maximum weight is 750 kg. It doesn't go really fast (it cruises at around 100 knots, 185 km/h) but it's nice to fly. Being so lightweight, the wind really shakes it though, and it can be a bit hard to land because it glides very well (with a lift-to-drag ratio of 14). I tried to fly a lot in the beginning, so the basic flight training was done in about 6 months and 23 flight hours. At that point my instructor stepped out of the plane and I did my first solo flight. Everything actually went just fine, because we had repeated it a lot before that, so it wasn't even that scary. I guess I will remember it my whole life, as people say, but it was pretty uneventful, although the controller did scold me a little because when taxiing back to the parking I misunderstood the instructions and didn't stop where asked (no runway incursion though).



After the first solo flight, you keep practicing patterns and solo flights every once in a while, and start doing cross-country flights: you're no longer restricted to the local airfields (LFPL, LFAI, LFPK) but start planning trips to more remote airports, about 30-40 minutes away (for me it was Moret/LFPU, Troyes/LFQB, Pontoise/LFPT). Cross-country flights require you to plan the route (draw it on the map and write a navigation log so you know what to do when in flight), but also check the weather and relevant information, especially NOTAMs - Notices To Air Men (I hope someone renames those Notices to Air Crews at some point), estimate the fuel needed etc. For me, flight preparation time was between one and two times the flight time. Early flight preparation is completed on the day by last-minute checks, especially for weather. During the briefing (with the flight instructor at first, then with the flight examiner for the test, and later with yourself) you check every bit of information in turn to decide whether you're GO or not for the flight. As with a lot of things in aviation, safety is really paramount here.



Once you've practiced cross-country flying a bit, you start learning what to do in case of failures during a non-local flight, for example an engine failure in the middle of nowhere, when you have to choose a proper field to land in, or a radio failure. And again when you're ready for it (and in the case of my local club, once you've passed your theoretical exam) you go for cross-country solo flights (of the 10 hours of solo flight required for taking the test, 5 hours should be done on cross-country flights). I went again to Troyes (LFQB), then Dijon-Darois (LFGI) and did a three-leg flight to Chalons-Ecury (LFQK) and Pont sur Yonne (LFGO).

And just after that, when I was starting to feel ready for the test, the COVID-19 lockdown happened, grounding everyone for a few months. Even after it was over, I felt a bit rusty and had to take some more training. I finally took the test at the beginning of summer, but the first attempt wasn't good enough: I was really stressed, and maybe not completely ready actually. So a bit more training during the summer, and finally in September I took the final test part, which was successful this time.

After some paperwork, a shiny new Private Pilot License arrived at my door.



And now that I can fly basically when I want, autumn is finally here, with bad weather all day long, so actually planning real flights is a bit tricky. For now I'm still flying solo on familiar trips, but at some point I should be able to bring a passenger with me (in the Aquila) and eventually move to a four-seater like the DR400, which is ubiquitous in France.

Yves-Alexis@18:37:21

Wednesday 28 March 2018 (1 post)

There was some noise recently about the massive amount of data gathered by Cambridge Analytica from Facebook users. While I don't use Facebook myself, I do use Google and other services which are known to gather a massive amount of data, and I obviously know a lot of people using those services. I also saw some posts or tweet threads about the data collection those services do.

Mozilla recently released a Firefox extension to help users confine Facebook data collection. This addon is actually based on the container technology Mozilla has been developing for a few years. It started as an experimental feature in Nightly, then as a Test Pilot experiment, and finally evolved into a fully featured extension called Multi-Account Containers. A somewhat restricted version of this is even included directly in Firefox, but without the extension you don't get the configuration window and you need to configure it manually through about:config.
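For reference, the preferences involved when enabling it manually are, as far as I can tell, the following (check your Firefox version; treat the exact names as an assumption):

privacy.userContext.enabled = true
privacy.userContext.ui.enabled = true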

Basically, containers separate storage (cookies, site preferences, login sessions etc.) and enable a user to isolate various aspects of their online life by only staying logged in to specific websites in their respective containers. In a way it looks like having a separate Firefox profile per website, but it's a lot more usable day to day.

I use this extension massively, in order to isolate each website. I have one container for Google, one for Twitter, one for banking etc. If I used Facebook, I would have a Facebook container; if I used Gmail I would have a Gmail container. Then, my day-to-day browsing is done using the “default” container, where I'm not logged in to any website, so tracking is minimal (I also use uBlock Origin to reduce ads and tracking).

That way, my online life is compartmentalized/containerized and Google doesn't always associate my web searches with my account (I actually usually use DuckDuckGo but sometimes I do a Google search), Twitter only knows about the tweets I read, and I don't expose all my cookies to every website.

The extension and support pages are really helpful to get started, but basically:

  • you install the extension from the extension page
  • you create new containers for the various websites you want using the menu
  • when you open a new tab you can opt to open it in a selected container by long pressing on the + button
  • the current container is shown in the URL bar and with a color underline on the current tab
  • it's also optionally possible to assign a website to a container (for example, always open facebook.com in the Facebook container), which can help restrict data exposure but might prevent you from browsing that site unidentified

When you're inside a container and you want to follow a link, you can get out of the container by right-clicking on the link, selecting “Open link in new container tab”, then selecting “no container”. That way Facebook won't follow you on that website and you'll start fresh (after the redirection).

As far as I can tell it's not yet possible to have disposable containers (which would be trashed after you close the tab) but a feature request is open and another extension seems to exist.

In the end, and while the isolation from that extension is not perfect, I really suggest Firefox users give it a try. In my opinion it's really easy to use and really helps maintain healthy barriers in one's online presence. I don't know of an equivalent system for Chromium (or Safari) users, but if you know about one feel free to point it out to me.

A French version of this post is also available here just in case.

Yves-Alexis@20:44:03

Monday 16 October 2017 (1 post)

Following the news about the ROCA vulnerability (weak key generation in Infineon-based smartcards, more info here and here) I can confirm that the Almex smartcards I mentioned in my last post (which are Infineon-based) are indeed vulnerable.

I've contacted Almex to have more details, but if you were interested in buying that smartcard, you might want to refrain for now.

It does *not* affect keys generated off-card and later injected (the process I use myself).
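If you want to check your own public keys, the researchers published a detection tool; something along these lines should work (the roca-detect package name and its handling of exported PGP keys are assumptions on my side, check their repository):

pip install roca-detect
gpg --export KEYID > pubkey.gpg
roca-detect pubkey.gpg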

 

Yves-Alexis@17:32:01

Tuesday 10 October 2017 (1 post)

A long time ago, I switched my GnuPG setup to a smartcard based one. I kept using the same master key, but:

  • copied the rsa4096 master key to a “master” smartcard, for when I need to sign (certify) other keys;
  • created rsa2048 subkeys (for signature, encryption and authentication) and moved them to an OpenPGP smartcard for daily usage.

I've been working with that setup for a few years now and it is working perfectly fine. The signature counter on the OpenPGP card is a bit north of 5000, which is large but not that huge, all things considered (and not counting authentication and decryption key usage).
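That counter can be read directly off the card, for instance with:

gpg --card-status | grep -i 'signature counter'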

One very nice feature of using a smartcard is that my laptop (or any other machine I work on) never manipulates the private key directly but only sends requests to the card, which is a really huge improvement, in my opinion. But it's also not the perfect solution for me: the OpenPGP card uses a proprietary platform from ZeitControl, named BasicCard. We have very little information on the smartcard, besides the fact that Werner Koch trusts ZeitControl not to mess up. One caveat for me is that the card does not use a certified secure microcontroller like the ones you would find in smartcard chips used in debit cards or electronic IDs. That means it hasn't really been audited by a competent hardware lab, and thus can't be considered secure against physical attacks. The card OS software and the application implementing the OpenPGP specification are not public either and have not been audited, to the best of my knowledge.

At one point I was interested in the Yubikey Neo, especially since the architecture Yubico used was a common one: a (supposedly) certified platform (secure microcontroller, card OS) and a GlobalPlatform / JavaCard virtual machine. The applet used in the Yubikey Neo is open source, too, so you could take a look at it and identify any issue.

Unfortunately, Yubico transitioned to a less common and more proprietary infrastructure for the Yubikey 4: it's no longer JavaCard-based, and they don't provide the applet source anymore. This was not really seen as a good move by a lot of people, including Konstantin Ryabitsev (kernel.org administrator). Also, it wasn't possible, even for the Yubikey Neo, to actually build the applet yourself and inject it on the card: when the Yubikey leaves the facility, the applet is already installed and the smartcard is locked (for obvious security reasons). I tried asking about getting a naked/empty Yubikey with developer keys to load the applet myself, but it was apparently not possible, or would have required signing an NDA with NXP (the chip maker), which is not really possible as an individual (not that I really wanted to anyway).

In the meantime, a coworker actually wrote an OpenPGP JavaCard applet, with the intention of supporting the latest version of the OpenPGP specification, and especially elliptic curve cryptography. The applet is called SmartPGP and has been released on the ANSSI GitHub repository. I investigated a bit, and found a smartcard with the right specifications: certified (in France or Germany), and supporting JavaCard 3.0.4 (required for ECC). The card can do RSA2048 (unfortunately not RSA4096) and EC with NIST (secp256r1, secp384r1, secp521r1) and Brainpool (P256, P384, P512) curves.

I've ordered some cards, and when they arrived I started playing. I've built the SmartPGP applet and pushed it to a smartcard, then generated some keys and tried them with GnuPG. I'm right now in the process of migrating to a new smartcard-based setup, which seems to work just fine after a few days.
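On the GnuPG side, once the applet is installed, generating keys directly on the card works the usual way; a quick sketch (key sizes and admin PIN handling depend on the applet configuration):

gpg --card-edit
gpg/card> admin
gpg/card> generate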

Part two of this series will describe how to build the applet and inject it into the smartcard. The process is already documented here and there, but there are a few things not to forget, like how to lock the card after provisioning, so I guess having the complete process somewhere might be useful in case some people want to reproduce it.

Yves-Alexis@22:44:37

Thursday 27 April 2017 (1 post)

Since the question popped up here and there, I'll post a short blog post about the issue right now so there's a reference somewhere.

As you may know, Brad Spengler (spender) and the PaX Team recently announced that the grsecurity test patches won't be released publicly anymore. The stable patches were already restricted to enterprise (paying) customers; this is now also the case for the test patches.

Obviously that means the end of the current situation in Debian since I used those test patches for the linux-grsec packages, but I'm not exactly sure what comes next and I need to think a bit about this before doing anything.

The “passing the baton” post mentions a handover to the community (though the FAQ mentions it needs to stop using the term “grsecurity”), so maybe there's some coordination possible with other users like Gentoo Hardened and Alpine, but it's not clear what would be possible with the tools we have.

I'm actually quite busy right now so I don't have much time to think about all this, but expect a new blog post when things have settled a bit and I've made up my mind.

Yves-Alexis@13:18:57

Wednesday 04 May 2016 (1 post)

Following discussion in #810506 and the ACK by the backports team, I've uploaded the linux-grsec package (version 4.4.7-1+grsec201604152208+1~bpo8+1) to jessie-backports, and it has been ACCEPTED this morning (along with the linux-grsec-base support package). So if you have a Jessie install with backports enabled, linux-grsec should be one apt call away:

apt install -t jessie-backports linux-image-grsec-amd64

4.4.8 should follow soon.

Yves-Alexis@10:45:35

Saturday 09 January 2016 (1 post)

As some of you might have already noticed, linux-grsec entered Debian unstable earlier this week, following linux-grsec-base a bit earlier.

So that means, if you're running sid, you can just run:

# apt install linux-image-4.3.0-1-grsec-amd64

There's no metapackage (version-less) for now, but I might add one at some point, if people need it.

After installing the kernel and the linux-grsec-base support package, you should check the /etc/sysctl.d/grsec.conf file and review the various tunables there, which might or might not suit your needs. The settings are mostly all enabled in the package (in order to get a “secure by default” state), but there are a few bits you might need to disable.

 For example, on my main laptop, where I do most of my stuff, including Debian work, I've disabled:

kernel.grsecurity.deny_new_usb = 0
kernel.grsecurity.audit_chdir = 0
kernel.grsecurity.chroot_deny_chmod = 0
kernel.grsecurity.chroot_deny_mknod = 0
kernel.pax.softmode = 0

deny_new_usb is disabled because a laptop is not really usable without USB, and audit_chdir because it's really too noisy (I like to keep exec_logging though, because it's only for the root gid, so it's somewhat interesting and not too noisy).

Both chroot settings are disabled because I'm building packages in pbuilder, which uses chroots. By the way, if you're doing that you'll need to add the pbuilder user (uid 1234) to the grsec-tpe group (gid 64040) inside the chroot so it has permission to execute stuff.
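With a classic pbuilder base, that can be done once by logging into the chroot and saving it afterwards; a rough sketch (create the group and user first if they don't exist yet in your base):

sudo pbuilder --login --save-after-login
# then, inside the chroot:
addgroup --gid 64040 grsec-tpe
adduser pbuilder grsec-tpe
exit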

softmode is disabled, but that's the default setting anyway (“secure by default”). You can enable it if needed to see what PaX /would/ deny and adjust things (using paxctl or setting file extended attributes).

On the same laptop, I need to set PaX 'm' attributes (allow W|X memory maps) on the following binaries:

setfattr -n user.pax.flags -v m /usr/bin/evolution
setfattr -n user.pax.flags -v m /usr/bin/python
setfattr -n user.pax.flags -v m /usr/lib/chromium/chromium

It's a bit unfortunate (especially since evolution and chromium are quite exposed to untrusted code, and python is really too generic), but to keep a working box I don't have much choice.

Plans regarding stable are a bit fuzzier. As indicated in the initial bug, the current upstream release model doesn't really fit the “Debian stable” one: only the test patch, against the latest major Linux kernel version, is available free of charge. I don't think the release team would be really happy to see a new Linux version uploaded to stable every two months.

Although having linux-grsec in unstable is already a great victory, I still think most users are likely to want it on stable (for example on server boxes), so I'm considering plans for that. Right now, I'm still uploading jessie packages to my repository, but also investigating whether backports are suitable. The default answer is no, obviously, because backports are only supposed to host packages which will be in the next stable release, but maybe something will be possible. Stay tuned, in any case.

Don't hesitate to try the packages. There might be some rough edges; that's expected. If you have issues, please read the documentation available on grsecurity and PaX, because security is a process, and installing the package won't just magically make you secure if you don't know what it does. Don't hesitate to report bugs, but try to investigate a bit first (with the src:linux package, and with vanilla+grsec packages).

Finally, many thanks to Brad Spengler and the PaX team, this is their work; I'm merely the packager here.

Yves-Alexis@10:14:37

Wednesday 04 November 2015 (1 post)

Thanks to Mehdi Dogguy, here's a nice hook to generate a source changes file at build time (with pbuilder), so one can upload source-only packages to the archive and have the buildds rebuild them for all architectures. Put it in .pbuilder/hooks/B10_source-build so it gets called once the build succeeds:

#! /bin/sh

generate_change_file()
{
  local version=$(dpkg-parsechangelog -Sversion)
  local package=$(dpkg-parsechangelog -Ssource)
  echo "Generating source changes file"
  dpkg-genchanges -S > ../${package}_${version}_source.changes
}

cd /tmp/buildd/*/debian/..
generate_change_file

Next time you build a package, you should find, alongside the <package>_<version>_<arch>.changes file, a <package>_<version>_source.changes file which you can use with the usual tools (lintian, debsign, dput…) to upload it to the Debian archive.

Note that if you do that, you have to make sure your debian/rules supports building the arch-dependent and arch-independent packages separately. To check that, you can call pdebuild like this:

pdebuild --debbuildopts -A # binary-only build, limited to arch-independent packages
pdebuild --debbuildopts -B # binary-only build, limited to arch-dependent packages

Yves-Alexis@20:53:55

Wednesday 30 September 2015 (1 post)

As part of my ongoing effort to provide grsecurity-patched kernels for Debian, I gave a talk this morning at Kernel Recipes 2015. Slides and video should be available at some point, but you can find the former here in the meantime. I'm making some progress on #605090 which I should be able to push soon.

Yves-Alexis@18:00:09

Sunday 09 August 2015 (1 post)

So, everybody knows that WPS (Wi-Fi Protected Setup) is broken. But sometimes you don't own the access point, and you'd just want the wireless to work. That happens for example when you're a guest in some place using an Orange Livebox and you don't have the WPA passphrase (usually because it's written somewhere you don't have access to, or because someone forgot to tell you).

The Livebox WPS is the “push button” kind: you press a button on the front for one second, then any device can connect in the next two minutes. That works fine with Android devices, for example, but it didn't work with my laptop and NetworkManager, which doesn't support WPS at all.

Fortunately, the underlying piece of software (wpa_supplicant) does support WPS, and even the “push button” style. And you can nicely ask it to reveal the passphrase to you with the following trick.

  1. Disconnect NetworkManager from the network, disable the wireless link, stop it; just make sure wpa_supplicant is not running;
  2. Put a stub wpa_supplicant.conf file with only the following content:
    update_config=1
    
  3. Start wpa_supplicant in the foreground with your stub config file: 
    wpa_supplicant -iwlan0 -c wpa_supplicant.conf
    
  4. Start wpa_cli
Inside wpa_cli:
  1. Scan the network:
    scan
    
  2. Get the results:
    scan_results
    
    and identify the BSSID of the Livebox
  3. Press the WPS button on the Livebox
  4. Run
    wps_pbc <bssid>
    Some text should appear in the wpa_cli window, and it should eventually connect successfully (at that point you can even run a dhclient on wlan0)
  5. Run
    save_config
    

The last command will update your stub configuration file, adding a new network block with the passphrase in the clear. You can then use that passphrase inside NetworkManager if it's more convenient for you.

There might be something easier, but at least it worked just fine for me during the holidays.

Yves-Alexis@21:44:32

Thursday 21 May 2015 (1 post)

So, following the previous post, I've indeed updated the way I'm making my grsec kernels.

I wanted to upgrade my server to Jessie, and didn't want to keep the 3.2 kernel indefinitely, so I had to update to at least 3.14 and find something to make my life (and maybe others') easier.

In the end, as planned, I've switched to the make deb-pkg way, using some scripts here and there to simplify stuff.

The scripts and configs can be found in my debian-grsec-config repository. The repository layout is pretty much self-explanatory:

The bin/ folder contains two scripts:

  • get-grsec.sh, which will pick the latest grsec patch (for each branch) and apply it to the correct Linux branch. This script should be run from a git clone of the linux-stable git repository;
  • kconfig.py, which is taken from the src:linux Debian package and can be used to merge multiple Kconfig files.

The configs/ folder contains the various configuration bits:

  • config-* files are the Debian configuration files, taken from the linux-image binary packages (for amd64 and i386);
  • grsec* files are the grsecurity-specific bits (obviously);
  • hardening* files contain non-grsec stuff still useful for hardened kernels, for example KASLR (cargo-culting notwithstanding) or strong SSP (available since I'm building the kernels on a sid box, YMMV).

I'm currently building amd64 kernels for Jessie (i386 kernels will follow soon), using config-3.14 + hardening + grsec. I'm hosting them on my apt repository. You're obviously free to use them, but considering how easy it is to rebuild a kernel, you might want to use a personal configuration (instead of mine) and rebuild the kernel yourself, so you don't have to trust my binary packages.

Here's a very quick howto (adapt it to your needs):

mkdir linux-grsec && cd linux-grsec
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git clone git://anonscm.debian.org/users/corsac/grsec/debian-grsec-config.git
mkdir build
cd linux-stable
../debian-grsec-config/bin/get-grsec.sh stable2 # for 3.14 branch
../debian-grsec-config/bin/kconfig.py ../build/.config ../debian-grsec-config/configs/config-3.14-2-amd64 ../debian-grsec-config/configs/hardening ../debian-grsec-config/configs/grsec
make KBUILD_OUTPUT=../build -j4 oldconfig
make KBUILD_OUTPUT=../build -j4 deb-pkg

Then you can use the generated Debian binary packages. If you use the Debian config, it'll need a lot of disk space for compilation and generate a huge linux-image debug package, so you might want to unset CONFIG_DEBUG_INFO locally if you're not interested in it. Right now only the deb files are generated, but I've submitted a patch to have a .changes file generated as well, which can then be used to manipulate the packages more easily (for example for uploading them to a local Debian repository).
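One way to unset it before building is to use the kernel's own config helper, with the same KBUILD_OUTPUT as above (a sketch, run from the linux-stable checkout):

scripts/config --file ../build/.config --disable DEBUG_INFO
make KBUILD_OUTPUT=../build oldconfig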

Note that, obviously, this is not targeted for inclusion to the official Debian archive. This is still not possible for various reasons explained here and there, and I still don't have a solution for that.

I hope this (either the scripts and config or the generated binary packages) can be useful. Don't hesitate to drop me a mail if needed.

Yves-Alexis@22:36:11

Saturday 09 May 2015 (1 post)

So, following the Jessie release, and after a quick approval by the release team for the 4.12 transition, we've uploaded Xfce 4.12 to sid and have asked the RT to schedule the relevant binNMUs for the libxfce4util and xfce4-panel reverse dependencies.

It went apparently well (besides some hiccups here and there, like some lag on sparc and some build failures on hurd). So Xfce 4.12 is now in sid, and should migrate to Stretch in the following weeks, provided nothing release-critical is found.

Yves-Alexis@21:05:55

Monday 30 March 2015 (1 post)

It's been a long time since I updated my repository with a recent kernel version, sorry for that. This is now done: the kernel (sources, i386 and amd64) is based on the (yet unreleased) 3.2.68-1 Debian kernel, patched with grsecurity 3.1-3.2.68-201503251805, and has the version 3.2.68-1~grsec1.

It works fine here, but as always, no warranty. If any problem occurs, try to reproduce it using vanilla 3.2.68 + grsec patch before reporting it here.

And now that the Jessie release approaches, the question of what to do with those Debian/grsec kernels arises again: the Jessie kernel is based on the 3.16 branch, which is not a (kernel.org) long-term branch. Actually, upstream support already ended some time ago, and the (long-term) maintenance is now handled by the Canonical Kernel Team (thus the -ckt suffix) with some help from the Debian kernel maintainers. So there's no grsecurity patch following 3.16, and there's no easy way to forward-port the 3.14 patches.

At that point, and considering the support I got the last few years on this initiative, I don't think it's really worth it to continue providing those kernels.

One initiative which might be interesting, though, is the Mempo kernels. The Mempo team works on kernel reproducible builds, but they also include the grsecurity patch. Unfortunately, it seems that building the kernel their way involves calling a bash script which calls another one, and another one. A quick look at the various repositories is only enough to confuse me about how they actually build the kernel in the end, so I'm unsure it's the perfect fit for a supposedly secure kernel. Not that the Debian way of building the kernel doesn't involve calling a lot of scripts (either bash or python), but still. After digging a bit, it seems that they're using make-kpkg (from the kernel-package package), which is not the recommended way anymore. Also, they're currently targeting Wheezy, so the 3.2 kernel, and I have no idea what they'll choose for Jessie.

In the end, for myself, I might just do a quick script which takes a git repository at the right version, picks the latest grsec patch for that branch, applies it, then runs make deb-pkg, and be done with it. That still leaves the problem of which branch to follow:

  • run a 3.14 kernel instead of the 3.16 one (I'm unsure how much I'd lose / not gain from going from 3.2 to 3.14 instead of 3.16);
  • run a 3.19 kernel, then upgrade when it's time, until a new LTS branch appears.

There's also the config file question, but if I'm just using the kernels for myself and not sharing them, it's also easier, although if some people are actually interested it's not hard to publish them.

Yves-Alexis@22:27:21

Wednesday 25 March 2015 (1 post)

So I started migrating some of my LXCs to Jessie, to test the migration in advance. The upgrade itself was easy (the LXC is mostly empty and only runs radicale), but after the upgrade I couldn't log in anymore (using lxc-console, since I don't have lxc-attach, the host being on Wheezy). So this is mostly a note to self.

auth.log was showing:

Mar 25 22:10:13 lxc-sync login[1033]: pam_loginuid(login:session): Cannot open /proc/self/loginuid: Read-only file system
Mar 25 22:10:13 lxc-sync login[1033]: pam_loginuid(login:session): set_loginuid failed
Mar 25 22:10:13 lxc-sync login[1033]: pam_unix(login:session): session opened for user root by LOGIN(uid=0)
Mar 25 22:10:13 lxc-sync login[1033]: Cannot make/remove an entry for the specified session

The last message isn't too useful, but the first one gave the answer. Since LXC isn't really ready for security stuff, I have some hardening on top of it, and one measure is to not have rw access to /proc. I don't really need pam_loginuid there, so I just disabled it. I just need to remember to do that after each LXC upgrade.
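Disabling it is just a matter of commenting out the corresponding line in the container's /etc/pam.d/login (a sketch):

# /etc/pam.d/login, inside the container
#session    required     pam_loginuid.so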

Other than that, I have to boot using SystemV init, since apparently systemd doesn't cope too well with the various restrictions I enforce on my LXCs:

lxc-start -n sync
Failed to mount sysfs at /sys: Operation not permitted

(which is expected, since I drop CAP_SYS_ADMIN from my LXCs). I haven't yet investigated how to stop systemd doing that, so for now I'm falling back to SystemV init until I find the correct customization:

lxc-start -n sync /lib/sysvinit/init   
INIT: version 2.88 booting
[info] Using makefile-style concurrent boot in runlevel S.
hostname: you must be root to change the host name
mount: permission denied
mount: permission denied
[FAIL] udev requires a mounted sysfs, not started ... failed!
 failed!
mount: permission denied
[info] Setting the system clock.
hwclock: Cannot access the Hardware Clock via any known method.
hwclock: Use the --debug option to see the details of our search for an access method.
[warn] Unable to set System Clock to: Wed Mar 25 21:21:43 UTC 2015 ... (warning).
[ ok ] Activating swap...done.
mount: permission denied
mount: permission denied
mount: permission denied
mount: permission denied
[ ok ] Activating lvm and md swap...done.
[....] Checking file systems...fsck from util-linux 2.25.2
done.
[ ok ] Cleaning up temporary files... /tmp.
[ ok ] Mounting local filesystems...done.
[ ok ] Activating swapfile swap...done.
mount: permission denied
mount: permission denied
[ ok ] Cleaning up temporary files....
[ ok ] Setting kernel variables ...done.
[....] Configuring network interfaces...RTNETLINK answers: Operation not permitted
Failed to bring up lo.
done.
[ ok ] Cleaning up temporary files....
[FAIL] startpar: service(s) returned failure: hostname.sh udev ... failed!
INIT: Entering runlevel: 2
[info] Using makefile-style concurrent boot in runlevel 2.
dmesg: read kernel buffer failed: Operation not permitted
[ ok ] Starting Radicale CalDAV server : radicale.
Yes, there are a lot of errors, but they seem to be handled just fine.
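For reference, the restrictions in question look roughly like this in the container configuration (an illustrative excerpt; the exact keys depend on the LXC version):

# excerpt from the container config (illustrative)
lxc.cap.drop = sys_admin
lxc.mount.entry = proc proc proc nodev,noexec,nosuid,ro 0 0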

Yves-Alexis@22:26:04

Saturday 14 March 2015 (1 post)

So, I also got myself a new toy. My current ThinkPad is a bit ancient, but still sturdy. It's an X201s from 2010 (bought refurbished), and it's still working pretty fine, but eh, I couldn't resist.

The X230 was nice, but didn't have a high-resolution screen (1366×768). The X240 brought a full HD (1920×1080) IPS screen, but lost the hardware trackpoint buttons. Finally, the X250 brings back the buttons and still has a nice screen (not qHD or some other trendy resolution, but still FHD and IPS). And on top of that, it comes with Broadwell, so that means I get SMAP.

It runs mostly fine out of the box on Debian sid, but for full support some tuning is needed. I've set up a page with more information on the laptop, and some images can be found over there.

Yves-Alexis@16:59:22

Wednesday 11 June 2014 (1 post)

So, it seems that for a lot of people using unstable, hardware-related permissions (shutdown/reboot, suspend/hibernate, device mount/umount etc.) have been broken for some time.

That's usually the case for people using GNOME with the lightdm display manager, or Xfce with either gdm or lightdm.

It seems that recently, policykit (which is used by GNOME and Xfce) switched from the consolekit backend to the logind backend (yeah, systemd-logind). So applications using policykit need to handle that correctly, and that means being sure a logind session is correctly set up, which is done by installing the libpam-systemd package.

For now, it's still possible not to switch to systemd as the init system, by installing the systemd-shim package before libpam-systemd. Be aware that (at least with the current state of affairs) this is only true with logind before 204. When the systemd maintainers start transitioning to a later version, only systemd-sysv (so, systemd as init system) will work.
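Concretely, the install order matters, so something like:

apt-get install systemd-shim
apt-get install libpam-systemd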

People reluctant to switch to systemd can use systemd-shim for now. Then, when systemd 205+ enters the archive, they'll have to either lose those hardware permissions or try to improve systemd-shim to handle that situation.

There's not much we (Xfce/LightDM maintainers) can do about that.

Yves-Alexis@20:51:57

Monday 07 April 2014 (1 post)

Short version:

  • yes we're affected;
  • we're currently working on it;
  • we didn't have an early warning so we're going as fast as we can.

DSA should be in your INBOX in a few moments, and the updates on the mirror a moment later.

[UPDATE Tue, 08 Apr 2014 01:06:42 +0200]

After the upgrade, you really need to restart all TLS applications using libssl1.0.0 to get the fix. Usual suspects are webservers, mailservers etc. Don't forget to restart clients too. The easiest way is to completely reboot the server, but in case that's not an option, you can check the processes still using the old library with the following snippet:

grep -l 'libssl.*deleted' /proc/*/maps | tr -cd 0-9\\n | xargs -r ps u

Some people seem to indicate that the 64kB leak can enable an attacker to get pretty much anything from the process memory space, including the certificate private key. While we haven't been able to confirm that yet, it's not really impossible, so you might also want to regenerate those private keys, although that's not something you should do in a rush either.
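If you do go down that road, regenerating a key and a certificate signing request is the usual openssl dance (filenames are illustrative, adapt to your CA's process):

openssl genrsa -out example.org.key 2048
openssl req -new -key example.org.key -out example.org.csr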

Yves-Alexis@23:35:30

Sunday 25 August 2013 (1 post)

So, last year I switched to an OpenPGP smartcard setup for my whole personal/Debian PGP usage. When doing so, I also switched to subkeys, since it's pretty natural when using a smartcard. I initially set a one-year expiration for the subkeys, and everything seems to be running just fine for now.

The expiration date was set to October 27th, and I thought it'd be a good idea to renew the subkeys quite a bit in advance, considering my signing key is in there, which is (for example) used to sign packages. If the Debian archive considers my signature subkey expired, that means I can't upload packages anymore, which is a bit of a problem (although I think I could still upload packages signed by the main key). dak (Debian Archive Kit, the software managing the Debian archive) uses keys from the keyring provided by Debian admins, which is usually updated every month or so from the keyring.debian.org public key server, so pushing the new expiration date two months before the due date seemed like a good idea.

I've just done that, and it was pretty easy, actually. For those who followed my setup last year, here's how I did it:

First, I needed my main smartcard (the one storing the main key), since it's the only one able to do operations on the subkeys. So I plug it in, and then:

corsac@scapa: gpg --edit-key 71ef0ba8
gpg (GnuPG) 1.4.14; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/71EF0BA8  created: 2009-05-06  expires: never       usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096g/36E31BD8  created: 2009-05-06  expires: never       usage: E   
sub  2048R/CC0E273D  created: 2012-10-17  expires: 2013-10-27  usage: A   
sub  2048R/A675C0A5  created: 2012-10-27  expires: 2013-10-27  usage: S   
sub  2048R/D98D0D9F  created: 2012-10-27  expires: 2013-10-27  usage: E   
[ultimate] (1). Yves-Alexis Perez <corsac@corsac.net>
[ultimate] (2)  Yves-Alexis Perez (Debian) <corsac@debian.org>

gpg> key 2

pub  4096R/71EF0BA8  created: 2009-05-06  expires: never       usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096g/36E31BD8  created: 2009-05-06  expires: never       usage: E   
sub* 2048R/CC0E273D  created: 2012-10-17  expires: 2013-10-27  usage: A   
sub  2048R/A675C0A5  created: 2012-10-27  expires: 2013-10-27  usage: S   
sub  2048R/D98D0D9F  created: 2012-10-27  expires: 2013-10-27  usage: E   
[ultimate] (1). Yves-Alexis Perez <corsac@corsac.net>
[ultimate] (2)  Yves-Alexis Perez (Debian) <corsac@debian.org>

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 429d
Key expires at mar. 28 oct. 2014 12:43:35 CET
Is this correct? (y/N) y

At that point, a pinentry dialog should ask you for the PIN, and the smartcard will sign the subkey. Repeat for all the subkeys (in my case, 3 and 4). If you ask for PIN confirmation at every signature, the pinentry dialog should reappear each time.

When you're done, check that everything is ok, and save:

gpg> save
corsac@scapa: gpg --list-keys 71ef0ba8
gpg: checking the trustdb
gpg: public key of ultimately trusted key AF2195C9 not found
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   4  signed:   5  trust: 0-, 0q, 0n, 0m, 0f, 4u
gpg: depth: 1  valid:   5  signed:  53  trust: 5-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2013-12-28
pub   4096R/71EF0BA8 2009-05-06
uid                  Yves-Alexis Perez <corsac@corsac.net>
uid                  Yves-Alexis Perez (Debian) <corsac@debian.org>
sub   4096g/36E31BD8 2009-05-06 [expires: 2014-10-28]
sub   2048R/CC0E273D 2012-10-17 [expires: 2014-10-28]
sub   2048R/A675C0A5 2012-10-27 [expires: 2014-10-28]
sub   2048R/D98D0D9F 2012-10-27 [expires: 2014-10-28]

Now that we have the new subkey definitions locally, we need to push them to the keyservers so other people get them too. In my case, I also need to push to the Debian keyring keyserver so the update gets picked up at the next keyring refresh:

corsac@scapa: gpg --send-keys 71ef0ba8
gpg: sending key 71EF0BA8 to hkp server subkeys.pgp.net
corsac@scapa: gpg --keyserver keyring.debian.org --send-keys 71ef0ba8
gpg: sending key 71EF0BA8 to hkp server keyring.debian.org

The main smartcard is now back in its safe place. As far as I can tell, there's no operation needed on the daily smartcard (which only holds the subkeys), but you will need to refresh your public key on any machine you use it on before it gets the updated expiration date.
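On those machines, that's just a matter of refreshing the key from the keyserver, e.g.:

gpg --keyserver subkeys.pgp.net --refresh-keys 71ef0ba8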

Yves-Alexis@14:18:12
