SSH Certificates and Keys in Apple T2 On-host

Tommy
October 14 2020

Key Takeaways

  • SSH certificates can be used with the Apple T2 chip on macOS as an alternative to external smart cards, authenticated with a fingerprint per session.
  • The T2 chip adds an extra security layer by generating and storing private keys in the Secure Enclave.
  • The CA key can be kept on an external smart card and used only to sign for access for a limited period - again limiting exposure.

Introduction

Over the past few days I have been going down a deep, deep rabbit hole of SSH proxy jumping and SSH certificates combined with smart cards.

After playing around with smart cards for SSH, I realized that external smart cards such as the YubiKey or Nitrokey are not the only lane to go down.

Mac computers come with a security chip called the T2. This chip hosts something Apple calls the Secure Enclave [1], in which you can store keys.

It will probably not be quite as secure as an external smart card, but it strikes a better balance with usability.

Keys in the T2 are permanently bound to the hardware of one host, so access needs to be signed on a per-host basis. As such, I would say the T2 and external smart cards complement each other.

Always having the key available will bring two additional vulnerabilities:

  • If compromised, the key is always available logically
  • Separation of equipment and key is not possible e.g. in a travel situation

With an automated central pubkey directory tied to an identity, the T2 can be put to better use in an enterprise setup.

Setting up a Private Key in Secure Enclave

While fiddling around I found sekey on GitHub [2]. The project seems abandoned, but it is the Secure Enclave that does the heavy lifting.

The short and easy setup is:

$ brew cask install sekey
$ echo "export SSH_AUTH_SOCK=$HOME/.sekey/ssh-agent.ssh" >> ~/.zshrc
$ echo "IdentityAgent ~/.sekey/ssh-agent.ssh" >> ~/.ssh/config
$ source ~/.zshrc

A keypair can now be generated in the secure enclave by:

$ sekey --generate-keypair SSH
$ sekey --list-keys

Now export the public key of the curve generated on-chip:

$ sekey --export-key <id> > id_ecdsa.pub

Using the trick from our recent venture into smart cards for signing keys, we can use PKCS#11 without compromising security [3]. In this case I use a Nitrokey:

$ brew cask install opensc
$ PKCS11_MODULE_PATH=/usr/local/lib/opensc-pkcs11.so
$ ssh-keygen -D $PKCS11_MODULE_PATH -e > ca.pub
$ ssh-keygen -D $PKCS11_MODULE_PATH -s ca.pub -I example -n zone-web -V +1h -z 1 id_ecdsa.pub
Enter PIN for 'OpenPGP card (User PIN)': 
Signed user key id_ecdsa-cert.pub: id "example" serial 1 for zone-web valid from 2020-10-14T20:26:00 to 2020-10-14T21:27:51
$ cp id_ecdsa-cert.pub ~/.ssh/
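
The signed certificate can be inspected at any time with standard ssh-keygen functionality, to verify the serial, principal and validity window before relying on it:

$ ssh-keygen -L -f id_ecdsa-cert.pub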

If you now try to ssh into a server using the given certificate authority as shown in the SSH-CA post [3], access should be granted with a fingerprint.
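
If the certificate is not picked up automatically, it can be referenced explicitly in the client configuration. A minimal sketch, assuming the agent socket from the setup above and a hypothetical host and user:

 Host server.example.com
     User some-user
     IdentityAgent ~/.sekey/ssh-agent.ssh
     CertificateFile ~/.ssh/id_ecdsa-cert.pub

CertificateFile accepts a private key held by an agent, so the key itself never has to exist on disk.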

A Word of Caution

Some T2 vulnerabilities have been demonstrated recently [4]. Make sure to include these in your risk assessment before using it. If you will not go down the smart card route, the T2 is still better than storing the key on disk.

[1] https://support.apple.com/guide/security/secure-enclave-overview-sec59b0b31ff/web
[2] https://github.com/sekey/sekey
[3] https://secdiary.com/2020-10-13-ssh-ca-proxyjump.html
[4] https://inks.cybsec.network/tag/t2

Tags: #architecture #ssh #apple #t2

SSH Certificate Signing, Proxying and Smartcards

Tommy
October 13 2020

Key Takeaways

  • SSH has a key-signing concept that, in combination with a smart card, provides a lean, off-disk process
  • An SSH CA makes it possible to manage access without a central point of failure
  • An SSH jump host is an easy way to tunnel sessions end-to-end encrypted while still maintaining visibility and control through a central point

Introduction

This post is an all-in-one capture of my recent discoveries with SSH. It is an introduction for a technical audience.

It turns out that SSH is ready for a zero trust and micro-segmentation approach, which is important for the management of servers. Everything described in this post is available as open source software, but some parts require a smart card or two, such as a YubiKey (or a Nitrokey if you prefer open source; I describe both).

I also go into detail on how to configure the CA key without letting the key touch the computer, which is an important principle.

The end result should be an architecture that provides a better overview of the infrastructure and a second logon factor independent of phones and OATH.

SSH-CA

My exploration started when I read a 2016 article by Facebook Engineering [1]. Surprised, but concerned about the configuration overhead and reliability, I set out to test the SSH-CA concept. Two days later all my servers were on a new architecture.

SSH-CA works roughly as follows:

                                       [ User generates key on Yubikey ]
                                                        |
                                                        |
                                                        v
[ ssh-keygen generates CA key ] --------> [ signs pubkey of Yubikey ]
                |                           - for a set of security zones
                |                           - for users
                |                                       |
                |                                       |
                |                                       v
                v                         pubkey cert is distributed to user
[ CA cert and zones pushed to servers ]     - id_rsa-cert.pub
  - auth_principals/root (root-everywhere)
  - auth_principals/web (zone-web)

The commands required in a nutshell:

# on client
$ ssh-keygen -t rsa

# on server, after copying id_rsa.pub from the client
$ ssh-keygen -C CA -f ca
$ ssh-keygen -s ca -I <id-for-logs> -n zone-web -V +1w -z 1 id_rsa.pub

# on client
$ cp id_rsa-cert.pub ~/.ssh/

Please refer to the next section for best-practice storage of your private key.

On the SSH server, add the following to the SSHD config:

TrustedUserCAKeys /etc/ssh/ca.pub
AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u

What was conceptually new for me was principals and authorization files per server. This is how it works:

  1. Add a security zone, like zone-web, during certificate signing - “ssh-keygen ... -n zone-web ...”. The local username does not matter
  2. Add a file per user on the SSH server, listing zone-web where applicable - e.g. “/etc/ssh/auth_principals/some-user” contains “zone-web” (see the sketch below)
  3. Log in with the same user as given in the zone file - “ssh some-user@server”
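
To make step 2 concrete, a minimal sketch assuming a local user some-user on the server who should accept certificates carrying the zone-web principal:

# on the server: map the zone-web principal to the local user some-user
$ sudo mkdir -p /etc/ssh/auth_principals
$ echo "zone-web" | sudo tee /etc/ssh/auth_principals/some-user

# on the client: the principal in the certificate is matched against that file
$ ssh some-user@server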

This is the same as applying a role instead of a name in the authorization system, while something that identifies the user is added to the certificate and logged when it is used.

This leaves us with a far better authorization and authentication scheme than the authorized_keys approach everyone uses. Read on for the details of generating the CA key securely.

Keeping Private Keys Off-disk

An important principle I have about private keys is that I would rather cross-sign and encrypt two keys than store one on disk. This was challenged by the SSH-CA design. Luckily I found an article describing the details of PKCS#11 with ssh-keygen [2]:

If you’re using pkcs11 tokens to hold your ssh key, you may need to run ssh-keygen -D $PKCS11_MODULE_PATH ~/.ssh/id_rsa.pub so that you have a public key to sign. If your CA private key is being held in a pkcs11 token, you can use the -D parameter, in this case the -s parameter has to point to the public key of the CA.

YubiKeys on macOS 11 (Big Sur) require yubico-piv-tool to provide a PKCS#11 driver. It can be installed using Homebrew:

$ brew install yubico-piv-tool
$ PKCS11_MODULE_PATH=/usr/local/lib/libykcs11.dylib

Similarly, the procedure for the Nitrokey is:

$ brew cask install opensc
$ PKCS11_MODULE_PATH=/usr/local/lib/opensc-pkcs11.so

Generating a key on-card for Yubikey:

$ yubico-piv-tool -s 9a -a generate -o public.pem

For the Nitrokey:

$ pkcs11-tool -l --login-type so --keypairgen --key-type RSA:2048

Using the exported CA pubkey and the private key on-card, a certificate may now be signed and distributed to the user.

$ ssh-keygen -D $PKCS11_MODULE_PATH -e  > ca.pub

$ ssh-keygen -D $PKCS11_MODULE_PATH -s ca.pub -I example -n zone-web -V +1w -z 1 id_rsa.pub
Enter PIN for 'OpenPGP card (User PIN)': 
Signed user key .ssh/id_rsa-cert.pub: id "example" serial 1 for zone-web valid from 2020-10-13T15:09:00 to 2020-10-20T15:10:40

The same concept applies to a user smart card, except that it is plug and play as long as you have gpg-agent running (a minimal agent setup is sketched after the diagram). When id_rsa-cert.pub (the signed certificate of e.g. a YubiKey) is located in ~/.ssh, SSH will find the corresponding private key automatically. The workflow will be something along these lines:

[ User smartcard ] -----------> [ CA smartcard ]
         ^          id_rsa.pub          |
         |                              | signs
         |------------------------------|
           sends back id_rsa-cert.pub
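
A minimal agent sketch for the user card, assuming GnuPG holds the card's authentication key; the socket path comes from gpgconf, and the exported public key is what the CA signs:

# let gpg-agent act as the SSH agent
$ echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
$ export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)

# export the card's public key so it can be signed by the CA
$ ssh-add -L > id_rsa.pub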

A Simple Bastion Host Setup

The other thing I wanted to mention was the -J option of ssh, ProxyJump.

ProxyJump allows a user to tunnel the session through a central bastion host end-to-end encrypted, confidentially and without risk of a man-in-the-middle (MitM).

Having end-to-end encryption for an SSH proxy may seem counter-intuitive since it cannot inspect the content. I believe it is the better option due to:

  • Terminating and inspecting sessions at the proxy would be a usability gain, but it would also be a security compromise in case the bastion host itself is compromised.
  • Network access and application authentication (and even authorization) go through a hardened point.
  • In addition, the endpoint should log what happens on the server to a central syslog server.
  • A bastion host should always be positioned in front of the server segments, not on the infrastructure perimeter.

A simple setup looks like the following:

[ client ]  ---> [ bastion host ] ---> [ server ]

Practically speaking, a standalone command looks as follows:

ssh -J jump.example.com dest.example.com

An equivalent .ssh/config will look like:

 Host j.example.com
     HostName j.example.com
     User sshjump
     Port 22

 Host dest.example.com
     HostName dest.example.com
     ProxyJump j.example.com
     User some-user
     Port 22

With the above configuration the user can compress the ProxyJump SSH-command to “ssh dest.example.com”.
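
The sshjump account on the bastion only needs to forward connections, never to run commands. A hardening sketch for the bastion's sshd_config under that assumption (the nologin path varies between distributions):

 Match User sshjump
     AllowTcpForwarding yes
     AllowAgentForwarding no
     X11Forwarding no
     PermitTTY no
     ForceCommand /usr/sbin/nologin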

Further Work

The basic design shown above has one property which is probably not acceptable in larger companies: someone needs to manually sign and rotate certificates. There are options mentioned in open sources, where the norm is to avoid having certificates on clients and to use an authorization gateway with SSO instead. This does, however, introduce a weakness in the chain.

I am also interested in using SSH certificates on iOS, but that has turned out to be unsupported in all the apps I have tested so far. It is, however, on the roadmap of Termius, hopefully in the near future. Follow updates on this subject in my Honk thread about it [4].

For a smaller infrastructure like mine, I have found the manual approach to be sufficient so far.

[1] Scalable and secure access with SSH: https://engineering.fb.com/security/scalable-and-secure-access-with-ssh/
[2] Using a CA with SSH: https://www.lorier.net/docs/ssh-ca.html
[3] Using PIV for SSH through PKCS #11: https://developers.yubico.com/PIV/Guides/SSH_with_PIV_and_PKCS11.html
[4] https://cybsec.network/u/tommy/h/q1g4YC31q45CT4SPK4

Tags: #architecture #ssh #proxyjump #smartcard #pkcs11 #ssh-ca #certificates #signing

Taking The Cyber Highground: Reverse Proxies to The Rescue

Tommy
October 07 2020

Key Takeaways

  • Monitoring the technology infrastructure is a key element for situational awareness in both security and IT operations.
  • A 2020 infrastructure should use a modern application layer reverse proxy such as Pomerium in front of all services. Leave all clients outside.
  • The threat landscape should be the focus when shaping a defendable infrastructure.

Disclaimer: If you have outsourced all your equipment and information to “the cloud”, this post is a sanity check of the relationship with your vendor. The primary audience of this post is everyone willing to invest in people and knowledge to provide a best possible defense for their people and processes, and the technology supporting them.

Introduction

I cannot start to imagine how many times Sun Tzu must have been quoted in board rooms around the world:

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

However much repeated, the message has not come across. Why is that? Because this is a hard problem to solve. It sits in the intersection between people, culture and technology.

If everyone used reverse proxies in a sensible way I would probably have a lot less to do at work. Time and time again it turns out that organisations do not have configuration control over their applications and infrastructure, and the reverse proxy is a central building block in gaining it. To a large extent everything comes down to logs and traceability when an incident occurs.

BeyondCorp and The Defendable Infrastructure

The lucky part of this hard-to-solve problem is that Google has already prescribed one good solution in its BeyondCorp whitepapers [1].

But this was, in some ways, described earlier by the Norwegian Armed Forces in its five architecture principles for a defendable infrastructure. These were published by the former head of its Critical Infrastructure Protection Centre [2]:

  1. Monitor the network for situational awareness
  2. A defender must be able to shape the battleground to have freedom of movement and to limit the opponent’s freedom of movement
  3. Update services to limit vulnerability exposure
  4. Minimize the infrastructure to limit the attack surface
  5. Traceability is important to analyze what happened

I know that Richard Bejtlich was an inspiration for the defendable infrastructure principles, so the books written by him are relevant [4,5].

Defendable infrastructure is a good term, and it is also used in a 2019 Lockheed Martin article which defines it well [3]:

Classical security engineering and architecture has been trying to solve the wrong problem. It is not sufficient to try to build hardened systems; instead we must build systems that are defendable. A system’s requirements, design, or test results can’t be declared as “secure.” Rather, it is a combination of how the system is designed, built, operated, and defended that ultimately protects the system and its assets over time. Because adversaries adapt their own techniques based on changing objectives and opportunities, systems and enterprises must be actively defended.

The development of these architecture principles happened before 2010, so the question remains how they apply in 2020. We may come back to the other principles in later posts, but the rest of this article will focus on monitoring from a 2020 perspective.

Monitoring - a Central Vantage Point

One thing that has developed since 2010 is our understanding of where to position monitoring capabilities, along with the more mainstream option of detection on endpoints. The historical focus of mature teams was primarily on the network layer. While the network layer is still important as an objective point of observation, the application layer has received more attention. The reason is the acceptance that this is often where exploitation happens, and that the capabilities have now emerged as commercial products.

With that in mind, a shift in the understanding of best practice for positioning reverse proxies has occurred as well. While the previous recommendation was to defend inside-out, the focus is now to defend outside-in.

Defending outside-in means taking control of what can be controlled: the application infrastructure. In practical terms this means positioning the reverse proxy in front of your server segment instead of in front of the whole network, clients included.

                                               [ Application A ]
[ Client on-prem ] \                                  |
                    +---> [ Reverse proxy ] ---> [  App gateway  ]
[ Client abroad  ] /              ^                    |
                         risk assessment       [ Application B ]

Previously, for some reason, we put the “client on-prem” on the other side of the reverse proxy, because we believed we could control what the user was doing. Today we know better. This is not a trust issue; it is a matter of prioritizing based on asset value and defending capacity.

A reverse proxy is also a central vantage point in your infrastructure. In a nutshell, if you are good at detecting security incidents at this point, you are in a good position to gain freedom of movement - such as channeling your opponent.

The modern reverse proxy has two integration capabilities that legacy proxies do not:

  • Single sign-on (SSO), which provides strong authentication and good identity management
  • Access control logic (Google calls this the access control engine)

In fact, in 2013 Google stated that it uses more than 120 variables for risk assessment in its access control logic for Gmail [6]. In comparison, most organisations today use three: username, password and, in half the instances, a token.

Every time you sign in to Google, whether via your web browser once a month or an email program that checks for new mail every five minutes, our system performs a complex risk analysis to determine how likely it is that the sign-in really comes from you. In fact, there are more than 120 variables that can factor into how a decision is made.

I imagine that Google uses factors like the following, in contrast to the bare username/password approach (they state some of these in their article):

  • Geo-location, with an algorithmic score of the distance from the last login location to the current one. A k-means-style distance could be a good fit.
  • Source ASN risk score
  • Asset subject to access
  • User role scored against asset subject to access
  • Device state (updated, antivirus installed and so on)
  • Previous usage patterns, like time of day
  • Other information about the behavioural patterns of relevant threats

Another nice feature of a reverse proxy set up this way is that it minimizes exposure and gives defenders the possibility to route traffic as they see fit. For instance, it would be hard for an attacker to differentiate between a honeypot and a production system in the first place. One could also challenge the user in cases of doubt, instead of plainly denying access as is sometimes done.

One challenge is which protocols need support. The clear ones are:

  • HTTP
  • SSH
  • Application gateways between micro-segments

I have scoped the details of micro-segmentation out of this post. Micro-segmentation is the basic idea of creating a fine mesh of network segments in the infrastructure so that no asset can communicate with another by default. Everything else is then routed through e.g. a gateway such as Pomerium, or in high-performance cases an application gateway, which may be a gateway for a specific binary protocol. The point is control of all activity between services: being able to shape and deny access in the terrain.
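
As a small illustration of the default-deny idea on a single host, an nftables sketch; the gateway address 10.0.0.10 and service port 8080 are hypothetical:

# /etc/nftables.conf (sketch): drop by default, only talk to the gateway
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif lo accept
        ct state established,related accept
        ip saddr 10.0.0.10 tcp dport 8080 accept    # gateway -> service
    }
    chain output {
        type filter hook output priority 0; policy drop;
        oif lo accept
        ct state established,related accept
        ip daddr 10.0.0.10 accept                   # service -> gateway
    }
}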

Even though this post is not about implementation, I will leave you with some good open source starting points: Pomerium is a reverse proxy with SSO capability, and the default capabilities of SSH take you far (SSH-CA and ProxyJump).

          -----------> [ syslog server ] <------------
         |                    |                       |
         |                    |                       |
 o       |                    |                       |
/|\ [ Client ] -------> [ example.com ] <-----> [ app001.example.com ]
/ \      |      https    - pomerium        |
         |       |       - SSH JumpHost    |
         |       |                         |
         |       |                         |
      [ HIDS ]   |-------------------> [ NIDS ]
    
       Figure 1: Conceptual Defendable Infrastructure Overview
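
To make the Pomerium piece a bit more concrete, a configuration sketch roughly along the lines of its documentation at the time; the hostnames mirror the figure, the identity provider values are placeholders, and the option names should be treated as approximate rather than authoritative:

# config.yaml (sketch)
authenticate_service_url: https://authenticate.example.com
idp_provider: google
idp_client_id: <client-id>
idp_client_secret: <client-secret>

policy:
  - from: https://app001.example.com
    to: http://10.0.0.20:8080
    allowed_domains:
      - example.com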

Now that a checkpoint is established in front of the infrastructure, the rest is a matter of traceability: taking the time to understand the data, gaining insight, and finally developing and implementing tactics against your opponents.
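
The traceability part can start as simply as forwarding logs from the proxy and application hosts to the central syslog server in the figure. A minimal rsyslog sketch, assuming a hypothetical collector name and the standard TCP port:

# /etc/rsyslog.d/60-forward.conf on example.com and app001.example.com
# @@ means TCP, a single @ would be UDP
*.* @@syslog.example.com:514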

Until next time.

[1] https://cloud.google.com/beyondcorp
[2] https://norcydef.blogspot.com/2013/03/tg13-forsvarbar-informasjonsinfrastrukt.html
[3] https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/cyber/LM-White-Paper-Defendable-Architectures.pdf
[4] The Tao of Network Security Monitoring: Beyond Intrusion Detection
[5] Extrusion Detection: Security Monitoring for Internal Intrusions
[6] https://blog.google/topics/safety-security/an-update-on-our-war-against-account/

Tags: #beyondcorp #zerotrust #microsegmentation #monitoring #traceability #defendable #architecture

Telemetry and Modern Workspaces: A Three-Headed Hound

Tommy
February 25 2018

Telemetry for cyber security is currently at a crossroads. While past methods have been effective because they were based on network monitoring, the current revolution in encryption and the distributed workspace makes it insufficient to rely solely on network monitoring. In this post we focus on the current challenges.

Tags: #telemetry #architecture

This blog is powered by cl-yag and Tufte CSS!