Andrew Poelstra has been working as a cryptographer in the Bitcoin and cryptocurrency space for the last six years. He is best known for his off-brand use of digital signatures in the form of "scriptless scripts". Currently he is the Director of Research at Blockstream.

Monerokon Madness: Schnorr Schnadness
Andrew Poelstra, Blockstream
https://www.youtube.com/watch?v=L6KqkrP_nU4

Abstract

Schnorr signatures, like the ed25519 signatures used in Monero, have a simple structure which allows multiple parties to jointly compute a single signature. These signatures can be validated in the ordinary way against a joint public key computed as a combination of individual signers’ keys and for which no individual signer knows the complete secret key. However, while such multi-signatures are simple to describe mathematically, they require a more complex security model able to handle adversarial behavior by individual signers, such as “rogue key attacks”. Further, practical implementations of multiparty protocols such as this require careful consideration of uniform random number generation, denial-of-service attacks, replay attacks and more. In this talk, we discuss our work addressing these challenges using the secp256k1 elliptic curve (which is analogous to ed25519), as well as the additional challenges presented by the generalization from n-of-n multisignatures to k-of-n threshold signatures.

Transcription

As Brandon mentioned, I'm here to talk about the challenges of implementing threshold signatures, the challenges of implementing Schnorr signatures. So I'll give a little bit of background about what that means and then just sort of go through a series of problems that I have encountered over the last couple of years. They didn't come all at once, which would be terrible; every couple of months there was some new, weird, surprising thing.

To start, let me explain - where am I pointing this? There we go. Let me explain what a Schnorr signature is. A Schnorr signature is a digital signature algorithm, and digital signature algorithms are at the heart of cryptocurrencies of all shapes and sizes. Bitcoin uses something called ECDSA. There is a proposal for Bitcoin to replace ECDSA, or to add Schnorr signatures to Bitcoin, as part of a protocol called Taproot. But lots of other currencies, such as Monero, have had Schnorr signatures pretty much forever. Monero, by virtue of using the ed25519 library and curve, has been using Schnorr signatures since back when it was Bytecoin. Although I should maybe give a caveat that the Monero-style signatures are actually part of a much larger scheme (RingCT), which Aravind talked about in a fair bit more detail.

My next slide here talks about how simple Schnorr signatures are. And I need to qualify that with: this is simple in the sense that if you look at how RingCT is implemented, you can find some equations where, if you delete a whole bunch of terms, you will be left with this little thing, basically. In practice sometimes this is more complicated because it's embedded in a larger context, but at the heart of it, a Schnorr signature is just these three formulas here - or rather, these are the instructions for signing, for producing a signature. You choose this thing called a nonce, a uniformly random value; I'm going to call it 'k'. You compute this hash 'e'. This hash is a hash of your transaction, it's a hash of your public key, it's a hash of what is called your public nonce, which is derived from 'k'. Everything that you know at the time of signing, you throw into this hash 'e'. Then you do this very simple equation: you take your secret nonce, you take your secret key 'x' multiplied by this hash, and you add them together. Or subtract them, or whatever you want to do. And what you're left with is this little linear equation.
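To make that structure concrete, here is a minimal single-signer sketch in Python over a toy multiplicative group. This is purely illustrative and the parameters are tiny and insecure; real systems use elliptic curves such as ed25519 or secp256k1, and the hash encoding here is made up for the example.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
# Tiny, illustrative parameters -- NOT secure.
p, q, g = 2039, 1019, 4

def H(*parts):
    """Hash everything known at signing time down to a scalar mod q."""
    data = b"|".join(str(part).encode() for part in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, P, msg):
    k = secrets.randbelow(q - 1) + 1      # uniformly random nonce k
    R = pow(g, k, p)                      # public nonce derived from k
    e = H(R, P, msg)                      # hash of everything the signer knows
    s = (k + e * x) % q                   # the little linear equation
    return R, s

def verify(P, msg, R, s):
    e = H(R, P, msg)
    return pow(g, s, p) == (R * pow(P, e, p)) % p

x = secrets.randbelow(q - 1) + 1          # secret key
P = pow(g, x, p)                          # public key
assert verify(P, b"an example message", *sign(x, P, b"an example message"))
```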

This is pretty simple by itself. What is cool is that because this is a linear equation - and this is why I am being sloppy about pluses and minuses - if you have multiple signers all trying to jointly produce a signature, or jointly produce, say, a RingCT proof, this is very easy. Here's the transition from single signers to multiple signers: all I have done is add some indices. In the multi-signer case, everyone independently chooses their nonce 'ki'. They combine those, they compute this hash of whatever everybody knows together, they each independently produce a signature here at the bottom, and then they combine those signatures. So you're basically adding a combining step.
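As a sketch of that naive combination, reusing the toy group from the sketch above (this is the insecure version; the rogue-key discussion below explains why keys cannot actually be combined this naively):

```python
import hashlib
import secrets

# Same toy group and hash as the single-signer sketch; still NOT secure.
p, q, g = 2039, 1019, 4
H = lambda *parts: int.from_bytes(hashlib.sha256(
    b"|".join(str(part).encode() for part in parts)).digest(), "big") % q

msg = b"an example message"
keys = [secrets.randbelow(q - 1) + 1 for _ in range(3)]    # secret keys x_i
pubs = [pow(g, x, p) for x in keys]                        # public keys

# Joint public key: naively combine everyone's keys (multiplication here plays
# the role of point addition on a curve).
P = 1
for X in pubs:
    P = (P * X) % p

# Step 1: everyone independently chooses a nonce k_i; combine the public nonces.
nonces = [secrets.randbelow(q - 1) + 1 for _ in range(3)]
R = 1
for k in nonces:
    R = (R * pow(g, k, p)) % p

# Step 2: everyone signs against the shared hash; combine the partial signatures.
e = H(R, P, msg)
s = sum((k + e * x) % q for k, x in zip(nonces, keys)) % q

# The combined (R, s) verifies as an ordinary Schnorr signature under the joint key.
assert pow(g, s, p) == (R * pow(P, e, p)) % p
```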

Producing Schnorr signatures is a two-step process; after each step you add a combining step, and there is your transition from single signatures to multi-signatures. So that's great, and that extends to other, more complicated things such as Bulletproofs - and therefore Bulletproofs-based RingCT, or Omniring, or whatever way you might want to produce proofs in Monero - this kind of simplicity carries over. And this is very tempting as a protocol designer. And then also, if you care about provable security, which you probably should, the provable security story for Schnorr is quite a bit better than for ECDSA, and I'm not going to go too much into that.

Let me start by talking about a couple of issues just with single Schnorr signatures, and then I am going to talk about multi-signatures. And then if I have time I am going to talk about threshold signatures, and we will see how the problems keep layering up in practice.

And the first problem we have comes from this first line here. I will use a pointer - or maybe I won't, because I think the livestreamers don't see the pointer? I'm not sure about that. This dollar sign here, this is cute, this is a pun used in academic cryptography. The idea is you're flipping a coin to generate a random bit, a zero or a one. A dollar is 100 coins, and you need 100 bits, 256 bits, whatever - hundreds of bits, or dollars, so you're flipping dollars' worth of pennies. So you're generating a uniformly random thing here. In practice, it's difficult to generate uniform randomness, especially if you've got something like a hardware token, which has limited access to entropy; maybe it doesn't have any storage, maybe it has CPU speed limitations, whatever. And the issue here is that the randomness really does need to be uniformly random. If you're biased by even a single bit, or even in principle by less than a bit - maybe the top bit of your nonce is zero a bit more often than it should be - in principle, and actually in practice, that is enough for people to extract your secret keys, given enough signatures.

There was a paper in the Bitcoin space by Nadia Heninger and Joachim Breitner recently, a couple of months ago, that actually used such a theoretical attack to extract a bunch of secret keys from old Bitcoin transactions that had somehow produced biased nonces. I encourage you to look up this paper; there are also some interesting forensics about how this happened. But basically there were a few old signing devices or code bases or something that were biasing their nonces by a few bits, or maybe a lot of bits. They weren't actually reusing nonces or doing anything completely horrible beyond the pale, they just had biased nonces, and given enough signatures, this was enough to extract keys.

Even if you could somehow solve this problem of getting unbiased randomness - if you're getting this from a hardware token or something, ideally you'd want to verify that the hardware token is actually producing unbiased randomness. So let me elaborate on that a little bit. There's a scheme called RFC6979, which is very simple. Well, in principle it is simple; in practice it involves way too much hashing and extra CPU time. What you do is you take your secret key, you take your message, you throw those into a pseudorandom function, and you take the output of that. Since it's a pseudorandom function, you can assume that the output is uniformly random. That's a cryptographic assumption, but it's a better assumption than assuming that reading bits off an unconnected diode or something is unbiased.
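The idea looks roughly like this (a simplified sketch, not the actual RFC 6979 construction, which runs an HMAC-based DRBG loop with retries; the group order is the toy one from the earlier sketches):

```python
import hashlib
import hmac

q = 1019  # order of the toy group from the earlier sketches

def deterministic_nonce(secret_key, msg):
    # Derive the nonce from a PRF keyed by the secret key, applied to the message.
    # If the PRF output is (assumed) uniform, so is the nonce.
    mac = hmac.new(secret_key.to_bytes(32, "big"), msg, hashlib.sha256).digest()
    return 1 + int.from_bytes(mac, "big") % (q - 1)

# Same key and same message always give the same nonce, hence the same signature.
assert deterministic_nonce(42, b"msg") == deterministic_nonce(42, b"msg")
```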

Importantly, this is in principle verifiable, since you are just running a deterministic algorithm starting from some secrets and the message and so on. But in practice it's next to impossible to verify. If you had a hardware wallet producing randomness this way, and you wanted, as a user, a guarantee that the hardware wallet was always doing this and not accidentally or deliberately biasing something - which it could do in undetectable ways - you would need your hardware wallet to somehow produce a zero-knowledge proof that it went through this algorithm. So that kind of sucks; it's very heavy, there's no code for this, and it's unclear whether existing hardware tokens today are powerful enough to do this. So, okay, RFC6979: that's great, it's a total solution, but it's unverifiable.

So here's a way to get verifiability, using this technique called 'sign-to-contract'. What 'sign-to-contract' does is it takes this equation - and this will be the last equation, sort of; I'm going to repeat some earlier equations, but I'm going to try to make this accessible. Basically you, the host, the user of your hardware wallet, provide some randomness, hand it to the hardware wallet, and the hardware wallet mixes this randomness into its nonce. This is great: it means that if you are able to produce uniform randomness, or if the hardware token is able to produce uniform randomness, then the result will be uniform. And in particular, if the hardware wallet is trying to undermine you in some way, then you undermine the undermining. Which is awesome, except here we're going to run into the first of many implementation difficulties with trying to do cool, simple ideas. If you naively combine RFC6979 - the hardware wallet goes through this deterministic process, and then you give it some randomness to mix in - you can extract secret keys from the hardware wallet. At least if you implement this in a naive way.
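In symbols, a simplified sketch of the idea, writing $\rho$ for the host-provided randomness and using additive curve notation: the wallet starts from its nonce $k$ with public nonce $R = kG$, and signs with the tweaked nonce

\[
k' = k + H(R \,\|\, \rho), \qquad R' = R + H(R \,\|\, \rho)\,G,
\]

so the host can check that $R'$ really commits to its randomness $\rho$, and if either party's contribution is uniform, the final nonce is uniform.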

In signature folklore we have this meme, "never reuse nonces", but actually you can't ever use related nonces either. If your hardware wallet is producing the same base nonce and then offsetting it with some host-provided randomness, and it offsets that with different host-provided randomness at different times, those are related nonces: the difference between them is the difference of the tweaks. It turns out there's a stronger meme that should be out there: "never reuse related nonces". Importantly, you're normally allowed to reuse your nonce as long as you sign the same message, because you're producing the same signature over and over. But it turns out that if you're using related nonces, even if you've got the same message, you're still going to leak your key. And that's what happens here.
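Concretely, in the naive combination, suppose the wallet reuses the same base nonce $k$ with two known tweaks $t_1$ and $t_2$ on the same message. The public nonces differ, so the challenge hashes $e_1 \neq e_2$ differ, and the two signatures

\[
s_1 = (k + t_1) + e_1 x, \qquad s_2 = (k + t_2) + e_2 x
\]

let anyone solve for the secret key:

\[
x = \frac{(s_1 - s_2) - (t_1 - t_2)}{e_1 - e_2}.
\]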

What you need to do to implement this in practice is: the hardware wallet needs to pre-commit to its... how does this thing work now?

The naive way to fix this would be to have you first provide the randomness that you want mixed into the hardware wallet's nonce. Then the hardware wallet, when doing the RFC6979 thing, will mix that in along with the secret key and message, and that way it will always be using a different base nonce for different tweaks. The problem there is that the hardware wallet still retains the ability to bias the nonce by grinding: just trying different possibilities until it gets one that's biased.

What you need to do is send the hardware wallet a pre-commitment to the tweak you want it to add. The hardware wallet mixes the pre-commitment to the tweak into the nonce it is going to use, it gives you the nonce it's going to use, then you provide the actual randomness, and then it mixes the randomness in. If you do it in that order - I think there are three or four steps - then you're great. So the lesson here is: don't be naive. The middle lesson here is like, ughh.
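Here is a rough sketch of that ordering, reusing the toy group from before. The derivations are illustrative only; the point is when each value is revealed, not the exact formulas a real implementation would use.

```python
import hashlib
import hmac
import secrets

# Toy group from the earlier sketches; still illustrative and NOT secure.
p, q, g = 2039, 1019, 4
Hs = lambda *parts: int.from_bytes(hashlib.sha256(
    b"|".join(str(part).encode() for part in parts)).digest(), "big") % q

seckey, msg = 42, b"an example message"

# Step 1: the host commits to the randomness it wants mixed in, without revealing it.
rho = secrets.token_bytes(32)
rho_commitment = hashlib.sha256(rho).digest()

# Step 2: the wallet derives its base nonce from its key, the message AND the host's
# commitment, so a different commitment always yields a different base nonce;
# it then reveals the corresponding public nonce.
k = 1 + int.from_bytes(hmac.new(seckey.to_bytes(32, "big"),
        msg + rho_commitment, hashlib.sha256).digest(), "big") % (q - 1)
R = pow(g, k, p)

# Step 3: the host reveals its randomness.
# Step 4: the wallet checks it against the commitment, tweaks its nonce, and signs.
assert hashlib.sha256(rho).digest() == rho_commitment
t = Hs(R, rho)
k_final = (k + t) % q                 # nonce actually used to sign
R_final = (R * pow(g, t, p)) % p      # the host can recompute and check this
```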

The way this was discovered - and this is going to be a pattern - was that I wrote an API to do this "sign-to-contract" thing. I thought, "this is great, I'm eliminating a nonce side channel, this is awesome". My colleague Jonas Nick looked at this and he said, "I can extract keys with this". And I said, "ughh, every time I write an API you can extract keys with it". And it's scary, because intuitively I'm taking a public nonce and I'm providing a public tweak - all the communication between the hardware wallet and the host here is completely public. So it kind of caught me off guard, and then I learned the lessons that I put here in parentheses and everything is good. Don't be naive.

Moving on. Let's talk about multi-signatures. We can sort of think of that nonce thing as a precursor to multi-signatures: we're going from having a single entity producing a signature, to having a hardware wallet and a host jointly producing a signature. And that's where things start to go a little bit off the rails. Intuitively, the reason we love Schnorr multi-signatures is exactly what I said on the first slide. We have this equation here, which I copied from the first slide, except I changed the minus to a plus. Everybody chooses a random nonce. Step one: everyone adds their nonces together and hashes up the result. Step two: everyone produces signatures and adds those together. Great, you have a multi-signature. No thought required, no crazy new protocols, no extra crypto, no multiple steps, no repeats, no additional rounds of communication.

This is a bit more subtle than I'm making it seem. Intuitively you could literally just add all the different nonces together, and add the signatures together, and you would get a single signature for the sum of everyone's keys. This is dangerous. The reason is that if you just start with a bunch of different signers' public keys and add them together to get a joint public key - which is how the scheme actually works - then one person, the last person to contribute their key, could think of a key, subtract everyone else's keys from it, and say, "oh, this is my key, I promise". Then when you add them all together, they've cancelled out everyone else's keys, so now there is a joint key that is entirely controlled by one person. This is called a "rogue-key attack".
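In symbols: if the joint key is just the sum of the announced keys, the last participant can pick a key $X'$ whose secret they know and announce

\[
X_n = X' - \sum_{i=1}^{n-1} X_i,
\]

so that the "joint" key $X_1 + \dots + X_n = X'$ is entirely under their control.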

To prevent this you need to somehow re-randomize the keys in a certain way, so that if somebody tries to produce their key as a sum of other people's keys, this re-randomization will undermine them. We had a way to do this initially where basically you would take your key, hash it, and mix that hash into your key. It turns out this is broken by something called Wagner's algorithm. Which is like, ughh. What Wagner's algorithm lets you do, as an attacker, is this: if you're allowed to just keep generating random numbers over and over, you can find a whole bunch of them that add to zero. So you can take a whole bunch of 256-bit hashes and find a large set of them - but not very large, like 500, maybe - and with not a lot of work, like 2^60 or so, you can find a bunch that add to zero, or add to one, or add to whatever you might want. This is tractable. 2^60 is tractable. 2^128 for collisions is not tractable, 2^256 for other things is not tractable, but this is tractable.

What this means is that if you let an attacker just keep choosing keys, and every key gets its own randomness, they can somehow still choose their key so that when you add up all the randomized keys, it still adds up to their original key. Very frustrating. So you have to mix randomness from every key into every other key. In practice, you hash up the whole list of keys, and then for each key you hash that list along with the individual key, and that's how you get your randomness.
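Roughly, as in MuSig: with $L$ the list of all participants' keys, each key gets the coefficient

\[
a_i = H(L \,\|\, X_i), \qquad \tilde X = \sum_i a_i X_i,
\]

so every participant's randomizer depends on everyone else's key as well as their own.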

Also, it turns out there's an extra step here. I'm saying add the nonces and add the signatures, but it turns out everybody has to pre-commit to their nonce before revealing it, before they can add them. And the reason, again, is that otherwise you can use Wagner's algorithm on the message hashes: an attacker can choose their nonces in such a way that they get a free signature out of this. They can start a whole bunch of parallel signing sessions with the people they want to attack - like 511 signing sessions - and then they can get a 512th signature, by completing the 511 using carefully chosen nonces and Wagner's algorithm to get all of the existing signatures to add up to another one.

You have to pre-commit; that undermines this. You mix randomness from every key into every key; that undermines the more naive attack. There you go, and the resulting scheme is something called MuSig, which has an interesting history. We originally published MuSig, without this pre-commitment step, with a security proof that got through peer review. And this was actually quite heavily reviewed, not only by the journal referees, but also by everyone on our team writing the paper, and then several other teams doing competing things. It was only noticed when another team managed to find a proof that our proof couldn't be true - a weird "meta-proof" - and then they were like, "hmm, maybe we should look more closely". From there they were able to find an actual mistake in our proof, and still later they found an actual attack using Wagner's algorithm.

Fortunately, during this whole drama, our paper was being held up in the normal publishing review cycles, so we had time to fix this, and the final published version, the version that actually got published, did not have this bug. But it was a close call, and it's kind of scary that something this subtle got through. And the mistake in the proof was really subtle; it was a long and complicated proof. There we go, that's MuSig.

Moving on. Let's talk about implementing MuSig, now that we have a scheme that's secure. Here is the thing: if you mix RFC6979 and multi-signatures, again you can extract keys. In the host-and-hardware-wallet scheme, as long as you do things in the right order, you can make sure that when the hardware wallet is choosing its randomness it will use a different nonce no matter what input you give it, just by having everybody else pre-commit and then mixing the pre-commitments in. You can't do this with multi-signatures. Somebody has to go first when publishing their nonce, or when choosing their nonce, and that somebody doesn't know what other people's nonces will be, and those are going to get mixed into the message hash. It doesn't matter how many pre-commitment rounds you add. It doesn't matter how you reorder things. Basically, this is life: if you are deterministically generating your nonces, then it is possible for somebody to play the scheme forward - do a multi-signature with you, choose some nonce, and then do another multi-signature with you on the same message, same key and everything, but choose a different nonce this time - and the two resulting signatures together will be sufficient for them to extract your key. So you can't use deterministic nonces. Unless, maybe - and I'm not going to go into this - everybody uses deterministic nonces and everybody provides a zero-knowledge proof that they did so, so that nobody has room to tweak their own contributions. This is getting very complicated.
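The worked attack has the same shape as before: your deterministic nonce $k$ is fixed by your key and the message, but the counterparty's nonce feeds into the challenge, so they can get you to produce

\[
s = k + e_1 x \quad\text{and}\quad s' = k + e_2 x \qquad (e_1 \neq e_2)
\]

on the same message, and then compute $x = (s - s')/(e_1 - e_2)$.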

It'd be better to just not do deterministic nonces. Well, is it better? It's not really better, because if you need your hardware wallet to be generating random nonces without any sort of deterministic scheme, now you're back to all the old problems of being, in principle, unverifiable. Even if you extract the secret keys and break open the signatures, there's not going to be anything telling you what the nonce should have been. You have to worry about nonce biasing. You have to worry about deliberate nonce biasing, as well as hardware glitching, heat, and all the natural ways that bias occurs. You also have to worry about replay attacks. A natural way to do this, maybe, is if the hardware wallet has a counter, and it's just mixing the counter into a hash - doing the '6979' thing but always incrementing a counter. Then you have to worry about somebody somehow reverting the counter, or glitching you, or replaying, or something. Basically you either need an RNG, or you need some state that can't be reversed, and both of these are difficult with hardware wallets.

We need fresh randomness. No RFC6979. That's all I want to say about that. And we also still need to do the whole MuSig thing: everybody needs to pre-commit to their nonce. There are three rounds here: pre-commit to the nonce, reveal the nonce, reveal the signatures. And when I was implementing this, I said, "well, this sucks, but I already have a three-round protocol going on in some system that I'm developing, and I don't know the message until the last moment - can I at least have everybody share their nonces alongside sharing their contribution to the message? That way I can do something in parallel, and if anything goes wrong I can restart and do the next attempt in parallel with the old attempt, or something like that."
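Here is a sketch of those three rounds in the style of the earlier toy-group examples. It is illustrative only: real MuSig pins down the exact hash domains, key ordering, commitment format, and so on.

```python
import hashlib
import secrets

# Three-round MuSig-style signing sketch over the toy group from earlier.
# Illustrative only; NOT a real MuSig implementation and NOT secure.
p, q, g = 2039, 1019, 4

def H(tag, *parts):
    data = b"|".join([tag] + [str(part).encode() for part in parts])
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

msg = b"an example message"
keys = [secrets.randbelow(q - 1) + 1 for _ in range(3)]
pubs = [pow(g, x, p) for x in keys]

# Key aggregation: each key's coefficient depends on the whole key list.
coeffs = [H(b"agg", sorted(pubs), X) for X in pubs]
P_agg = 1
for X, a in zip(pubs, coeffs):
    P_agg = (P_agg * pow(X, a, p)) % p

# Round 1: everyone picks a fresh random nonce and shares only a hash commitment.
nonces = [secrets.randbelow(q - 1) + 1 for _ in range(3)]
pub_nonces = [pow(g, k, p) for k in nonces]
commitments = [hashlib.sha256(str(R).encode()).digest() for R in pub_nonces]

# Round 2: everyone reveals their public nonce; all commitments are checked.
for R, c in zip(pub_nonces, commitments):
    assert hashlib.sha256(str(R).encode()).digest() == c
R_agg = 1
for R in pub_nonces:
    R_agg = (R_agg * R) % p

# Round 3: everyone produces a partial signature; the partial signatures are added.
e = H(b"sig", R_agg, P_agg, msg)
s = sum((k + e * a * x) % q for k, a, x in zip(nonces, coeffs, keys)) % q

# The result verifies as an ordinary Schnorr signature under the aggregated key.
assert pow(g, s, p) == (R_agg * pow(P_agg, e, p)) % p
```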

I wanted to let my signers contribute their pre-commitments to the nonces before knowing the message. And this seemed very straightforward; we changed the API a little bit to allow this to happen. And then it turned out, two days ago - this is after two years of iteration on this, by the way - two days ago, Jonas sent me a message: "Oh, I found a Wagner's algorithm attack on this. You have to know the message before you choose your nonce." Okay, fine.

But it's frustrating that this kind of came at me out of nowhere, although there's a lesson here. Aravind was talking about the value of provable security. When I tried to reorder the nonce sharing with the message choosing, I was actually violating the protocol as described in the paper, and therefore the security proof in our paper no longer applied. And it turned out in this case there was actually an attack. There's a lesson there about not re-ordering things in papers: if you implement a different protocol than is in the paper, then you implemented a different protocol, and its security proof does not apply to you. I guess I got cocky about this, and I got burned, so that sucks.

Let's move on to threshold signatures. I'm going to kind of rush through this, but there's actually not a whole lot to say. The first thing I want to say is that everything I just said for multi-signatures applies - but let me first define a threshold signature versus a multi-signature, because we tend to be a bit sloppy in this space about the distinction. With multi-signatures, you have a whole bunch of people contributing to a signature, and they produce a single signature. And in the academic setting your verifiers actually know the whole list of keys, they know the original participants, and they just have a single signature to verify. With a threshold signature, maybe you need fewer than everybody: maybe you have ten participants and you want any eight of them to be able to produce a signature. This is called a threshold signature.

Schnorr threshold signatures are super easy, because we have this nice linear structure to our equations. There's a key setup round, where everybody secret-shares their key and sends out the shares - they need private communication channels for this, so that's a bit involved. Then everybody replaces their individual key with the sum of their own share and everybody else's shares. Then at signing time, you get your eight people together, whichever eight they happen to be, and they're able to just do the multi-signature algorithm: they all choose nonces, you add those up; they take their keys and weight them by these things called Lagrange multipliers, which change depending on which set of eight participants you have; they produce signatures, you add the signatures together, and everything's great. Super simple.
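Roughly: if $S$ is the set of indices of the eight signers who show up and $x_i$ are their secret shares, each share is weighted by the Lagrange coefficient for interpolating at zero,

\[
\lambda_i = \prod_{j \in S,\ j \neq i} \frac{j}{j - i} \pmod q,
\]

so that $\sum_{i \in S} \lambda_i x_i$ equals the joint secret key, and the partial signatures $s_i = k_i + e\,\lambda_i x_i$ simply add up to a valid signature.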

There are more nonce issues here. One is that you really have to choose a new nonce for every signature attempt. The tricky thing is that even if you're signing with the same key, the same set of signers, the same message, and the same everything - you've got zero-knowledge proofs and all that good stuff - if you sign with a different subset of participants, the resulting combined nonce might be different, in which case you could have key extraction as well. Again: choose your nonces very carefully. Which means unverifiably, and with great difficulty.

And then, another issue with threshold signatures: in principle, the point of threshold signatures is that you have resilience against some parties not being available at signing time. But in practice, probably you do have all your parties available, and ideally you could say, "well, if I only need eight participants to produce this signature, with ten people contributing to the key, ideally all ten can contribute, and if two of them are screwing around somehow we can just boot them out", and then that's great. The reason we have a threshold signature is to protect not only against people being unavailable, but also against adversarial behavior. It turns out it's pretty involved to do that.

First off, it's difficult to recover gracefully. Basically you need to figure out who was screwing around, kick them out, and then restart the protocol. Determining who was dishonest is hard; you need to use this thing called publicly verifiable secret sharing. I thought I'd have some clever scheme to make that simpler using signatures and stuff. It turned out that didn't work in the case where the adversarial behavior is somebody just contributing to the first half of the signature and then disappearing. The reason is that if you add requirements for everybody to do things in a valid way, and they just don't do it, that's difficult to detect and difficult to provide evidence of. So there are a whole bunch of technical difficulties there, but I'm not going to go into them.

And then, one final minor note. I've been talking about how important provable security is and how I've gotten burned every time I try to escape from it. The thing is, for threshold signatures there's this paper saying you don't get provable security without this whole extra machinery, and the reason you don't get provable security is because the keys aren't random. And I was like, "well, does it matter if your secret keys aren't random? I mean, it matters if the nonces aren't random, but the secret keys stay fixed, so maybe it's okay if they're biased". And then I looked at the paper, and it's not even about the secret keys, it's about the public keys. And like, obviously that doesn't matter - but I've been burned a lot saying "obviously things don't matter", so maybe I actually have to deal with all of this.

I mean, the paper is telling you what to do, but the point is that this stuff is difficult and there are just lots of little things to think about. And that's all I wanted to say. Thank you.