/dev/random

From Infogalactic: the planetary knowledge core

In Unix-like operating systems, /dev/random is a special file that serves as a blocking pseudorandom number generator. It allows access to environmental noise collected from device drivers and other sources.[1] Not all operating systems implement the same semantics for /dev/random.

Linux

Random number generation from kernel space was first implemented for Linux[2] in 1994 by Theodore Ts'o.[3] The implementation uses secure hashes rather than ciphers, to avoid legal restrictions that were in place when the generator was originally designed. It was also designed with the assumption that any given hash or cipher might eventually be found to be weak, so the design is durable in the face of any such weaknesses. Fast recovery from pool compromise is not considered a requirement, because the access needed to compromise the pool would also suffice for easier and more direct attacks on unrelated parts of the operating system.

In this implementation, the generator keeps an estimate of the number of bits of noise in the entropy pool, from which random numbers are created. When read, the /dev/random device will return random bytes only within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness, such as one-time pads or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.[4] The intent is to serve as a cryptographically secure pseudorandom number generator, delivering output with entropy as large as possible. The authors suggest it for generating cryptographic keys for high-value or long-term protection.[4]
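As a sketch of this interface: reading either device is an ordinary file read, and the only semantic difference is whether the read may block (the helper name and byte counts below are illustrative, not part of any standard API):

```python
import os

def read_random_bytes(n, device="/dev/urandom"):
    """Read n random bytes from a kernel random device.

    /dev/urandom never blocks; on kernels where /dev/random still
    blocks, a read from it may stall until enough environmental
    noise has been gathered to cover the request.
    """
    with open(device, "rb") as f:
        return f.read(n)

# A 256-bit value, as might be used as a long-term cryptographic key.
key = read_random_bytes(32)
```

For portable code, `os.urandom(n)` is the usual wrapper around the same kernel source.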

A counterpart to /dev/random is /dev/urandom ("unlimited"[5]/non-blocking random source[4]), which reuses the internal pool to produce more pseudorandom bits. Reads from it never block, but the output may contain less entropy than a corresponding read from /dev/random. While /dev/urandom is still intended as a pseudorandom number generator suitable for most cryptographic purposes, some authors advise against using it[who?] to generate long-term cryptographic keys. In general, however, this concern is unfounded: once the entropy pool is unpredictable, drawing more output from it than its entropy estimate allows does not weaken its security.[6]

It is also possible to write to /dev/random, which allows any user to mix random data into the pool. Non-random data is harmless, because only a privileged user can issue the ioctl needed to increase the entropy estimate. The current amount of entropy and the size of the Linux kernel entropy pool are available in /proc/sys/kernel/random/ and can be displayed with, for example, cat /proc/sys/kernel/random/entropy_avail.

In March 2006, Gutterman, Pinkas, and Reinman published a detailed cryptographic analysis of the Linux random number generator[7] describing several weaknesses. Perhaps the most severe issue they report concerns embedded or Live CD systems, such as routers and diskless clients, for which the bootup state is predictable and the available supply of entropy from the environment may be limited. For a system with non-volatile memory, they recommend saving some state from the RNG at shutdown so that it can be included in the RNG state on the next reboot. In the case of a router for which network traffic represents the primary available source of entropy, they note that saving state across reboots "would require potential attackers to either eavesdrop on all network traffic" from when the router is first put into service, or obtain direct access to the router's internal state. This issue, they note, is particularly critical for a wireless router, whose network traffic can be captured from a distance and which may be using the RNG to generate keys for data encryption.

The Linux kernel supports several hardware random number generators. The raw output of such a device may be obtained from /dev/hwrng.[8]

With Linux kernel 3.16 and newer,[9] the kernel itself mixes data from hardware random number generators into /dev/random, weighted by a configurable entropy-quality estimate for each HWRNG. This means that no userspace daemon, such as rngd from rng-tools, is needed for that job. With Linux kernel 3.17+, the VirtIO RNG was modified to have a default quality above 0,[10] and as such is currently the only HWRNG mixed into /dev/random by default.

The entropy pool can be replenished by programs such as timer_entropyd, haveged, and randomsound. With rng-tools, hardware random number generators such as the Entropy Key can write to /dev/random. The programs dieharder, diehard, and ent can test these random number generators.[11][12][13][14]

In January 2014, Daniel J. Bernstein published a critique[15] of how Linux mixes different sources of entropy. He outlines an attack in which one entropy source, capable of monitoring the other sources, modifies its output to nullify their randomness. Consider the function H(x,y,z), where H is a hash function and x, y, and z are sources of entropy, with z being the output of a CPU-based malicious HRNG Z:

  1. Z generates a random value r.
  2. Z computes H(x,y,r).
  3. If the output of H(x,y,r) is equal to the desired value, output r as z.
  4. Else, repeat starting at 1.

Bernstein estimated that an attacker would need to repeat H(x,y,r) 16 times to compromise DSA and ECDSA. This is possible because Linux reseeds H on an ongoing basis instead of using a single high-quality seed.
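The attack above can be simulated with a toy model. The sketch below assumes SHA-256 as a stand-in for the mixing function H (the real kernel mixing differs); the malicious source forces the low four bits of the mixed output to zero, which takes about 2^4 = 16 attempts on average, matching the repetition count in Bernstein's estimate:

```python
import hashlib
import os

def H(x, y, z):
    # Stand-in for the kernel's mixing function: hash all sources together.
    return hashlib.sha256(x + y + z).digest()

def malicious_source(x, y, target_bits=4):
    """Malicious source Z that can observe the other sources x and y.

    Z retries random candidates r until the mixed output falls in a
    class Z controls: here, the low `target_bits` bits of the first
    output byte are zero. Expected cost: ~2**target_bits attempts.
    """
    while True:
        r = os.urandom(16)                               # step 1
        if H(x, y, r)[0] & ((1 << target_bits) - 1) == 0:  # steps 2-3
            return r
        # step 4: otherwise repeat

x, y = os.urandom(16), os.urandom(16)  # honest entropy sources
z = malicious_source(x, y)             # attacker-chosen "entropy"
assert H(x, y, z)[0] & 0x0F == 0       # 4 bits of the pool output are fixed
```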


Is there any serious argument that adding new entropy all the time is a good thing? The Linux /dev/urandom manual page claims that without new entropy the user is "theoretically vulnerable to a cryptographic attack",[16] but (as I've mentioned in various venues) this is a ludicrous argument—how can anyone simultaneously believe that

  • we can't figure out how to deterministically expand one 256-bit secret into an endless stream of unpredictable keys (this is what we need from urandom), but
  • we can figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.)?

FreeBSD

The FreeBSD operating system implements a 256-bit variant of the Yarrow algorithm, intended to provide a cryptographically secure pseudorandom stream, replacing a previous Linux-style random device. Unlike the Linux /dev/random, the FreeBSD /dev/random device never blocks. Its behavior is similar to the Linux /dev/urandom, and /dev/urandom on FreeBSD is linked to /dev/random.

Yarrow is based on the assumptions that modern PRNGs are very secure if their internal state is unknown to an attacker, and that PRNG security is better understood than entropy estimation. While entropy-pool-based methods are completely secure if implemented correctly, they can become less secure than well-seeded PRNGs if they overestimate their entropy. In some cases an attacker may have considerable control over the entropy; for example, a diskless server may get almost all of it from the network, rendering it potentially vulnerable to man-in-the-middle attacks. Yarrow places a lot of emphasis on avoiding any pool compromise and on recovering from one as quickly as possible. It is regularly reseeded; on a system with a small amount of network and disk activity, this happens after a fraction of a second.

FreeBSD also provides support for hardware random number generators, which will replace Yarrow when present.

OpenBSD

Since OpenBSD 5.1 (May 1, 2012) /dev/random and /dev/arandom use an algorithm based on RC4 but renamed, for licensing purposes, ARC4. While random number generation here uses system entropy gathered in several ways, the ARC4 algorithm provides a fail-safe, ensuring that a rapid and high quality pseudo-random number stream is provided even on a low entropy pool. The system automatically uses hardware random number generators (such as those provided on some Intel PCI hubs) if they are available, through the OpenBSD Cryptographic Framework.

As of OpenBSD 5.5 (May 1, 2014), the arc4random() call used for OpenBSD's random devices no longer uses ARC4 but ChaCha20.[17][18] NetBSD's implementation of the legacy arc4random() API has likewise been switched to ChaCha20.[19]

OS X and iOS

OS X uses 160-bit Yarrow based on SHA1.[20] /dev/random and /dev/urandom behave identically.[21] Apple's iOS also uses Yarrow.[22]

Other operating systems

/dev/random and /dev/urandom are also available on Solaris,[23] NetBSD,[24] Tru64 UNIX 5.1B,[25] AIX 5.2[26] and HP-UX 11i v2.[27] As with FreeBSD, AIX implements its own Yarrow-based design; however, AIX uses considerably fewer entropy sources than the standard /dev/random implementation and stops refilling the pool when it estimates that it contains enough entropy.[28]

In Windows NT, similar functionality is delivered by ksecdd.sys, but reading the special file \Device\KsecDD does not work as in UNIX. The documented methods to generate cryptographically random bytes are CryptGenRandom and RtlGenRandom.

While DOS does not natively provide such functionality, an open-source third-party driver, noise.sys,[29] provides similar behavior: it creates two devices, RANDOM$ and URANDOM$ (also accessible as /DEV/RANDOM$ and /DEV/URANDOM$), from which programs can read random data.

The Cygwin compatibility layer on Windows provides implementations of both /dev/random and /dev/urandom, which can be used in scripts and programs.

EGD as an alternative

A software program called EGD (entropy gathering daemon) is a common alternative for Unix systems that do not support the /dev/random device. It is a user-space daemon, which provides high-quality[citation needed] cryptographic random data. Some cryptographic software such as OpenSSL, GNU Privacy Guard, and the Apache HTTP Server support using EGD when a /dev/random device is not available.

EGD,[30] or a compatible alternative such as PRNGD,[31] gathers pseudo-random entropy from various sources, processes it to remove bias and improve cryptographic quality, and makes it available over a Unix domain socket (/dev/egd-pool is a common choice) or over a TCP socket. The entropy gathering usually entails periodically forking subprocesses to query attributes of the system that are likely to change frequently and unpredictably, such as CPU, I/O, and network usage, as well as the contents of various log files and temporary directories.

EGD communicates with other programs that need random data using a simple protocol. The client connects to an EGD socket and sends a command, identified by the value of the first octet:

  • command 0: query the amount of entropy currently available. The EGD daemon returns a 4-byte number in big-endian format representing the number of random bytes that can currently be satisfied without delay.
  • command 1: get random bytes, no blocking. The second byte in the request tells EGD how many random bytes of output it should return, from 1 to 255. If EGD does not have enough entropy to immediately satisfy the request, then fewer bytes, or perhaps no bytes, may be returned. The first octet of the reply indicates how many additional bytes, those containing the random data, immediately follow in the reply.
  • command 2: get random bytes, blocking. The second byte tells EGD how many random bytes of output it should return. If EGD does not have enough entropy, it will wait until it has gathered enough before responding. Unlike command 1, the reply starts immediately with the random bytes rather than a length octet, as the total length of returned data will not vary from the amount requested.
  • command 3: update entropy. This command allows the client to provide additional entropy to be added to EGD's internal pool. The next two bytes, interpreted as a 16-bit big-endian integer, indicate how many bits of randomness the caller claims to be supplying. The fourth byte indicates how many additional bytes of source data follow in the request. The EGD daemon may mix in the received entropy and returns nothing.
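The framing above is simple enough to construct by hand. The helpers below encode requests and decode replies for commands 0-2 as pure byte manipulation, with no socket I/O; the function names are illustrative, not part of any EGD library:

```python
import struct

def egd_entropy_count_request():
    # Command 0: ask how many bytes can be served without blocking.
    return bytes([0])

def egd_parse_entropy_count(reply):
    # Command 0 reply: a 4-byte big-endian count of available bytes.
    (count,) = struct.unpack(">I", reply[:4])
    return count

def egd_random_request(n, blocking=False):
    # Command 1 (non-blocking) or 2 (blocking): request 1-255 bytes.
    if not 1 <= n <= 255:
        raise ValueError("EGD serves 1-255 bytes per request")
    return bytes([2 if blocking else 1, n])

def egd_parse_nonblocking_reply(reply):
    # Command 1 reply: a length octet, then that many random bytes
    # (possibly fewer than requested). A command 2 reply has no
    # length octet, since the daemon blocks until it can return
    # exactly the amount requested.
    return reply[1 : 1 + reply[0]]
```

These could be paired with a `socket.socket(socket.AF_UNIX)` connection to /dev/egd-pool to talk to a running daemon.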

References

  1. (citation unavailable)
  2. (citation unavailable)
  3. (citation unavailable)
  4. random(4) – Linux Programmer's Manual – Special Files
  5. (citation unavailable)
  6. https://media.ccc.de/v/32c3-7441-the_plain_simple_reality_of_entropy#video&t=1262
  7. (citation unavailable)
  8. (citation unavailable)
  9. https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=be4000bc4644d027c519b6361f5ae3bbfc52c347
  10. https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=34679ec7a0c45da8161507e1f2e1f72749dfd85c
  11. http://www.vanheusden.com/te/timer_entropyd-0.1.tgz
  12. https://code.google.com/p/dieharder/
  13. http://stat.fsu.edu/pub/diehard/
  14. https://www.gnu.org/software/hurd/user/tlecarrour/rng-tools.html
  15. http://blog.cr.yp.to/20140205-entropy.html
  16. man-pages/random.4 at revision 9dc53e71c24ab77d682dffbd204f94211161905c, line 68
  17. (citation unavailable)
  18. (citation unavailable)
  19. (citation unavailable)
  20. http://www.opensource.apple.com/source/xnu/xnu-1456.1.26/bsd/dev/random/
  21. random(4) – Darwin and OS X Kernel Interfaces Manual
  22. https://www.apple.com/br/ipad/business/docs/iOS_Security_Oct12.pdf
  23. (citation unavailable)
  24. rnd(4) – NetBSD Kernel Interfaces Manual
  25. (citation unavailable)
  26. (citation unavailable)
  27. (citation unavailable)
  28. (citation unavailable)
  29. (citation unavailable)
  30. (citation unavailable)
  31. (citation unavailable)
