uclinux-h8/linux.git
crypto: brcm - explicitly cast cipher to hash type
Stefan Agner [Sat, 24 Mar 2018 11:02:42 +0000 (12:02 +0100)]
crypto: brcm - explicitly cast cipher to hash type

In the AES cases enum spu_cipher_type and enum hash_type have
the same values, so the assignment is fine. Make the enum type
conversion explicit with a cast.

This fixes two warnings when building with clang:
  drivers/crypto/bcm/cipher.c:821:34: warning: implicit conversion from
      enumeration type 'enum spu_cipher_type' to different enumeration
      type 'enum hash_type' [-Wenum-conversion]
                hash_parms.type = cipher_parms.type;
                                ~ ~~~~~~~~~~~~~^~~~
  drivers/crypto/bcm/cipher.c:1412:26: warning: implicit conversion from
      enumeration type 'enum spu_cipher_type' to different enumeration
      type 'enum hash_type' [-Wenum-conversion]
                hash_parms.type = ctx->cipher_type;
                                ~ ~~~~~^~~~~~~~~~~
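
For illustration, the warning is silenced by making the conversion explicit,
along these lines (a minimal sketch based only on the identifiers quoted in
the warnings above):

  hash_parms.type = (enum hash_type)cipher_parms.type;
  hash_parms.type = (enum hash_type)ctx->cipher_type;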

Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:24 +0000 (12:42 +0200)]
crypto: talitos - don't leak pointers to authenc keys

In talitos's aead_setkey we save pointers to the authenc keys in a
local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.
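
As an illustration of the pattern this series of fixes follows, a minimal
sketch with a hypothetical setkey function (error handling trimmed):

  static int example_aead_setkey(struct crypto_aead *tfm,
                                 const u8 *key, unsigned int keylen)
  {
          struct crypto_authenc_keys keys;
          int err;

          err = crypto_authenc_extractkeys(&keys, key, keylen);
          if (!err) {
                  /* ... program keys.authkey / keys.enckey into the device ... */
          }

          /* don't leave pointers to the authenc keys on the stack */
          memzero_explicit(&keys, sizeof(keys));
          return err;
  }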

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: qat - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:23 +0000 (12:42 +0200)]
crypto: qat - don't leak pointers to authenc keys

In qat_alg_aead_init_sessions we save pointers to the authenc keys
in a local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: picoxcell - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:22 +0000 (12:42 +0200)]
crypto: picoxcell - don't leak pointers to authenc keys

In spacc_aead_setkey we save pointers to the authenc keys in a
local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ixp4xx - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:21 +0000 (12:42 +0200)]
crypto: ixp4xx - don't leak pointers to authenc keys

In ixp4xx's aead_setkey we save pointers to the authenc keys in a
local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:20 +0000 (12:42 +0200)]
crypto: chelsio - don't leak pointers to authenc keys

In chcr_authenc_setkey and chcr_aead_digest_null_setkey we save
pointers to the authenc keys in local variables of type
struct crypto_authenc_keys and we don't zeroize them after use.
Fix this and don't leak pointers to the authenc keys.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: caam/qi - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:19 +0000 (12:42 +0200)]
crypto: caam/qi - don't leak pointers to authenc keys

In caam/qi's aead_setkey we save pointers to the authenc keys in
a local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: caam - don't leak pointers to authenc keys
Tudor-Dan Ambarus [Fri, 23 Mar 2018 10:42:18 +0000 (12:42 +0200)]
crypto: caam - don't leak pointers to authenc keys

In caam's aead_setkey we save pointers to the authenc keys in a
local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: lrw - Free rctx->ext with kzfree
Herbert Xu [Fri, 23 Mar 2018 00:14:44 +0000 (08:14 +0800)]
crypto: lrw - Free rctx->ext with kzfree

The buffer rctx->ext contains potentially sensitive data and should
be freed with kzfree.
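
Illustratively (not the literal patch), the change boils down to:

  kzfree(rctx->ext);    /* instead of kfree(), so the data is cleared first */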

Cc: <stable@vger.kernel.org>
Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - fix IPsec cipher in length
LEROY Christophe [Thu, 22 Mar 2018 09:57:01 +0000 (10:57 +0100)]
crypto: talitos - fix IPsec cipher in length

For SEC 2.x+, cipher in length must contain only the ciphertext length.
In case of using hardware ICV checking, the ICV length is provided via
the "extent" field of the descriptor pointer.

Cc: <stable@vger.kernel.org> # 4.8+
Fixes: 549bd8bc5987 ("crypto: talitos - Implement AEAD for SEC1 using HMAC_SNOOP_NO_AFEU")
Reported-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Tested-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array()
Andy Shevchenko [Wed, 21 Mar 2018 17:01:40 +0000 (19:01 +0200)]
crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array()

Deduplicate le32_to_cpu_array() and cpu_to_le32_array() by moving them
to the generic header.

No functional change implied.
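
The shared helpers end up with roughly this shape in the generic byteorder
header (a sketch of the moved code):

  static inline void le32_to_cpu_array(u32 *buf, unsigned int words)
  {
          while (words--) {
                  __le32_to_cpus(buf);
                  buf++;
          }
  }

  static inline void cpu_to_le32_array(u32 *buf, unsigned int words)
  {
          while (words--) {
                  __cpu_to_le32s(buf);
                  buf++;
          }
  }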

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: doc - clarify hash callbacks state machine
Horia Geantă [Tue, 20 Mar 2018 07:56:12 +0000 (09:56 +0200)]
crypto: doc - clarify hash callbacks state machine

Add a note that it is perfectly legal to "abandon" a request object:
- call .init() and then (as many times) .update()
- _not_ call any of .final(), .finup() or .export() at any point in
  future
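
For instance, the following sequence is legal (illustrative snippet; request
setup and error handling omitted):

  crypto_ahash_init(req);
  crypto_ahash_update(req);
  crypto_ahash_update(req);
  /* no .final(), .finup() or .export() is ever required */
  ahash_request_free(req);    /* simply abandon the request */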

Link: https://lkml.kernel.org/r/20180222114741.GA27631@gondor.apana.org.au
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: api - Keep failed instances alive
Herbert Xu [Tue, 20 Mar 2018 07:52:45 +0000 (15:52 +0800)]
crypto: api - Keep failed instances alive

This patch reverts commit 9c521a200bc3 ("crypto: api - remove
instance when test failed") and fixes the underlying problem
in a different way.

To recap, prior to the reverted commit, an instance that fails
a self-test is kept around.  However, it would satisfy any new
lookups against its name and therefore the system may accumulate
an unbounded number of failed instances for the same algorithm
name.

The reverted commit fixed it by unregistering the instance.  However,
this still does not prevent the creation of the same failed instance
over and over again each time the name is looked up.

This patch fixes it by keeping the failed instance around, just as
we would if it were a normal algorithm.  However, the lookup code
has been updated so that we do not attempt to create another
instance as long as this failed one is still registered.  Of course,
you could still force a new creation by deleting the instance from
user-space.

A new error (ELIBBAD) has been commandeered for this purpose and
will be returned when all registered algorithms of a given name
have failed the self-test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: api - Make crypto_alg_lookup static
Herbert Xu [Tue, 20 Mar 2018 00:05:39 +0000 (08:05 +0800)]
crypto: api - Make crypto_alg_lookup static

The function crypto_alg_lookup is only used within the crypto API
and should not be exported to the modules.  This patch marks
it as a static function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: api - Remove unused crypto_type lookup function
Herbert Xu [Mon, 19 Mar 2018 23:41:00 +0000 (07:41 +0800)]
crypto: api - Remove unused crypto_type lookup function

The lookup function in crypto_type was only used for the implicit
IV generators which have been completely removed from the crypto
API.

This patch removes the lookup function as it is now useless.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - Remove declaration of static function from header
Harsh Jain [Mon, 19 Mar 2018 13:36:22 +0000 (19:06 +0530)]
crypto: chelsio - Remove declaration of static function from header

It fixes a compilation warning introduced by the commit below.

Fixes: 5110e65536f3 ("crypto: chelsio - Split Hash requests for...")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - hmac(sha224) support
Antoine Tenart [Mon, 19 Mar 2018 08:21:21 +0000 (09:21 +0100)]
crypto: inside-secure - hmac(sha224) support

This patch adds the hmac(sha224) support to the Inside Secure
cryptographic engine driver.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - hmac(sha256) support
Antoine Tenart [Mon, 19 Mar 2018 08:21:20 +0000 (09:21 +0100)]
crypto: inside-secure - hmac(sha256) support

This patch adds the hmac(sha256) support to the Inside Secure
cryptographic engine driver.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - the context ipad/opad should use the state sz
Antoine Tenart [Mon, 19 Mar 2018 08:21:19 +0000 (09:21 +0100)]
crypto: inside-secure - the context ipad/opad should use the state sz

This patch uses the state size of the algorithms instead of their
digest size to copy the ipad and opad in the context. This doesn't fix
anything as the state and digest sizes are the same for many algorithms,
and for all the hmac algorithms currently supported by this driver.
However, hmac(sha224) uses the sha224 hash function, which has a
different digest and state size. This commit prepares for the addition
of such algorithms.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - improve the skcipher token
Antoine Tenart [Mon, 19 Mar 2018 08:21:18 +0000 (09:21 +0100)]
crypto: inside-secure - improve the skcipher token

The token used for encryption and decryption of skcipher algorithms sets
its stat field to "last packet". As it's a cipher-only algorithm, there
is no hash operation and thus the "last hash" bit should be set to tell
the internal engine that no hash operation should be performed.

This does not fix a bug, but improves the token definition to follow
exactly what's advised by the datasheet.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - do not access buffers mapped to the device
Antoine Tenart [Mon, 19 Mar 2018 08:21:17 +0000 (09:21 +0100)]
crypto: inside-secure - do not access buffers mapped to the device

This patch updates the way the digest is copied from the state buffer
to the result buffer, so that the copy only happens after the state
buffer has been DMA unmapped, as otherwise the buffer would be owned by
the device.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - improve the send error path
Antoine Tenart [Mon, 19 Mar 2018 08:21:16 +0000 (09:21 +0100)]
crypto: inside-secure - improve the send error path

This patch improves the send error path as it wasn't handling all error
cases. A new label is added, and some of the goto are updated to point
to the right labels, so that the code is more robust to errors.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - fix a typo in a register name
Antoine Tenart [Mon, 19 Mar 2018 08:21:15 +0000 (09:21 +0100)]
crypto: inside-secure - fix a typo in a register name

This patch fixes a typo in the EIP197_HIA_xDR_WR_CTRL_BUG register name,
as it should be EIP197_HIA_xDR_WR_CTRL_BUF. This is a cosmetic only
change.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - fix typo s/allways/always/ in a define
Antoine Tenart [Mon, 19 Mar 2018 08:21:14 +0000 (09:21 +0100)]
crypto: inside-secure - fix typo s/allways/always/ in a define

Small cosmetic patch fixing one typo in the
EIP197_HIA_DSE_CFG_ALLWAYS_BUFFERABLE macro, it should be _ALWAYS_.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - move the digest to the request context
Antoine Tenart [Mon, 19 Mar 2018 08:21:13 +0000 (09:21 +0100)]
crypto: inside-secure - move the digest to the request context

This patch moves the digest information from the transformation
context to the request context. This fixes cases where HMAC init
functions were called and overrode the digest value for a short period
of time, as the HMAC init functions call the SHA init one, which resets
the value. This led to a small percentage of HMACs being incorrectly
computed under heavy load.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Suggested-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
[Ofer here did all the work, from seeing the issue to understanding the
root cause. I only made the patch.]
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: cavium - Replace mdelay with msleep in cpt_device_init
Jia-Ju Bai [Sun, 18 Mar 2018 14:50:38 +0000 (22:50 +0800)]
crypto: cavium - Replace mdelay with msleep in cpt_device_init

cpt_device_init() is never called in atomic context.

The call chain ending up at cpt_device_init() is:
[1] cpt_device_init() <- cpt_probe()
cpt_probe() is only set as ".probe" in pci_driver structure
"cpt_pci_driver".

Despite never getting called from atomic context, cpt_device_init() calls
mdelay(100), i.e. busy wait for 100ms.
That is not necessary and can be replaced with msleep to
avoid busy waiting.

This was found by a static analysis tool named DCNS that I wrote.

Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: doc - Document remaining members in struct crypto_alg
Gary R Hook [Wed, 14 Mar 2018 22:15:52 +0000 (17:15 -0500)]
crypto: doc - Document remaining members in struct crypto_alg

Add missing comments for union members ablkcipher, blkcipher,
cipher, and compress. This silences complaints when building
the htmldocs.

Fixes: 0d7f488f0305a (crypto: doc - cipher data structures)
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: bfin_crc - remove blackfin CRC driver
Arnd Bergmann [Wed, 14 Mar 2018 15:35:32 +0000 (16:35 +0100)]
crypto: bfin_crc - remove blackfin CRC driver

The blackfin architecture is getting removed, so this
driver won't be used any more.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm,arm64 - Fix random regeneration of S_shipped
Leonard Crestez [Tue, 13 Mar 2018 20:17:23 +0000 (22:17 +0200)]
crypto: arm,arm64 - Fix random regeneration of S_shipped

The decision to rebuild .S_shipped is made based on the relative
timestamps of .S_shipped and .pl files but git makes this essentially
random. This means that the perl script might run anyway (usually at
most once per checkout), defeating the whole purpose of _shipped.

Fix by skipping the rule unless explicit make variables are provided:
REGENERATE_ARM_CRYPTO or REGENERATE_ARM64_CRYPTO.

This can produce nasty occasional build failures downstream, for example
for toolchains with broken perl. The solution is minimally intrusive to
make it easier to push into stable.

Another report on a similar issue here: https://lkml.org/lkml/2018/3/8/1379

Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com>
Cc: <stable@vger.kernel.org>
Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hwrng: ks-sa - add hw_random driver
Vitaly Andrianov [Tue, 13 Mar 2018 17:33:31 +0000 (13:33 -0400)]
hwrng: ks-sa - add hw_random driver

Keystone Security Accelerator module has a hardware random generator
sub-module. This commit adds the driver for this sub-module.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
[t-kristo@ti.com: dropped one unnecessary dev_err message]
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
dt-bindings: rng: add bindings doc for Keystone SA HWRNG driver
Vitaly Andrianov [Tue, 13 Mar 2018 17:33:30 +0000 (13:33 -0400)]
dt-bindings: rng: add bindings doc for Keystone SA HWRNG driver

The Keystone SA module has a hardware random generator module.
This commit adds binding doc for the KS2 SA HWRNG driver.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - fix clock resource by adding a register clock
Gregory CLEMENT [Tue, 13 Mar 2018 16:48:42 +0000 (17:48 +0100)]
crypto: inside-secure - fix clock resource by adding a register clock

On Armada 7K/8K we need to explicitly enable the register clock. This
clock is optional because not all the SoCs using this IP need it but at
least for Armada 7K/8K it is actually mandatory.

The binding documentation is updated accordingly.

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - improve clock initialization
Gregory CLEMENT [Tue, 13 Mar 2018 16:48:41 +0000 (17:48 +0100)]
crypto: inside-secure - improve clock initialization

The clock is optional, but if it is present we should manage it. If
there is an error while trying to get it, we should exit and report the
error.

So instead of returning an error only in the -EPROBE_DEFER case, invert
the logic and ignore the clock only if it is not present (-ENOENT case).
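
A sketch of the resulting probe logic (names approximate, based on the common
devm clock pattern; not the literal patch):

  /* in the probe function */
  priv->clk = devm_clk_get(&pdev->dev, NULL);
  if (IS_ERR(priv->clk)) {
          /* a missing clock is fine, any other error is fatal */
          if (PTR_ERR(priv->clk) != -ENOENT)
                  return PTR_ERR(priv->clk);
          priv->clk = NULL;
  } else {
          ret = clk_prepare_enable(priv->clk);
          if (ret)
                  return ret;
  }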

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - fix clock management
Gregory CLEMENT [Tue, 13 Mar 2018 16:48:40 +0000 (17:48 +0100)]
crypto: inside-secure - fix clock management

In this driver the clock is obtained but never put when the driver is
removed or if there is an error in the probe.

Using the managed version of clk_get() lets the kernel take care of it.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
cc: stable@vger.kernel.org
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - fix missing unlock on error in safexcel_ahash_send_req()
weiyongjun (A) [Tue, 13 Mar 2018 14:54:03 +0000 (14:54 +0000)]
crypto: inside-secure - fix missing unlock on error in safexcel_ahash_send_req()

Add the missing unlock before return from function
safexcel_ahash_send_req() in the error handling case.

Fixes: cff9a17545a3 ("crypto: inside-secure - move cache result dma mapping to request")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - Delete an error message for a failed memory allocation in talitos_edesc_alloc()
Markus Elfring [Mon, 12 Mar 2018 13:18:23 +0000 (14:18 +0100)]
crypto: talitos - Delete an error message for a failed memory allocation in talitos_edesc_alloc()

Omit an extra message for a memory allocation failure in this function.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:54 +0000 (15:21 +0000)]
crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels

Tweak the SHA256 update routines to invoke the SHA256 block transform
block by block, to avoid excessive scheduling delays caused by the
NEON algorithm running with preemption disabled.

Also, remove a stale comment which no longer applies now that kernel
mode NEON is actually disallowed in some contexts.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:53 +0000 (15:21 +0000)]
crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path

CBC MAC is strictly sequential, and so the current AES code simply
processes the input one block at a time. However, we are about to add
yield support, which adds a bit of overhead, and which we prefer to
align with other modes in terms of granularity (i.e., it is better to
have all routines yield every 64 bytes and not have an exception for
CBC MAC which yields every 16 bytes)

So unroll the loop by 4. We still cannot perform the AES algorithm in
parallel, but we can at least merge the loads and stores.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:52 +0000 (15:21 +0000)]
crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path

CBC encryption is strictly sequential, and so the current AES code
simply processes the input one block at a time. However, we are
about to add yield support, which adds a bit of overhead, and which
we prefer to align with other modes in terms of granularity (i.e.,
it is better to have all routines yield every 64 bytes and not have
an exception for CBC encrypt which yields every 16 bytes)

So unroll the loop by 4. We still cannot perform the AES algorithm in
parallel, but we can at least merge the loads and stores.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-blk - remove configurable interleave
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:51 +0000 (15:21 +0000)]
crypto: arm64/aes-blk - remove configurable interleave

The AES block mode implementation using Crypto Extensions or plain NEON
was written before real hardware existed, and so its interleave factor
was made build time configurable (as well as an option to instantiate
all interleaved sequences inline rather than as subroutines)

We ended up using INTERLEAVE=4 with inlining disabled for both flavors
of the core AES routines, so let's stick with that, and remove the option
to configure this at build time. This makes the code easier to modify,
which is nice now that we're adding yield support.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/chacha20 - move kernel mode neon en/disable into loop
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:50 +0000 (15:21 +0000)]
crypto: arm64/chacha20 - move kernel mode neon en/disable into loop

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that was introduced at the time keeps the NEON
enabled throughout the execution of the crypto API methods, which may
include calls back into the crypto API that could result in memory
allocation or other actions that we should avoid when running with
preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action
is only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)
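
Schematically, the begin/end calls move from around the whole operation to
around each processed chunk (a sketch with a made-up helper name;
partial-block details omitted):

  err = skcipher_walk_virt(&walk, req, false);

  while (walk.nbytes > 0) {
          /* keep preemption disabled only while the NEON code runs */
          kernel_neon_begin();
          neon_crypt_chunk(walk.dst.virt.addr, walk.src.virt.addr,
                           walk.nbytes, ctx);    /* hypothetical helper */
          kernel_neon_end();
          err = skcipher_walk_done(&walk, 0);
  }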

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:49 +0000 (15:21 +0000)]
crypto: arm64/aes-bs - move kernel mode neon en/disable into loop

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that was introduced at the time keeps the NEON
enabled throughout the execution of the crypto API methods, which may
include calls back into the crypto API that could result in memory
allocation or other actions that we should avoid when running with
preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action
is only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-blk - move kernel mode neon en/disable into loop
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:48 +0000 (15:21 +0000)]
crypto: arm64/aes-blk - move kernel mode neon en/disable into loop

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that was introduced at the time keeps the NEON
enabled throughout the execution of the crypto API methods, which may
include calls back into the crypto API that could result in memory
allocation or other actions that we should avoid when running with
preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action
is only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)

Note that this requires some reshuffling of the registers in the asm
code, because the XTS routines can no longer rely on the registers to
retain their contents between invocations.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:47 +0000 (15:21 +0000)]
crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that was introduced at the time keeps the NEON
enabled throughout the execution of the crypto API methods, which may
include calls back into the crypto API that could result in memory
allocation or other actions that we should avoid when running with
preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action
is only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: testmgr - add a new test case for CRC-T10DIF
Ard Biesheuvel [Sat, 10 Mar 2018 15:21:46 +0000 (15:21 +0000)]
crypto: testmgr - add a new test case for CRC-T10DIF

In order to be able to test yield support under preempt, add a test
vector for CRC-T10DIF that is long enough to take multiple iterations
(and thus possible preemption between them) of the primary loop of the
accelerated x86 and arm64 implementations.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ecc - Remove stack VLA usage
Kees Cook [Thu, 8 Mar 2018 21:57:02 +0000 (13:57 -0800)]
crypto: ecc - Remove stack VLA usage

On the quest to remove all VLAs from the kernel[1], this switches to
a pair of kmalloc regions instead of using the stack. This also moves
the get_random_bytes() after all allocations (and drops the needless
"nbytes" variable).

[1] https://lkml.org/lkml/2018/3/7/621

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - Validate buffer lengths for copy operations
Gary R Hook [Wed, 7 Mar 2018 17:31:14 +0000 (11:31 -0600)]
crypto: ccp - Validate buffer lengths for copy operations

The CCP driver copies data between scatter/gather lists and DMA buffers.
The length of the requested copy operation must be checked against
the available destination buffer length.

Reported-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hash - Prevent use of req->result in ahash update
Kamil Konieczny [Wed, 7 Mar 2018 10:49:33 +0000 (11:49 +0100)]
crypto: hash - Prevent use of req->result in ahash update

Prevent improper use of the req->result field in ahash update, init, export and
import functions in driver code. A driver should use the ahash request context
if it needs to save internal state.

Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: virtio - remove dependency on CRYPTO_AUTHENC
Peter Wu [Tue, 6 Mar 2018 23:53:15 +0000 (00:53 +0100)]
crypto: virtio - remove dependency on CRYPTO_AUTHENC

virtio_crypto does not use function crypto_authenc_extractkeys, remove
this unnecessary dependency. Compiles fine and passes cryptodev-linux
cipher and speed tests from https://wiki.qemu.org/Features/VirtioCrypto

Fixes: dbaf0624ffa5 ("crypto: add virtio-crypto driver")
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: testmgr - introduce SM4 tests
Gilad Ben-Yossef [Tue, 6 Mar 2018 09:44:43 +0000 (09:44 +0000)]
crypto: testmgr - introduce SM4 tests

Add testmgr tests for the newly introduced SM4 ECB symmetric cipher.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: sm4 - introduce SM4 symmetric cipher algorithm
Gilad Ben-Yossef [Tue, 6 Mar 2018 09:44:42 +0000 (09:44 +0000)]
crypto: sm4 - introduce SM4 symmetric cipher algorithm

Introduce the SM4 cipher algorithms (OSCCA GB/T 32907-2016).

SM4 (GBT.32907-2016) is a cryptographic standard issued by the
Organization of State Commercial Administration of China (OSCCA)
as an authorized cryptographic algorithm for use within China.

SMS4 was originally created for use in protecting wireless
networks, and is mandated in the Chinese National Standard for
Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure)
(GB.15629.11-2003).

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - Split Hash requests for large scatter gather list
Harsh Jain [Tue, 6 Mar 2018 05:07:52 +0000 (10:37 +0530)]
crypto: chelsio - Split Hash requests for large scatter gather list

Send multiple WRs to the hardware when the number of entries received in
the scatter list cannot be sent in a single request.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - Fix iv passed in fallback path for rfc3686
Harsh Jain [Tue, 6 Mar 2018 05:07:51 +0000 (10:37 +0530)]
crypto: chelsio - Fix iv passed in fallback path for rfc3686

We use ctr(aes) as a fallback for rfc3686(ctr) requests. Send the updated IV to the fallback path.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - Update IV before sending request to HW
Harsh Jain [Tue, 6 Mar 2018 05:07:50 +0000 (10:37 +0530)]
crypto: chelsio - Update IV before sending request to HW

CBC decryption requires the last block as IV. In case the src/dst buffers
are the same, the last block will be replaced by plain text. This patch
copies the last block before sending the request to HW.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - Fix src buffer dma length
Harsh Jain [Tue, 6 Mar 2018 05:07:49 +0000 (10:37 +0530)]
crypto: chelsio - Fix src buffer dma length

The ulptx header cannot have a length > 64k. Adjust the length accordingly.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - Use kernel round function to align lengths
Harsh Jain [Tue, 6 Mar 2018 05:07:48 +0000 (10:37 +0530)]
crypto: chelsio - Use kernel round function to align lengths

Replace DIV_ROUND_UP with roundup or rounddown.
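
For example (illustrative only):

  aligned = roundup(len, 16);      /* instead of DIV_ROUND_UP(len, 16) * 16 */
  floor   = rounddown(len, 16);    /* instead of (len / 16) * 16 */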

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hwrng: mxc-rnga - add driver support on boards with device tree
Vladimir Zapolskiy [Mon, 5 Mar 2018 22:21:00 +0000 (00:21 +0200)]
hwrng: mxc-rnga - add driver support on boards with device tree

The driver works well on i.MX31 powered boards with the device description
taken from the board device tree; the only change to add to the driver is
the missing OF device id. The affected list of included headers and the
indentation in the platform driver struct are beautified a little.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Reviewed-by: Kim Phillips <kim.phillips@arm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
dt-bindings: rng: Document Freescale i.MX21 and i.MX31 RNGA compatibles
Vladimir Zapolskiy [Mon, 5 Mar 2018 22:20:59 +0000 (00:20 +0200)]
dt-bindings: rng: Document Freescale i.MX21 and i.MX31 RNGA compatibles

Freescale i.MX21 and i.MX31 SoCs contain a Random Number Generator
Accelerator module (RNGA), which is replaced by RNGB and RNGC modules
on later i.MX SoC series. The change adds a new compatible property
to describe the controller.

Since all versions of Freescale RNG modules are legacy, apparently
the documentation file has no more potential for further extensions;
nevertheless, generalize it by removing explicit RNGC specifics.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/speck - add NEON-accelerated implementation of Speck-XTS
Eric Biggers [Mon, 5 Mar 2018 19:17:07 +0000 (11:17 -0800)]
crypto: arm64/speck - add NEON-accelerated implementation of Speck-XTS

Add a NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
for ARM64.  This is ported from the 32-bit version.  It may be useful on
devices with 64-bit ARM CPUs that don't have the Cryptography
Extensions, so cannot do AES efficiently -- e.g. the Cortex-A53
processor on the Raspberry Pi 3.

It generally works the same way as the 32-bit version, but there are
some slight differences due to the different instructions, registers,
and syntax available in ARM64 vs. in ARM32.  For example, in the 64-bit
version there are enough registers to hold the XTS tweaks for each
128-byte chunk, so they don't need to be saved on the stack.

Benchmarks on a Raspberry Pi 3 running a 64-bit kernel:

   Algorithm                              Encryption     Decryption
   ---------                              ----------     ----------
   Speck64/128-XTS (NEON)                 92.2 MB/s      92.2 MB/s
   Speck128/256-XTS (NEON)                75.0 MB/s      75.0 MB/s
   Speck128/256-XTS (generic)             47.4 MB/s      35.6 MB/s
   AES-128-XTS (NEON bit-sliced)          33.4 MB/s      29.6 MB/s
   AES-256-XTS (NEON bit-sliced)          24.6 MB/s      21.7 MB/s

The code performs well on higher-end ARM64 processors as well, though
such processors tend to have the Crypto Extensions which make AES
preferred.  For example, here are the same benchmarks run on a HiKey960
(with CPU affinity set for the A73 cores), with the Crypto Extensions
implementation of AES-256-XTS added:

   Algorithm                              Encryption     Decryption
   ---------                              -----------    -----------
   AES-256-XTS (Crypto Extensions)        1273.3 MB/s    1274.7 MB/s
   Speck64/128-XTS (NEON)                  359.8 MB/s     348.0 MB/s
   Speck128/256-XTS (NEON)                 292.5 MB/s     286.1 MB/s
   Speck128/256-XTS (generic)              186.3 MB/s     181.8 MB/s
   AES-128-XTS (NEON bit-sliced)           142.0 MB/s     124.3 MB/s
   AES-256-XTS (NEON bit-sliced)           104.7 MB/s      91.1 MB/s

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - Use memdup_user() rather than duplicating its implementation
Markus Elfring [Mon, 5 Mar 2018 12:50:13 +0000 (13:50 +0100)]
crypto: ccp - Use memdup_user() rather than duplicating its implementation

Reuse existing functionality from memdup_user() instead of keeping
duplicate source code.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - Fill the result buffer only on digest, finup, and final ops
Gary R Hook [Wed, 7 Mar 2018 17:37:42 +0000 (11:37 -0600)]
crypto: ccp - Fill the result buffer only on digest, finup, and final ops

Any change to the result buffer should only happen on final, finup
and digest operations. Changes to the buffer for update, import, export,
etc, are not allowed.

Fixes: 66d7b9f6175e ("crypto: testmgr - test misuse of result in ahash")
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/des3_ede - des3_ede_skciphers[] can be static
Wu Fengguang [Fri, 2 Mar 2018 20:29:46 +0000 (04:29 +0800)]
crypto: x86/des3_ede - des3_ede_skciphers[] can be static

Fixes: 09c0f03bf8ce ("crypto: x86/des3_ede - convert to skcipher interface")
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ecdh - fix to allow multi segment scatterlists
James Bottomley [Thu, 1 Mar 2018 22:37:42 +0000 (14:37 -0800)]
crypto: ecdh - fix to allow multi segment scatterlists

Apparently the ecdh use case was in bluetooth which always has single
element scatterlists, so the ecdh module was hard coded to expect
them.  Now that we're using this in TPM, we need multi-element
scatterlists, so remove this limitation.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: cfb - add support for Cipher FeedBack mode
James Bottomley [Thu, 1 Mar 2018 22:36:17 +0000 (14:36 -0800)]
crypto: cfb - add support for Cipher FeedBack mode

TPM security routines require encryption and decryption with AES in
CFB mode, so add it to the Linux Crypto schemes.  CFB is basically a
one time pad where the pad is generated initially from the encrypted
IV and then subsequently from the encrypted previous block of
ciphertext.  The pad is XOR'd into the plain text to get the final
ciphertext.

https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CFB
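
In pseudo-C, full-block CFB encryption looks like this (a sketch, not the
kernel implementation; encrypt_block() and the variables are placeholders):

  /* C[0] = P[0] ^ E_K(IV),  C[i] = P[i] ^ E_K(C[i-1]) */
  memcpy(prev, iv, bs);
  for (i = 0; i < nblocks; i++) {
          encrypt_block(key, prev, pad);              /* pad = E_K(prev) */
          for (j = 0; j < bs; j++)
                  dst[i * bs + j] = src[i * bs + j] ^ pad[j];
          memcpy(prev, &dst[i * bs], bs);             /* feed ciphertext back */
  }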

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: s5p-sss - Constify pointed data (arguments and local variables)
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:13 +0000 (21:50 +0100)]
crypto: s5p-sss - Constify pointed data (arguments and local variables)

Improve the code (safety and readability) by indicating that data passed
through pointer is not modified.  This adds const keyword in many places,
most notably:
 - the driver data (pointer to struct samsung_aes_variant),
 - scatterlist addresses written as value to device registers,
 - key and IV arrays.

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: s5p-sss - Remove useless check for non-null request
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:12 +0000 (21:50 +0100)]
crypto: s5p-sss - Remove useless check for non-null request

The ahash_request 'req' argument passed by the caller
s5p_hash_handle_queue() cannot be NULL here because it is obtained from
a non-NULL pointer via container_of().

This fixes smatch warning:
    drivers/crypto/s5p-sss.c:1213 s5p_hash_prepare_request() warn: variable dereferenced before check 'req' (see line 1208)

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-sham - Fix misleading indentation
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:11 +0000 (21:50 +0100)]
crypto: omap-sham - Fix misleading indentation

Commit 8043bb1ae03c ("crypto: omap-sham - convert driver logic to use
sgs for data xmit") removed the if() clause, leaving the statement as is.
The intention in that case was to always finish the request, so the goto
instruction seems sensible.

Remove the indentation to fix Smatch warning:
    drivers/crypto/omap-sham.c:1761 omap_sham_done_task() warn: inconsistent indenting

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-sham - Remove useless check for non-null request
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:10 +0000 (21:50 +0100)]
crypto: omap-sham - Remove useless check for non-null request

The ahash_request 'req' argument passed by the caller
omap_sham_handle_queue() cannot be NULL here because it is obtained from
a non-NULL pointer via container_of().

This fixes smatch warning:
    drivers/crypto/omap-sham.c:812 omap_sham_prepare_request() warn: variable dereferenced before check 'req' (see line 805)

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: chelsio - no csum offload for ipsec path
Atul Gupta [Wed, 28 Feb 2018 17:48:08 +0000 (23:18 +0530)]
crypto: chelsio - no csum offload for ipsec path

The Inline IPSec driver does not offload csum.

Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hwrng: omap - Fix clock resource by adding a register clock
Gregory CLEMENT [Wed, 28 Feb 2018 14:27:23 +0000 (15:27 +0100)]
hwrng: omap - Fix clock resource by adding a register clock

On Armada 7K/8K we need to explicitly enable the register clock. This
clock is optional because not all the SoCs using this IP need it but at
least for Armada 7K/8K it is actually mandatory.

The binding documentation is updated accordingly.

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hwrng: omap - Remove useless test before clk_disable_unprepare
Gregory CLEMENT [Wed, 28 Feb 2018 14:27:22 +0000 (15:27 +0100)]
hwrng: omap - Remove useless test before clk_disable_unprepare

clk_disable_unprepare() already checks that the clock pointer is valid.
No need to test it before calling it.

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-aes - make queue length configurable
Tero Kristo [Tue, 27 Feb 2018 13:30:39 +0000 (15:30 +0200)]
crypto: omap-aes - make queue length configurable

Crypto driver queue size can now be configured from userspace. This
allows optimizing the queue usage based on use case. Default queue
size is still 10 entries.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-aes - make fallback size configurable
Tero Kristo [Tue, 27 Feb 2018 13:30:38 +0000 (15:30 +0200)]
crypto: omap-aes - make fallback size configurable

Crypto driver fallback size can now be configured from userspace. This
allows optimizing the DMA usage based on use case. Default fallback
size of 200 is still used.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-sham - make queue length configurable
Tero Kristo [Tue, 27 Feb 2018 13:30:37 +0000 (15:30 +0200)]
crypto: omap-sham - make queue length configurable

Crypto driver queue size can now be configured from userspace. This
allows optimizing the queue usage based on use case. Default queue
size is still 10 entries.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-sham - make fallback size configurable
Tero Kristo [Tue, 27 Feb 2018 13:30:36 +0000 (15:30 +0200)]
crypto: omap-sham - make fallback size configurable

Crypto driver fallback size can now be configured from userspace. This
allows optimizing the DMA usage based on use case. Default fallback
size of 256 is still used.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-crypto - Verify page zone scatterlists before starting DMA
Tero Kristo [Tue, 27 Feb 2018 13:30:35 +0000 (15:30 +0200)]
crypto: omap-crypto - Verify page zone scatterlists before starting DMA

Certain platforms like DRA7xx with more than 2GB of memory and LPAE
enabled have a constraint that DMA can only be done within the initial
2GB, which is marked as ZONE_DMA. But openssl, when used with cryptodev,
does not make sure that the input buffer is DMA capable. So add a check
to verify that the input buffer is capable of DMA.
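
A rough sketch of the kind of check involved (hypothetical helper; the driver
may react by copying the data into a DMA-capable buffer rather than failing):

  static bool sg_is_dma_capable(struct scatterlist *sg, int total)
  {
          while (sg && total > 0) {
                  if (page_zonenum(sg_page(sg)) != ZONE_DMA)
                          return false;               /* above the DMA zone */
                  total -= sg->length;
                  sg = sg_next(sg);
          }
          return true;
  }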

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: omap-sham - Verify page zone of scatterlists before starting DMA
Tero Kristo [Tue, 27 Feb 2018 13:30:34 +0000 (15:30 +0200)]
crypto: omap-sham - Verify page zone of scatterlists before starting DMA

Certain platforms like DRA7xx with more than 2GB of memory and LPAE
enabled have a constraint that DMA can only be done within the initial
2GB, which is marked as ZONE_DMA. But openssl, when used with cryptodev,
does not make sure that the input buffer is DMA capable. So add a check
to verify that the input buffer is capable of DMA.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Reported-by: Aparna Balasubramanian <aparnab@ti.com>
Reviewed-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - do not perform unnecessary dma synchronisation
LEROY Christophe [Mon, 26 Feb 2018 16:40:06 +0000 (17:40 +0100)]
crypto: talitos - do not perform unnecessary dma synchronisation

req_ctx->hw_context is mainly used only by the HW, so there is no need
to sync the HW and the CPU each time hw_context is DMA mapped.
This patch modifies the DMA mapping in order to limit synchronisation
to necessary situations.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - don't persistently map req_ctx->hw_context and req_ctx->buf
LEROY Christophe [Mon, 26 Feb 2018 16:40:04 +0000 (17:40 +0100)]
crypto: talitos - don't persistently map req_ctx->hw_context and req_ctx->buf

Commit 49f9783b0cea ("crypto: talitos - do hw_context DMA mapping
outside the requests") introduced a persistent dma mapping of
req_ctx->hw_context
Commit 37b5e8897eb5 ("crypto: talitos - chain in buffered data for ahash
on SEC1") introduced a persistent dma mapping of req_ctx->buf

As there is no destructor for req_ctx (the request context), the
associated dma handlers were set in ctx (the tfm context). This is
wrong as several hash operations can run with the same ctx.

This patch removes this persistent mapping.

Reported-by: Horia Geanta <horia.geanta@nxp.com>
Cc: <stable@vger.kernel.org>
Fixes: 49f9783b0cea ("crypto: talitos - do hw_context DMA mapping outside the requests")
Fixes: 37b5e8897eb5 ("crypto: talitos - chain in buffered data for ahash on SEC1")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Tested-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hwrng: cavium - make two functions static
Colin Ian King [Mon, 26 Feb 2018 14:51:19 +0000 (14:51 +0000)]
hwrng: cavium - make two functions static

Functions cavium_rng_remove and cavium_rng_remove_vf are local to the
source and do not need to be in global scope, so make them static.

Cleans up sparse warnings:
drivers/char/hw_random/cavium-rng-vf.c:80:7: warning: symbol
'cavium_rng_remove_vf' was not declared. Should it be static?
drivers/char/hw_random/cavium-rng.c:65:7: warning: symbol
'cavium_rng_remove' was not declared. Should it be static?

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - wait for the request to complete if in the backlog
Antoine Tenart [Mon, 26 Feb 2018 13:45:12 +0000 (14:45 +0100)]
crypto: inside-secure - wait for the request to complete if in the backlog

This patch updates the safexcel_hmac_init_pad() function to also wait
for completion when the digest return code is -EBUSY, as it would mean
the request is in the backlog to be processed later.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Suggested-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - move cache result dma mapping to request
Antoine Tenart [Mon, 26 Feb 2018 13:45:11 +0000 (14:45 +0100)]
crypto: inside-secure - move cache result dma mapping to request

In heavy traffic the DMA mapping is overwritten by multiple requests as
the DMA address is stored in a global context. This patch moves this
information to the per-hash request context so that it can't be
overwritten.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: inside-secure - move hash result dma mapping to request
Ofer Heifetz [Mon, 26 Feb 2018 13:45:10 +0000 (14:45 +0100)]
crypto: inside-secure - move hash result dma mapping to request

In heavy traffic the DMA mapping is overwritten by multiple requests as
the DMA address is stored in a global context. This patch moves this
information to the per-hash request context so that it can't be
overwritten.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
[Antoine: rebased the patch, small fixes, commit message.]
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
include: psp-sev: Capitalize invalid length enum
Brijesh Singh [Thu, 15 Feb 2018 19:34:45 +0000 (13:34 -0600)]
include: psp-sev: Capitalize invalid length enum

Commit 1d57b17c60ff ("crypto: ccp: Define SEV userspace ioctl and command
id") added the invalid length enum but we missed capitalizing it.

Fixes: 1d57b17c60ff (crypto: ccp: Define SEV userspace ioctl ...)
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
CC: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - Fix sparse, use plain integer as NULL pointer
Brijesh Singh [Thu, 15 Feb 2018 19:34:44 +0000 (13:34 -0600)]
crypto: ccp - Fix sparse, use plain integer as NULL pointer

Fix sparse warning: Using plain integer as NULL pointer. Replaces
assignment of 0 to pointer with NULL assignment.

Fixes: 200664d5237f (Add Secure Encrypted Virtualization ...)
Cc: Borislav Petkov <bp@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - return an actual key size from RSA max_size callback
Maciej S. Szmigiero [Sat, 24 Feb 2018 16:03:21 +0000 (17:03 +0100)]
crypto: ccp - return an actual key size from RSA max_size callback

rsa-pkcs1pad uses a value returned from a RSA implementation max_size
callback as a size of an input buffer passed to the RSA implementation for
encrypt and sign operations.

The CCP RSA implementation uses a hardware input buffer whose size depends only
on the current RSA key length, so it should return this key length in
the max_size callback, too.
This also matches what the kernel software RSA implementation does.

Previously, the value returned from this callback was always the maximum
RSA key size the CCP hardware supports.
This resulted in this huge buffer being passed by rsa-pkcs1pad to CCP even
for smaller key sizes and then in a buffer overflow when ccp_run_rsa_cmd()
tried to copy this large input buffer into a RSA key length-sized hardware
input buffer.
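
Schematically, the callback now reports the size of the currently programmed
key instead of the hardware maximum (field names are illustrative):

  static unsigned int ccp_rsa_max_size(struct crypto_akcipher *tfm)
  {
          struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);

          /* rsa-pkcs1pad sizes its input buffers from this value */
          return ctx->u.rsa.key_len;                  /* illustrative field */
  }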

Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Fixes: ceeec0afd684 ("crypto: ccp - Add support for RSA on the CCP")
Cc: stable@vger.kernel.org
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - don't disable interrupts while setting up debugfs
Sebastian Andrzej Siewior [Fri, 23 Feb 2018 22:33:07 +0000 (23:33 +0100)]
crypto: ccp - don't disable interrupts while setting up debugfs

I don't see why we need to take a single write lock and disable interrupts
while setting up debugfs. This is what happens when we try anyway:

|ccp 0000:03:00.2: enabling device (0000 -> 0002)
|BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:69
|in_atomic(): 1, irqs_disabled(): 1, pid: 3, name: kworker/0:0
|irq event stamp: 17150
|hardirqs last  enabled at (17149): [<0000000097a18c49>] restore_regs_and_return_to_kernel+0x0/0x23
|hardirqs last disabled at (17150): [<000000000773b3a9>] _raw_write_lock_irqsave+0x1b/0x50
|softirqs last  enabled at (17148): [<0000000064d56155>] __do_softirq+0x3b8/0x4c1
|softirqs last disabled at (17125): [<0000000092633c18>] irq_exit+0xb1/0xc0
|CPU: 0 PID: 3 Comm: kworker/0:0 Not tainted 4.16.0-rc2+ #30
|Workqueue: events work_for_cpu_fn
|Call Trace:
| dump_stack+0x7d/0xb6
| ___might_sleep+0x1eb/0x250
| down_write+0x17/0x60
| start_creating+0x4c/0xe0
| debugfs_create_dir+0x9/0x100
| ccp5_debugfs_setup+0x191/0x1b0
| ccp5_init+0x8a7/0x8c0
| ccp_dev_init+0xb8/0xe0
| sp_init+0x6c/0x90
| sp_pci_probe+0x26e/0x590
| local_pci_probe+0x3f/0x90
| work_for_cpu_fn+0x11/0x20
| process_one_work+0x1ff/0x650
| worker_thread+0x1d4/0x3a0
| kthread+0xfe/0x130
| ret_from_fork+0x27/0x50

If any locking is required, a simple mutex will do it.
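
A rough sketch of that direction (lock, root dentry and function names are
illustrative, not the driver's exact ones): debugfs_create_dir() can sleep,
so serialize the directory creation with a mutex instead of an IRQ-disabling
write lock.

  #include <linux/debugfs.h>
  #include <linux/mutex.h>

  static DEFINE_MUTEX(ccp_debugfs_lock_sketch);
  static struct dentry *ccp_debugfs_root_sketch;

  static void ccp_debugfs_setup_sketch(const char *devname)
  {
          mutex_lock(&ccp_debugfs_lock_sketch);
          if (!ccp_debugfs_root_sketch)
                  ccp_debugfs_root_sketch = debugfs_create_dir("ccp", NULL);
          debugfs_create_dir(devname, ccp_debugfs_root_sketch);
          mutex_unlock(&ccp_debugfs_lock_sketch);
  }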

Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: atmel-aes - fix the keys zeroing on errors
Antoine Tenart [Fri, 23 Feb 2018 09:01:40 +0000 (10:01 +0100)]
crypto: atmel-aes - fix the keys zeroing on errors

The Atmel AES driver uses memzero_explicit on the keys on error, but the
variable zeroed isn't the right one because of a typo. Fix this by using
the right variable.
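
The gist, reconstructed as a sketch rather than quoted from the diff: the
error path wiped the wrong object, leaving the parsed authenc keys in memory.

  struct crypto_authenc_keys keys;
  ...
  badkey:
  -	memzero_explicit(&key, sizeof(keys));	/* zeroes the wrong variable */
  +	memzero_explicit(&keys, sizeof(keys));	/* wipe the parsed keys */
  	return -EINVAL;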

Fixes: 89a82ef87e01 ("crypto: atmel-authenc - add support to authenc(hmac(shaX), Y(aes)) modes")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: caam - do not use mem and emi_slow clock for imx7x
Rui Miguel Silva [Thu, 22 Feb 2018 14:22:48 +0000 (14:22 +0000)]
crypto: caam - do not use mem and emi_slow clock for imx7x

i.MX7x only uses two clocks for the CAAM module, so make sure we do not try
to use the mem and the emi_slow clocks when running on the imx7d and imx7s
machine types.
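
A sketch of the kind of guard this implies (the compatible strings, helper
name and clock names are assumptions for illustration):

  #include <linux/clk.h>
  #include <linux/err.h>
  #include <linux/of.h>

  static bool caam_is_imx7_sketch(void)
  {
          return of_machine_is_compatible("fsl,imx7d") ||
                 of_machine_is_compatible("fsl,imx7s");
  }

  static int caam_get_optional_clks_sketch(struct device *dev)
  {
          struct clk *clk_mem, *clk_emi;

          if (caam_is_imx7_sketch())
                  return 0;       /* i.MX7 has neither mem nor emi_slow */

          clk_mem = devm_clk_get(dev, "mem");
          if (IS_ERR(clk_mem))
                  return PTR_ERR(clk_mem);

          clk_emi = devm_clk_get(dev, "emi_slow");
          return PTR_ERR_OR_ZERO(clk_emi);
  }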

Cc: "Horia Geantă" <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Lukas Auer <lukas.auer@aisec.fraunhofer.de>
Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: caam - Fix null dereference at error path
Rui Miguel Silva [Thu, 22 Feb 2018 14:22:47 +0000 (14:22 +0000)]
crypto: caam - Fix null dereference at error path

caam_remove already removes the debugfs entry, so we need to remove the one
immediately before calling caam_remove.

This fixes a NULL dereference in the error path when caam_probe fails.

Fixes: 67c2315def06 ("crypto: caam - add Queue Interface (QI) backend support")

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Cc: "Horia Geantă" <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Lukas Auer <lukas.auer@aisec.fraunhofer.de>
Cc: <stable@vger.kernel.org> # 4.12+
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: ccp - add check to get PSP master only when PSP is detected
Brijesh Singh [Wed, 21 Feb 2018 14:41:39 +0000 (08:41 -0600)]
crypto: ccp - add check to get PSP master only when PSP is detected

Paulian reported the below kernel crash on Ryzen 5 system:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000073
RIP: 0010:.LC0+0x41f/0xa00
RSP: 0018:ffffa9968003bdd0 EFLAGS: 00010002
RAX: ffffffffb113b130 RBX: 0000000000000000 RCX: 00000000000005a7
RDX: 00000000000000ff RSI: ffff8b46dee651a0 RDI: ffffffffb1bd617c
RBP: 0000000000000246 R08: 00000000000251a0 R09: 0000000000000000
R10: ffffd81f11a38200 R11: ffff8b52e8e0a161 R12: ffffffffb19db220
R13: 0000000000000007 R14: ffffffffb17e4888 R15: 5dccd7affc30a31e
FS:  0000000000000000(0000) GS:ffff8b46dee40000(0000) knlGS:0000000000000000
CR2: 0000000000000073 CR3: 000080128120a000 CR4: 00000000003406e0
Call Trace:
 ? sp_get_psp_master_device+0x56/0x80
 ? map_properties+0x540/0x540
 ? psp_pci_init+0x20/0xe0
 ? map_properties+0x540/0x540
 ? sp_mod_init+0x16/0x1a
 ? do_one_initcall+0x4b/0x190
 ? kernel_init_freeable+0x19b/0x23c
 ? rest_init+0xb0/0xb0
 ? kernel_init+0xa/0x100
 ? ret_from_fork+0x22/0x40

Ryzen does not support the PSP/SEV firmware, hence i->psp_data will be NULL
in all sp instances. In those cases, 'i' will point to the list head after
list_for_each_entry(), and dereferencing the head will cause a kernel crash.

Add a check to call the get-master-device callback only when PSP/SEV is
detected.
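
A sketch of the iterator pitfall and the guard (the structure and field names
are simplified assumptions): after list_for_each_entry() the cursor is not a
valid entry unless the loop matched something, so the master device is only
dereferenced when a PSP-capable device was actually found.

  #include <linux/list.h>

  struct sp_device_sketch {
          struct list_head entry;
          void *psp_data;                 /* NULL when there is no PSP/SEV */
  };

  static struct sp_device_sketch *find_psp_master_sketch(struct list_head *units)
  {
          struct sp_device_sketch *i, *master = NULL;

          list_for_each_entry(i, units, entry) {
                  if (i->psp_data) {      /* only consider PSP-capable devices */
                          master = i;
                          break;
                  }
          }
          return master;                  /* NULL when PSP was never detected */
  }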

Reported-by: Paulian Bogdan Marinca <paulian@marinca.net>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
CC: Gary R Hook <gary.hook@amd.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: ablk_helper - remove ablk_helper
Eric Biggers [Tue, 20 Feb 2018 07:48:28 +0000 (23:48 -0800)]
crypto: ablk_helper - remove ablk_helper

All users of ablk_helper have been converted over to crypto_simd, so
remove ablk_helper.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: x86/glue_helper - rename glue_skwalk_fpu_begin()
Eric Biggers [Tue, 20 Feb 2018 07:48:27 +0000 (23:48 -0800)]
crypto: x86/glue_helper - rename glue_skwalk_fpu_begin()

There are no users of the original glue_fpu_begin() anymore, so rename
glue_skwalk_fpu_begin() to glue_fpu_begin() so that it matches
glue_fpu_end() again.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: x86/glue_helper - remove blkcipher_walk functions
Eric Biggers [Tue, 20 Feb 2018 07:48:26 +0000 (23:48 -0800)]
crypto: x86/glue_helper - remove blkcipher_walk functions

Now that all glue_helper users have been switched from the blkcipher
interface over to the skcipher interface, remove the versions of the
glue_helper functions that handled the blkcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: lrw - remove lrw_crypt()
Eric Biggers [Tue, 20 Feb 2018 07:48:25 +0000 (23:48 -0800)]
crypto: lrw - remove lrw_crypt()

Now that all users of lrw_crypt() have been removed in favor of the LRW
template wrapping an ECB mode algorithm, remove lrw_crypt().  Also
remove crypto/lrw.h as that is no longer needed either; and fold
'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into
setkey(), and lrw_free_table() into exit_tfm().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: xts - remove xts_crypt()
Eric Biggers [Tue, 20 Feb 2018 07:48:24 +0000 (23:48 -0800)]
crypto: xts - remove xts_crypt()

Now that all users of xts_crypt() have been removed in favor of the XTS
template wrapping an ECB mode algorithm, remove xts_crypt().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: x86/camellia-aesni-avx, avx2 - convert to skcipher interface
Eric Biggers [Tue, 20 Feb 2018 07:48:23 +0000 (23:48 -0800)]
crypto: x86/camellia-aesni-avx, avx2 - convert to skcipher interface

Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from
the (deprecated) ablkcipher and blkcipher interfaces over to the
skcipher interface.  Note that this includes replacing the use of
ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: x86/camellia - convert to skcipher interface
Eric Biggers [Tue, 20 Feb 2018 07:48:22 +0000 (23:48 -0800)]
crypto: x86/camellia - convert to skcipher interface

Convert the x86 asm implementation of Camellia from the (deprecated)
blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: x86/camellia - remove XTS algorithm
Eric Biggers [Tue, 20 Feb 2018 07:48:21 +0000 (23:48 -0800)]
crypto: x86/camellia - remove XTS algorithm

The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().

Remove the xts-camellia-asm algorithm which did this.  Users who request
xts(camellia) and previously would have gotten xts-camellia-asm will now
get xts(ecb-camellia-asm) instead, which is just as fast.
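
For context, a minimal illustration (not taken from the patch) of why callers
are unaffected: the algorithm is still requested by the same name, and the
crypto API now resolves that name to the template-wrapped ECB implementation.

  #include <crypto/skcipher.h>

  static struct crypto_skcipher *get_xts_camellia_sketch(void)
  {
          /* Same name as before; now served by xts(ecb-camellia-asm). */
          return crypto_alloc_skcipher("xts(camellia)", 0, 0);
  }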

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: x86/camellia - remove LRW algorithm
Eric Biggers [Tue, 20 Feb 2018 07:48:20 +0000 (23:48 -0800)]
crypto: x86/camellia - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-camellia-asm algorithm which did this.  Users who request
lrw(camellia) and previously would have gotten lrw-camellia-asm will now
get lrw(ecb-camellia-asm) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>