The DMA subsystem forces private-to-shared page conversion in
force_dma_unencrypted(). Return false from force_dma_unencrypted() for
accepted TDISP devices.

Signed-off-by: Alexey Kardashevskiy
---
 arch/x86/mm/mem_encrypt.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 95bae74fdab2..8daa6482b080 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,10 +20,11 @@ bool force_dma_unencrypted(struct device *dev)
 {
 	/*
-	 * For SEV, all DMA must be to unencrypted addresses.
+	 * dma_direct_alloc() forces page state change if private memory is
+	 * allocated for DMA. Skip conversion if the TDISP device is accepted.
 	 */
 	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
-		return true;
+		return !device_cc_accepted(dev);
 
 	/*
 	 * For SME, all DMA must be to unencrypted addresses if the
-- 
2.52.0