From: Thierry Reding The Video Protection Region (VPR) found on NVIDIA Tegra chips is a region of memory that is protected from CPU accesses. It is used to decode and play back DRM protected content. It is a standard reserved memory region that can exist in two forms: a static VPR, where the base address and size are fixed (uses the "reg" property to describe the memory), and a resizable VPR, where only the size is known upfront and the OS can allocate it wherever it can be accommodated. Signed-off-by: Thierry Reding --- .../nvidia,tegra-video-protection-region.yaml | 55 +++++++++++++++++++ 1 file changed, 55 insertions(+) create mode 100644 Documentation/devicetree/bindings/reserved-memory/nvidia,tegra-video-protection-region.yaml diff --git a/Documentation/devicetree/bindings/reserved-memory/nvidia,tegra-video-protection-region.yaml b/Documentation/devicetree/bindings/reserved-memory/nvidia,tegra-video-protection-region.yaml new file mode 100644 index 000000000000..c13292a791bb --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/nvidia,tegra-video-protection-region.yaml @@ -0,0 +1,55 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/reserved-memory/nvidia,tegra-video-protection-region.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: NVIDIA Tegra Video Protection Region (VPR) + +maintainers: + - Thierry Reding + - Jon Hunter + +description: | + NVIDIA Tegra chips have long supported a mechanism to protect a single, + contiguous memory region from non-secure memory accesses. Typically this + region is used for decoding and playback of DRM protected content. Various + devices, such as the display controller and multimedia engines (video + decoder) can access this region in a secure way. Access from the CPU is + generally forbidden. + + Two variants exist for VPR: one is fixed in both the base address and size, + while the other is resizable. Fixed VPR can be described by just a "reg" + property specifying the base address and size, whereas the resizable VPR + is defined by a size/alignment pair of properties. For resizable VPR the + memory is reusable by the rest of the system when it's unused for VPR and + therefore the "reusable" property must be specified along with it. For a + fixed VPR, the memory is permanently protected, and therefore it's not + reusable and must also be marked as "no-map" to prevent any (including + speculative) accesses to it. + +allOf: + - $ref: reserved-memory.yaml + +properties: + compatible: + const: nvidia,tegra-video-protection-region + +dependencies: + size: [alignment, reusable] + alignment: [size, reusable] + reusable: [alignment, size] + + reg: [no-map] + no-map: [reg] + +unevaluatedProperties: false + +oneOf: + - required: + - compatible + - reg + + - required: + - compatible + - size -- 2.50.0 From: Thierry Reding Add the memory-region and memory-region-names properties to the bindings for the display controllers and the host1x engine found on various Tegra generations. These memory regions are used to access firmware-provided framebuffer memory as well as the video protection region.
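As a rough illustration of how a consumer node would use these properties (the node name, labels and the firmware framebuffer region below are placeholders; only the property shapes come from the schema changes in this patch):

    display@15200000 {
        /* ... */
        memory-region = <&fb0_reserved>, <&vpr>;
        memory-region-names = "framebuffer", "protected";
    };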
Signed-off-by: Thierry Reding --- .../bindings/display/tegra/nvidia,tegra186-dc.yaml | 10 ++++++++++ .../bindings/display/tegra/nvidia,tegra20-dc.yaml | 10 +++++++++- .../bindings/display/tegra/nvidia,tegra20-host1x.yaml | 7 +++++++ 3 files changed, 26 insertions(+), 1 deletion(-) diff --git a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra186-dc.yaml b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra186-dc.yaml index ce4589466a18..881bfbf4764d 100644 --- a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra186-dc.yaml +++ b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra186-dc.yaml @@ -57,6 +57,16 @@ properties: - const: dma-mem # read-0 - const: read-1 + memory-region: + minItems: 1 + maxItems: 2 + + memory-region-names: + items: + enum: [ framebuffer, protected ] + minItems: 1 + maxItems: 2 + nvidia,outputs: description: A list of phandles of outputs that this display controller can drive. diff --git a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-dc.yaml b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-dc.yaml index 69be95afd562..a012644eeb7d 100644 --- a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-dc.yaml +++ b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-dc.yaml @@ -65,7 +65,15 @@ properties: items: - description: phandle to the core power domain - memory-region: true + memory-region: + minItems: 1 + maxItems: 2 + + memory-region-names: + items: + enum: [ framebuffer, protected ] + minItems: 1 + maxItems: 2 nvidia,head: $ref: /schemas/types.yaml#/definitions/uint32 diff --git a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.yaml b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.yaml index 3563378a01af..f45be30835a8 100644 --- a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.yaml +++ b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.yaml @@ -96,6 +96,13 @@ properties: items: - description: phandle to the HEG or core power domain + memory-region: + maxItems: 1 + + memory-region-names: + items: + - const: protected + required: - compatible - interrupts -- 2.50.0 From: Thierry Reding There is no technical reason why there should be a limited number of CMA regions, so extract some code into helpers and use them to create extra functions (cma_create() and cma_free()) that allow creating and freeing, respectively, CMA regions dynamically at runtime. Note that these dynamically created CMA areas are treated specially and do not contribute to the number of total CMA pages so that this count still only applies to the fixed number of CMA areas.
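As a rough usage sketch (not part of this patch), a driver could wrap an already-reserved region in a dynamically created CMA area and tear it down again once it is no longer needed; the reserved_mem source and the "example" name are placeholders:

    #include <linux/cma.h>
    #include <linux/err.h>
    #include <linux/of_reserved_mem.h>

    static int example_setup(struct reserved_mem *rmem)
    {
            struct page *pages;
            struct cma *cma;

            /* wrap the reserved region in a dynamically created CMA area */
            cma = cma_create(rmem->base, rmem->size, 0, "example");
            if (IS_ERR(cma))
                    return PTR_ERR(cma);

            /* allocate 16 pages from the new area, no particular alignment */
            pages = cma_alloc(cma, 16, 0, false);
            if (!pages) {
                    cma_free(cma);
                    return -ENOMEM;
            }

            /* ... use the pages ... */

            cma_release(cma, pages, 16);
            cma_free(cma);
            return 0;
    }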
Signed-off-by: Thierry Reding --- include/linux/cma.h | 16 ++++++++ mm/cma.c | 89 ++++++++++++++++++++++++++++++++++----------- 2 files changed, 83 insertions(+), 22 deletions(-) diff --git a/include/linux/cma.h b/include/linux/cma.h index 62d9c1cf6326..f1e20642198a 100644 --- a/include/linux/cma.h +++ b/include/linux/cma.h @@ -61,6 +61,10 @@ extern void cma_reserve_pages_on_error(struct cma *cma); struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp); bool cma_free_folio(struct cma *cma, const struct folio *folio); bool cma_validate_zones(struct cma *cma); + +struct cma *cma_create(phys_addr_t base, phys_addr_t size, + unsigned int order_per_bit, const char *name); +void cma_free(struct cma *cma); #else static inline struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp) { @@ -71,10 +75,22 @@ static inline bool cma_free_folio(struct cma *cma, const struct folio *folio) { return false; } + static inline bool cma_validate_zones(struct cma *cma) { return false; } + +static inline struct cma *cma_create(phys_addr_t base, phys_addr_t size, + unsigned int order_per_bit, + const char *name) +{ + return NULL; +} + +static inline void cma_free(struct cma *cma) +{ +} #endif #endif diff --git a/mm/cma.c b/mm/cma.c index e56ec64d0567..8149227d319f 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -214,6 +214,18 @@ void __init cma_reserve_pages_on_error(struct cma *cma) set_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags); } +static void __init cma_init_area(struct cma *cma, const char *name, + phys_addr_t size, unsigned int order_per_bit) +{ + if (name) + snprintf(cma->name, CMA_MAX_NAME, "%s", name); + else + snprintf(cma->name, CMA_MAX_NAME, "cma%d\n", cma_area_count); + + cma->available_count = cma->count = size >> PAGE_SHIFT; + cma->order_per_bit = order_per_bit; +} + static int __init cma_new_area(const char *name, phys_addr_t size, unsigned int order_per_bit, struct cma **res_cma) @@ -232,13 +244,8 @@ static int __init cma_new_area(const char *name, phys_addr_t size, cma = &cma_areas[cma_area_count]; cma_area_count++; - if (name) - snprintf(cma->name, CMA_MAX_NAME, "%s", name); - else - snprintf(cma->name, CMA_MAX_NAME, "cma%d\n", cma_area_count); + cma_init_area(cma, name, size, order_per_bit); - cma->available_count = cma->count = size >> PAGE_SHIFT; - cma->order_per_bit = order_per_bit; *res_cma = cma; totalcma_pages += cma->count; @@ -251,6 +258,27 @@ static void __init cma_drop_area(struct cma *cma) cma_area_count--; } +static int __init cma_check_memory(phys_addr_t base, phys_addr_t size) +{ + if (!size || !memblock_is_region_reserved(base, size)) + return -EINVAL; + + /* + * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which + * needs pageblock_order to be initialized. Let's enforce it. + */ + if (!pageblock_order) { + pr_err("pageblock_order not yet initialized. Called during early boot?\n"); + return -EINVAL; + } + + /* ensure minimal alignment required by mm core */ + if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES)) + return -EINVAL; + + return 0; +} + /** * cma_init_reserved_mem() - create custom contiguous area from reserved memory * @base: Base address of the reserved area @@ -271,22 +299,9 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, struct cma *cma; int ret; - /* Sanity checks */ - if (!size || !memblock_is_region_reserved(base, size)) - return -EINVAL; - - /* - * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which - * needs pageblock_order to be initialized. Let's enforce it. 
- */ - if (!pageblock_order) { - pr_err("pageblock_order not yet initialized. Called during early boot?\n"); - return -EINVAL; - } - - /* ensure minimal alignment required by mm core */ - if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES)) - return -EINVAL; + ret = cma_check_memory(base, size); + if (ret < 0) + return ret; ret = cma_new_area(name, size, order_per_bit, &cma); if (ret != 0) @@ -1112,3 +1127,33 @@ void __init *cma_reserve_early(struct cma *cma, unsigned long size) return ret; } + +struct cma *__init cma_create(phys_addr_t base, phys_addr_t size, + unsigned int order_per_bit, const char *name) +{ + struct cma *cma; + int ret; + + ret = cma_check_memory(base, size); + if (ret < 0) + return ERR_PTR(ret); + + cma = kzalloc(sizeof(*cma), GFP_KERNEL); + if (!cma) + return ERR_PTR(-ENOMEM); + + cma_init_area(cma, name, size, order_per_bit); + cma->ranges[0].base_pfn = PFN_DOWN(base); + cma->ranges[0].early_pfn = PFN_DOWN(base); + cma->ranges[0].count = cma->count; + cma->nranges = 1; + + cma_activate_area(cma); + + return cma; +} + +void cma_free(struct cma *cma) +{ + kfree(cma); +} -- 2.50.0 From: Thierry Reding Add a callback to struct dma_heap_ops that heap providers can implement to show information about the state of the heap in debugfs. A top-level directory named "dma_heap" is created in debugfs and individual files will be named after the heaps. Signed-off-by: Thierry Reding --- drivers/dma-buf/dma-heap.c | 56 ++++++++++++++++++++++++++++++++++++++ include/linux/dma-heap.h | 2 ++ 2 files changed, 58 insertions(+) diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c index cdddf0e24dce..f062f88365a5 100644 --- a/drivers/dma-buf/dma-heap.c +++ b/drivers/dma-buf/dma-heap.c @@ -7,6 +7,7 @@ */ #include +#include #include #include #include @@ -217,6 +218,46 @@ const char *dma_heap_get_name(struct dma_heap *heap) } EXPORT_SYMBOL(dma_heap_get_name); +#ifdef CONFIG_DEBUG_FS +static int dma_heap_debug_show(struct seq_file *s, void *unused) +{ + struct dma_heap *heap = s->private; + int err = 0; + + if (heap->ops && heap->ops->show) + err = heap->ops->show(s, heap); + + return err; +} +DEFINE_SHOW_ATTRIBUTE(dma_heap_debug); + +static struct dentry *dma_heap_debugfs_dir; + +static void dma_heap_init_debugfs(void) +{ + struct dentry *dir; + + dir = debugfs_create_dir("dma_heap", NULL); + if (IS_ERR(dir)) + return; + + dma_heap_debugfs_dir = dir; +} + +static void dma_heap_exit_debugfs(void) +{ + debugfs_remove_recursive(dma_heap_debugfs_dir); +} +#else +static void dma_heap_init_debugfs(void) +{ +} + +static void dma_heap_exit_debugfs(void) +{ +} +#endif + /** * dma_heap_add - adds a heap to dmabuf heaps * @exp_info: information needed to register this heap @@ -291,6 +332,13 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) /* Add heap to the list */ list_add(&heap->list, &heap_list); + +#ifdef CONFIG_DEBUG_FS + if (heap->ops && heap->ops->show) + debugfs_create_file(heap->name, 0444, dma_heap_debugfs_dir, + heap, &dma_heap_debug_fops); +#endif + mutex_unlock(&heap_list_lock); return heap; @@ -327,6 +375,14 @@ static int dma_heap_init(void) } dma_heap_class->devnode = dma_heap_devnode; + dma_heap_init_debugfs(); + return 0; } subsys_initcall(dma_heap_init); + +static void __exit dma_heap_exit(void) +{ + dma_heap_exit_debugfs(); +} +__exitcall(dma_heap_exit); diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h index 27d15f60950a..065f537177af 100644 --- a/include/linux/dma-heap.h +++ b/include/linux/dma-heap.h @@ -12,6 +12,7 @@ 
#include struct dma_heap; +struct seq_file; /** * struct dma_heap_ops - ops to operate on a given heap @@ -24,6 +25,7 @@ struct dma_heap_ops { unsigned long len, u32 fd_flags, u64 heap_flags); + int (*show)(struct seq_file *s, struct dma_heap *heap); }; /** -- 2.50.0 From: Thierry Reding NVIDIA Tegra SoCs commonly define a Video-Protection-Region, which is a region of memory dedicated to content-protected video decode and playback. This memory cannot be accessed by the CPU and only certain hardware devices have access to it. Expose the VPR as a DMA heap so that applications and drivers can allocate buffers from this region for use-cases that require this kind of protected memory. VPR has a few critical peculiarities. First, it must be a single contiguous region of memory (there is a single pair of registers that set the base address and size of the region), which is configured by calling back into the secure monitor. The memory region also needs to be quite large for some use-cases because it needs to fit multiple video frames (8K video should be supported), so VPR sizes of ~2 GiB are expected. However, some devices cannot afford to reserve this amount of memory for a particular use-case, and therefore the VPR must be resizable. Unfortunately, resizing the VPR is slightly tricky because the GPU found on Tegra SoCs must be in reset during the VPR resize operation. This is currently implemented by freezing all userspace processes, invoking the GPU's freeze() implementation, resizing, and then thawing the GPU and userspace processes. This is quite heavy-handed, so eventually it might be better to implement freeze/thaw in the GPU driver in such a way that accesses to the GPU are simply blocked, so that the VPR resize operation can happen without suspending all of userspace. In order to balance the memory usage versus the amount of resizing that needs to happen, the VPR is divided into multiple chunks. Each chunk is implemented as a CMA area that is completely allocated on first use to guarantee the contiguity of the VPR. Once all buffers from a chunk have been freed, the CMA area is deallocated and the memory returned to the system. Signed-off-by: Thierry Reding --- drivers/dma-buf/heaps/Kconfig | 7 + drivers/dma-buf/heaps/Makefile | 1 + drivers/dma-buf/heaps/tegra-vpr.c | 831 ++++++++++++++++++++++++++++++ include/trace/events/tegra_vpr.h | 57 ++ 4 files changed, 896 insertions(+) create mode 100644 drivers/dma-buf/heaps/tegra-vpr.c create mode 100644 include/trace/events/tegra_vpr.h diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index bb369b38b001..af97af1bb420 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -22,3 +22,10 @@ config DMABUF_HEAPS_CMA_LEGACY from the CMA area's devicetree node, or "reserved" if the area is not defined in the devicetree. This uses the same underlying allocator as CONFIG_DMABUF_HEAPS_CMA. + +config DMABUF_HEAPS_TEGRA_VPR + bool "NVIDIA Tegra Video-Protected-Region DMA-BUF Heap" + depends on DMABUF_HEAPS && DMA_CMA + help + Choose this option to enable Video-Protected-Region (VPR) support on + a range of NVIDIA Tegra devices.
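For context, a minimal userspace sketch of how such a heap would be consumed through the generic DMA-BUF heap ioctl; the /dev/dma_heap/... node name is an assumption based on the reserved-memory node name used later in this series:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/dma-heap.h>

    static int vpr_alloc(size_t size)
    {
            struct dma_heap_allocation_data data = {
                    .len = size,
                    .fd_flags = O_RDWR | O_CLOEXEC,
            };
            int heap, err;

            heap = open("/dev/dma_heap/video-protection-region",
                        O_RDONLY | O_CLOEXEC);
            if (heap < 0)
                    return -1;

            err = ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &data);
            close(heap);
            if (err < 0)
                    return -1;

            /* data.fd is a dma-buf; CPU access via mmap() will fail with -EPERM */
            return data.fd;
    }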
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 974467791032..265b77a7b889 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o +obj-$(CONFIG_DMABUF_HEAPS_TEGRA_VPR) += tegra-vpr.o diff --git a/drivers/dma-buf/heaps/tegra-vpr.c b/drivers/dma-buf/heaps/tegra-vpr.c new file mode 100644 index 000000000000..a36efeb031b8 --- /dev/null +++ b/drivers/dma-buf/heaps/tegra-vpr.c @@ -0,0 +1,831 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMA-BUF restricted heap exporter for NVIDIA Video-Protection-Region (VPR) + * + * Copyright (C) 2024-2025 NVIDIA Corporation + */ + +#define pr_fmt(fmt) "tegra-vpr: " fmt + +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include + +#define CREATE_TRACE_POINTS +#include + +struct tegra_vpr; + +struct tegra_vpr_device { + struct list_head node; + struct device *dev; +}; + +struct tegra_vpr_chunk { + phys_addr_t start; + phys_addr_t limit; + size_t size; + + struct tegra_vpr *vpr; + struct cma *cma; + bool active; + + struct page *start_page; + unsigned long *bitmap; + unsigned long virt; + pgoff_t num_pages; + + struct list_head buffers; + struct mutex lock; +}; + +struct tegra_vpr { + struct device_node *dev_node; + unsigned long align; + phys_addr_t base; + phys_addr_t size; + bool use_freezer; + + struct tegra_vpr_chunk *chunks; + unsigned int num_chunks; + + struct list_head devices; + struct mutex lock; +}; + +struct tegra_vpr_buffer { + struct tegra_vpr_chunk *chunk; + struct list_head attachments; + struct list_head list; + struct mutex lock; + + struct page *start_page; + struct page **pages; + pgoff_t num_pages; + phys_addr_t start; + phys_addr_t limit; + size_t size; + int pageno; + int order; + + unsigned long virt; +}; + +struct tegra_vpr_attachment { + struct device *dev; + struct sg_table sgt; + struct list_head list; +}; + +#define ARM_SMCCC_TE_FUNC_PROGRAM_VPR 0x3 + +#define ARM_SMCCC_VENDOR_SIP_TE_PROGRAM_VPR_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_SIP, \ + ARM_SMCCC_TE_FUNC_PROGRAM_VPR) + +static int tegra_vpr_set(phys_addr_t base, phys_addr_t size) +{ + struct arm_smccc_res res; + + arm_smccc_smc(ARM_SMCCC_VENDOR_SIP_TE_PROGRAM_VPR_FUNC_ID, base, size, + 0, 0, 0, 0, 0, &res); + + return res.a0; +} + +static int tegra_vpr_get_extents(struct tegra_vpr *vpr, phys_addr_t *base, + phys_addr_t *size) +{ + phys_addr_t start = ~0, limit = 0; + unsigned int i; + + for (i = 0; i < vpr->num_chunks; i++) { + struct tegra_vpr_chunk *chunk = &vpr->chunks[i]; + + if (!chunk->active) + break; + + if (chunk->start < start) + start = chunk->start; + + if (chunk->limit > limit) + limit = chunk->limit; + } + + if (limit > start) { + *size = limit - start; + *base = start; + } else { + *base = *size = 0; + } + + return 0; +} + +static int tegra_vpr_resize(struct tegra_vpr *vpr) +{ + struct tegra_vpr_device *node; + phys_addr_t base, size; + int err; + + err = tegra_vpr_get_extents(vpr, &base, &size); + if (err < 0) { + pr_err("%s(): failed to get VPR extents: %d\n", __func__, err); + return err; + } + + if (vpr->use_freezer) { + err = freeze_processes(); + if (err < 0) { + pr_err("%s(): failed to freeze processes: %d\n", + __func__, err); + return err; + } + } + + list_for_each_entry(node, &vpr->devices, node) { + err = pm_generic_freeze(node->dev); + if (err < 
0) { + pr_err("failed to freeze %s\n", + dev_name(node->dev)); + continue; + } + } + + trace_tegra_vpr_set(base, size); + + err = tegra_vpr_set(base, size); + if (err < 0) + pr_err("failed to secure VPR: %d\n", err); + else + err = 0; + + list_for_each_entry(node, &vpr->devices, node) { + if (pm_generic_thaw(node->dev) < 0) { + pr_err("failed to thaw %s\n", + dev_name(node->dev)); + continue; + } + } + + if (vpr->use_freezer) + thaw_processes(); + + return err; +} + +static int tegra_vpr_protect_pages(pte_t *ptep, unsigned long addr, + void *unused) +{ + pte_t pte = __ptep_get(ptep); + + pte = clear_pte_bit(pte, __pgprot(PROT_NORMAL)); + pte = set_pte_bit(pte, __pgprot(PROT_DEVICE_nGnRnE)); + + __set_pte(ptep, pte); + + return 0; +} + +static int tegra_vpr_unprotect_pages(pte_t *ptep, unsigned long addr, + void *unused) +{ + pte_t pte = __ptep_get(ptep); + + pte = clear_pte_bit(pte, __pgprot(PROT_DEVICE_nGnRnE)); + pte = set_pte_bit(pte, __pgprot(PROT_NORMAL)); + + __set_pte(ptep, pte); + + return 0; +} + +static int tegra_vpr_chunk_init(struct tegra_vpr *vpr, + struct tegra_vpr_chunk *chunk, + phys_addr_t start, size_t size, + unsigned int order, const char *name) +{ + INIT_LIST_HEAD(&chunk->buffers); + chunk->start = start; + chunk->limit = start + size; + chunk->size = size; + chunk->vpr = vpr; + + chunk->cma = cma_create(start, size, order, name); + if (IS_ERR(chunk->cma)) + return PTR_ERR(chunk->cma); + + chunk->num_pages = size >> PAGE_SHIFT; + + chunk->bitmap = bitmap_zalloc(chunk->num_pages, GFP_KERNEL); + if (!chunk->bitmap) { + cma_free(chunk->cma); + return -ENOMEM; + } + + /* CMA area is not reserved yet */ + chunk->start_page = NULL; + chunk->virt = 0; + + return 0; +} + +static void tegra_vpr_chunk_free(struct tegra_vpr_chunk *chunk) +{ + kfree(chunk->bitmap); + cma_free(chunk->cma); +} + +static inline bool tegra_vpr_chunk_is_last(const struct tegra_vpr_chunk *chunk) +{ + phys_addr_t limit = chunk->vpr->base + chunk->vpr->size; + + return chunk->limit == limit; +} + +static inline bool tegra_vpr_chunk_is_leaf(const struct tegra_vpr_chunk *chunk) +{ + const struct tegra_vpr_chunk *next = chunk + 1; + + if (tegra_vpr_chunk_is_last(chunk)) + return true; + + return !next->active; +} + +static int tegra_vpr_chunk_activate(struct tegra_vpr_chunk *chunk) +{ + unsigned long align = get_order(chunk->vpr->align); + int err; + + if (chunk->active) + return 0; + + trace_tegra_vpr_chunk_activate(chunk->start, chunk->limit); + + chunk->start_page = cma_alloc(chunk->cma, chunk->num_pages, align, + false); + if (!chunk->start_page) { + err = -ENOMEM; + goto free; + } + + chunk->virt = (unsigned long)page_to_virt(chunk->start_page); + + apply_to_existing_page_range(&init_mm, chunk->virt, chunk->size, + tegra_vpr_protect_pages, NULL); + flush_tlb_kernel_range(chunk->virt, chunk->virt + chunk->size); + + chunk->active = true; + + err = tegra_vpr_resize(chunk->vpr); + if (err < 0) + goto unprotect; + + bitmap_zero(chunk->bitmap, chunk->num_pages); + + return 0; + +unprotect: + chunk->active = false; + apply_to_existing_page_range(&init_mm, chunk->virt, chunk->size, + tegra_vpr_unprotect_pages, NULL); + flush_tlb_kernel_range(chunk->virt, chunk->virt + chunk->size); +free: + cma_release(chunk->cma, chunk->start_page, chunk->num_pages); + chunk->start_page = NULL; + chunk->virt = 0; + return err; +} + +static int tegra_vpr_chunk_deactivate(struct tegra_vpr_chunk *chunk) +{ + int err; + + if (!chunk->active || !tegra_vpr_chunk_is_leaf(chunk)) + return 0; + + /* do
not deactivate if there are buffers left in this chunk */ + if (WARN_ON(!list_empty(&chunk->buffers))) + return 0; + + trace_tegra_vpr_chunk_deactivate(chunk->start, chunk->limit); + + chunk->active = false; + + err = tegra_vpr_resize(chunk->vpr); + if (err < 0) { + chunk->active = true; + return err; + } + + apply_to_existing_page_range(&init_mm, chunk->virt, chunk->size, + tegra_vpr_unprotect_pages, NULL); + flush_tlb_kernel_range(chunk->virt, chunk->virt + chunk->size); + + cma_release(chunk->cma, chunk->start_page, chunk->num_pages); + chunk->start_page = NULL; + chunk->virt = 0; + + return 0; +} + +static struct tegra_vpr_buffer * +tegra_vpr_chunk_allocate(struct tegra_vpr_chunk *chunk, size_t size) +{ + unsigned int order = get_order(size); + struct tegra_vpr_buffer *buffer; + int pageno, err; + pgoff_t i; + + err = tegra_vpr_chunk_activate(chunk); + if (err < 0) + return ERR_PTR(err); + + /* + * "order" defines the alignment and size, so this may result in + * fragmented memory depending on the allocation patterns. However, + * since this is used primarily for video frames, it is expected that + * a number of buffers of the same size will be allocated, so + * fragmentation should be negligible. + */ + pageno = bitmap_find_free_region(chunk->bitmap, chunk->num_pages, + order); + if (pageno < 0) + return ERR_PTR(-ENOSPC); + + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) { + err = -ENOMEM; + goto release; + } + + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + buffer->chunk = chunk; + buffer->start = chunk->start + (pageno << PAGE_SHIFT); + buffer->limit = buffer->start + size; + buffer->size = size; + buffer->num_pages = buffer->size >> PAGE_SHIFT; + buffer->pageno = pageno; + buffer->order = order; + + buffer->virt = (unsigned long)page_to_virt(chunk->start_page + pageno); + + buffer->pages = kmalloc_array(buffer->num_pages, + sizeof(*buffer->pages), + GFP_KERNEL); + if (!buffer->pages) { + err = -ENOMEM; + goto free; + } + + for (i = 0; i < buffer->num_pages; i++) + buffer->pages[i] = &chunk->start_page[pageno + i]; + + list_add_tail(&buffer->list, &chunk->buffers); + + return buffer; + +free: + kfree(buffer); +release: + bitmap_release_region(chunk->bitmap, pageno, order); + return ERR_PTR(err); +} + +static void tegra_vpr_chunk_release(struct tegra_vpr_chunk *chunk, + struct tegra_vpr_buffer *buffer) +{ + list_del(&buffer->list); + bitmap_release_region(chunk->bitmap, buffer->pageno, buffer->order); + + kfree(buffer->pages); + kfree(buffer); +} + +static int tegra_vpr_attach(struct dma_buf *buf, + struct dma_buf_attachment *attachment) +{ + struct tegra_vpr_buffer *buffer = buf->priv; + struct tegra_vpr_attachment *attach; + int err; + + attach = kzalloc(sizeof(*attach), GFP_KERNEL); + if (!attach) + return -ENOMEM; + + err = sg_alloc_table_from_pages(&attach->sgt, buffer->pages, + buffer->num_pages, 0, buffer->size, + GFP_KERNEL); + if (err < 0) + goto free; + + attach->dev = attachment->dev; + INIT_LIST_HEAD(&attach->list); + attachment->priv = attach; + + mutex_lock(&buffer->lock); + list_add(&attach->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; + +free: + kfree(attach); + return err; +} + +static void tegra_vpr_detach(struct dma_buf *buf, + struct dma_buf_attachment *attachment) +{ + struct tegra_vpr_buffer *buffer = buf->priv; + struct tegra_vpr_attachment *attach = attachment->priv; + + mutex_lock(&buffer->lock); + list_del(&attach->list); + mutex_unlock(&buffer->lock); + + sg_free_table(&attach->sgt); +
kfree(attach); +} + +static struct sg_table * +tegra_vpr_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct tegra_vpr_attachment *attach = attachment->priv; + struct sg_table *sgt = &attach->sgt; + int err; + + err = dma_map_sgtable(attachment->dev, sgt, direction, + DMA_ATTR_SKIP_CPU_SYNC); + if (err < 0) + return ERR_PTR(err); + + return sgt; +} + +static void tegra_vpr_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *sgt, + enum dma_data_direction direction) +{ + dma_unmap_sgtable(attachment->dev, sgt, direction, + DMA_ATTR_SKIP_CPU_SYNC); +} + +static void tegra_vpr_recycle(struct tegra_vpr *vpr) +{ + unsigned int i; + int err; + + /* + * Walk the list of chunks in reverse order and check if they can be + * deactivated. + */ + for (i = 0; i < vpr->num_chunks; i++) { + unsigned int index = vpr->num_chunks - i - 1; + struct tegra_vpr_chunk *chunk = &vpr->chunks[index]; + + /* + * Stop at any chunk that has remaining buffers. We cannot + * deactivate any chunks at lower addresses because the + * protected region needs to remain contiguous. Technically we + * could shrink from top and bottom, but for the sake of + * simplicity we'll only shrink from the top for now. + */ + if (!list_empty(&chunk->buffers)) + break; + + err = tegra_vpr_chunk_deactivate(chunk); + if (err < 0) + pr_err("failed to deactivate chunk\n"); + } +} + +static void tegra_vpr_release(struct dma_buf *buf) +{ + struct tegra_vpr_buffer *buffer = buf->priv; + struct tegra_vpr_chunk *chunk = buffer->chunk; + struct tegra_vpr *vpr = chunk->vpr; + + mutex_lock(&vpr->lock); + + tegra_vpr_chunk_release(chunk, buffer); + tegra_vpr_recycle(vpr); + + mutex_unlock(&vpr->lock); +} + +/* + * Prohibit userspace mapping because the CPU cannot access this memory + * anyway. + */ +static int tegra_vpr_begin_cpu_access(struct dma_buf *buf, + enum dma_data_direction direction) +{ + return -EPERM; +} + +static int tegra_vpr_end_cpu_access(struct dma_buf *buf, + enum dma_data_direction direction) +{ + return -EPERM; +} + +static int tegra_vpr_mmap(struct dma_buf *buf, struct vm_area_struct *vma) +{ + return -EPERM; +} + +static const struct dma_buf_ops tegra_vpr_buf_ops = { + .attach = tegra_vpr_attach, + .detach = tegra_vpr_detach, + .map_dma_buf = tegra_vpr_map_dma_buf, + .unmap_dma_buf = tegra_vpr_unmap_dma_buf, + .release = tegra_vpr_release, + .begin_cpu_access = tegra_vpr_begin_cpu_access, + .end_cpu_access = tegra_vpr_end_cpu_access, + .mmap = tegra_vpr_mmap, +}; + +static struct dma_buf *tegra_vpr_allocate(struct dma_heap *heap, + unsigned long len, u32 fd_flags, + u64 heap_flags) +{ + struct tegra_vpr *vpr = dma_heap_get_drvdata(heap); + DEFINE_DMA_BUF_EXPORT_INFO(export); + struct tegra_vpr_buffer *buffer; + struct dma_buf *buf; + unsigned int i; + + mutex_lock(&vpr->lock); + + for (i = 0; i < vpr->num_chunks; i++) { + struct tegra_vpr_chunk *chunk = &vpr->chunks[i]; + size_t size = ALIGN(len, vpr->align); + + buffer = tegra_vpr_chunk_allocate(chunk, size); + if (IS_ERR(buffer)) { + /* try the next chunk if the current one is exhausted */ + if (PTR_ERR(buffer) == -ENOSPC) + continue; + + mutex_unlock(&vpr->lock); + return ERR_CAST(buffer); + } + + /* + * If a valid buffer was allocated, wrap it in a dma_buf and + * return it. 
+ */ + if (buffer) { + export.exp_name = dma_heap_get_name(heap); + export.ops = &tegra_vpr_buf_ops; + export.size = buffer->size; + export.flags = fd_flags; + export.priv = buffer; + + buf = dma_buf_export(&export); + if (IS_ERR(buf)) { + tegra_vpr_chunk_release(chunk, buffer); + mutex_unlock(&vpr->lock); + return ERR_CAST(buf); + } + + mutex_unlock(&vpr->lock); + return buf; + } + } + + mutex_unlock(&vpr->lock); + + /* + * If we get here, none of the chunks could allocate a buffer, so + * there's nothing else we can do. + */ + return ERR_PTR(-ENOMEM); +} + +static int tegra_vpr_debugfs_show(struct seq_file *s, struct dma_heap *heap) +{ + struct tegra_vpr *vpr = dma_heap_get_drvdata(heap); + phys_addr_t limit = vpr->base + vpr->size; + unsigned int i; + char buf[16]; + + string_get_size(vpr->size, 1, STRING_UNITS_2, buf, sizeof(buf)); + seq_printf(s, "%pap-%pap (%s)\n", &vpr->base, &limit, buf); + + for (i = 0; i < vpr->num_chunks; i++) { + const struct tegra_vpr_chunk *chunk = &vpr->chunks[i]; + struct tegra_vpr_buffer *buffer; + + string_get_size(chunk->size, 1, STRING_UNITS_2, buf, + sizeof(buf)); + seq_printf(s, " %pap-%pap (%s)\n", &chunk->start, + &chunk->limit, buf); + + list_for_each_entry(buffer, &chunk->buffers, list) { + string_get_size(buffer->size, 1, STRING_UNITS_2, buf, + sizeof(buf)); + seq_printf(s, " %pap-%pap (%s)\n", &buffer->start, + &buffer->limit, buf); + } + } + + return 0; +} + +static const struct dma_heap_ops tegra_vpr_heap_ops = { + .allocate = tegra_vpr_allocate, + .show = tegra_vpr_debugfs_show, +}; + +static int __init tegra_vpr_add_heap(struct reserved_mem *rmem, + struct device_node *np) +{ + struct dma_heap_export_info info = {}; + phys_addr_t start, limit; + struct dma_heap *heap; + struct tegra_vpr *vpr; + unsigned int order, i; + size_t max_size; + int err; + + vpr = kzalloc(sizeof(*vpr), GFP_KERNEL); + if (!vpr) { + err = -ENOMEM; + goto out; + } + + INIT_LIST_HEAD(&vpr->devices); + mutex_init(&vpr->lock); + vpr->use_freezer = true; + vpr->dev_node = np; + vpr->align = SZ_1M; + vpr->base = rmem->base; + vpr->size = rmem->size; + vpr->num_chunks = 4; + + max_size = PAGE_SIZE << (get_order(vpr->size) - ilog2(vpr->num_chunks)); + order = get_order(vpr->align); + + vpr->chunks = kcalloc(vpr->num_chunks, sizeof(*vpr->chunks), + GFP_KERNEL); + if (!vpr->chunks) { + err = -ENOMEM; + goto free; + } + + /* + * Allocate CMA areas for VPR. All areas will be roughly the same + * size, with the last area taking up the rest.
+ */ + start = vpr->base; + limit = vpr->base + vpr->size; + + pr_debug("VPR: %pap-%pap (%u chunks, %lu MiB)\n", &start, &limit, + vpr->num_chunks, (unsigned long)vpr->size / 1024 / 1024); + + for (i = 0; i < vpr->num_chunks; i++) { + size_t size = limit - start; + phys_addr_t end; + + size = min_t(size_t, size, max_size); + end = start + size - 1; + + err = tegra_vpr_chunk_init(vpr, &vpr->chunks[i], start, size, + order, rmem->name); + if (err < 0) { + pr_err("failed to create VPR chunk: %d\n", err); + goto free; + } + + pr_debug(" %2u: %pap-%pap (%lu MiB)\n", i, &start, &end, + size / 1024 / 1024); + start += size; + } + + info.name = vpr->dev_node->name; + info.ops = &tegra_vpr_heap_ops; + info.priv = vpr; + + heap = dma_heap_add(&info); + if (IS_ERR(heap)) { + err = PTR_ERR(heap); + goto cma_free; + } + + rmem->priv = heap; + + return 0; + +cma_free: + while (i--) + tegra_vpr_chunk_free(&vpr->chunks[i]); +free: + kfree(vpr->chunks); + kfree(vpr); +out: + return err; +} + +static int __init tegra_vpr_init(void) +{ + const char *compatible = "nvidia,tegra-video-protection-region"; + struct device_node *parent; + struct reserved_mem *rmem; + int err; + + parent = of_find_node_by_path("/reserved-memory"); + if (!parent) + return 0; + + for_each_child_of_node_scoped(parent, child) { + if (!of_device_is_compatible(child, compatible)) + continue; + + rmem = of_reserved_mem_lookup(child); + if (!rmem) + continue; + + err = tegra_vpr_add_heap(rmem, child); + if (err < 0) + pr_err("failed to add VPR heap for %pOF: %d\n", child, + err); + + /* only a single VPR heap is supported */ + break; + } + + return 0; +} +module_init(tegra_vpr_init); + +static int tegra_vpr_device_init(struct reserved_mem *rmem, struct device *dev) +{ + struct dma_heap *heap = rmem->priv; + struct tegra_vpr *vpr = dma_heap_get_drvdata(heap); + struct tegra_vpr_device *node; + int err = 0; + + if (!dev->driver->pm->freeze || !dev->driver->pm->thaw) + return -EINVAL; + + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) { + err = -ENOMEM; + goto out; + } + + INIT_LIST_HEAD(&node->node); + node->dev = dev; + + list_add_tail(&node->node, &vpr->devices); + +out: + return err; +} + +static void tegra_vpr_device_release(struct reserved_mem *rmem, + struct device *dev) +{ + struct dma_heap *heap = rmem->priv; + struct tegra_vpr *vpr = dma_heap_get_drvdata(heap); + struct tegra_vpr_device *node, *tmp; + + list_for_each_entry_safe(node, tmp, &vpr->devices, node) { + if (node->dev == dev) { + list_del(&node->node); + kfree(node); + } + } +} + +static const struct reserved_mem_ops tegra_vpr_ops = { + .device_init = tegra_vpr_device_init, + .device_release = tegra_vpr_device_release, +}; + +static int tegra_vpr_rmem_init(struct reserved_mem *rmem) +{ + rmem->ops = &tegra_vpr_ops; + + return 0; +} +RESERVEDMEM_OF_DECLARE(tegra_vpr, "nvidia,tegra-video-protection-region", + tegra_vpr_rmem_init); + +MODULE_DESCRIPTION("NVIDIA Tegra Video-Protection-Region DMA-BUF heap driver"); +MODULE_LICENSE("GPL"); diff --git a/include/trace/events/tegra_vpr.h b/include/trace/events/tegra_vpr.h new file mode 100644 index 000000000000..f8ceb17679fe --- /dev/null +++ b/include/trace/events/tegra_vpr.h @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#if !defined(_TRACE_TEGRA_VPR_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_TEGRA_VPR_H + +#undef TRACE_SYSTEM +#define TRACE_SYSTEM tegra_vpr + +#include + +TRACE_EVENT(tegra_vpr_chunk_activate, + TP_PROTO(phys_addr_t start, phys_addr_t limit), + TP_ARGS(start, limit), + 
TP_STRUCT__entry( + __field(phys_addr_t, start) + __field(phys_addr_t, limit) + ), + TP_fast_assign( + __entry->start = start; + __entry->limit = limit; + ), + TP_printk("%pap-%pap", &__entry->start, + &__entry->limit) +); + +TRACE_EVENT(tegra_vpr_chunk_deactivate, + TP_PROTO(phys_addr_t start, phys_addr_t limit), + TP_ARGS(start, limit), + TP_STRUCT__entry( + __field(phys_addr_t, start) + __field(phys_addr_t, limit) + ), + TP_fast_assign( + __entry->start = start; + __entry->limit = limit; + ), + TP_printk("%pap-%pap", &__entry->start, + &__entry->limit) +); + +TRACE_EVENT(tegra_vpr_set, + TP_PROTO(phys_addr_t base, phys_addr_t size), + TP_ARGS(base, size), + TP_STRUCT__entry( + __field(phys_addr_t, start) + __field(phys_addr_t, limit) + ), + TP_fast_assign( + __entry->start = base; + __entry->limit = base + size; + ), + TP_printk("%pap-%pap", &__entry->start, &__entry->limit) +); + +#endif /* _TRACE_TEGRA_VPR_H */ + +#include -- 2.50.0 From: Thierry Reding This node contains two sets of properties, one for the case where the VPR is resizable (in which case the VPR region will be dynamically allocated at boot time) and another case where the VPR is fixed in size and initialized by early firmware. The firmware running on the device is responsible for updating the node with the real physical address for the fixed VPR case and remove the properties needed only for resizable VPR. Similarly, if the VPR is resizable, the firmware should remove the "reg" property since it is no longer needed. Signed-off-by: Thierry Reding --- arch/arm64/boot/dts/nvidia/tegra234.dtsi | 34 ++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi index df034dbb8285..4d572f5fa0b1 100644 --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi @@ -28,6 +28,40 @@ aliases { i2c8 = &dp_aux_ch3_i2c; }; + reserved-memory { + #address-cells = <2>; + #size-cells = <2>; + ranges; + + vpr: video-protection-region@0 { + compatible = "nvidia,tegra-video-protection-region"; + status = "disabled"; + no-map; + + /* + * Two variants exist for this. For fixed VPR, the + * firmware is supposed to update the "reg" property + * with the fixed memory region configured as VPR. + * + * For resizable VPR we don't care about the exact + * address and instead want a reserved region to be + * allocated with a certain size and alignment at + * boot time. + * + * The firmware is responsible for removing the + * unused set of properties. 
+ */ + + /* fixed VPR */ + reg = <0x0 0x0 0x0 0x0>; + + /* resizable VPR */ + size = <0x0 0x70000000>; + alignment = <0x0 0x100000>; + reusable; + }; + }; + bus@0 { compatible = "simple-bus"; -- 2.50.0 From: Thierry Reding Signed-off-by: Thierry Reding --- arch/arm64/boot/dts/nvidia/tegra234.dtsi | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi index 4d572f5fa0b1..4f8031055ad0 100644 --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi @@ -5262,6 +5262,23 @@ pcie-ep@141e0000 { }; }; + gpu@17000000 { + compatible = "nvidia,ga10b"; + reg = <0x0 0x17000000 0x0 0x1000000>, + <0x0 0x18000000 0x0 0x1000000>; + interrupts = , + , + , + ; + interrupt-names = "nonstall", "stall0", "stall1", "stall2"; + power-domains = <&bpmp TEGRA234_POWER_DOMAIN_GPU>; + clocks = <&bpmp TEGRA234_CLK_GPUSYS>, + <&bpmp TEGRA234_CLK_GPC0CLK>, + <&bpmp TEGRA234_CLK_GPC1CLK>; + clock-names = "sys", "gpc0", "gpc1"; + resets = <&bpmp TEGRA234_RESET_GPU>; + }; + sram@40000000 { compatible = "nvidia,tegra234-sysram", "mmio-sram"; reg = <0x0 0x40000000 0x0 0x80000>; -- 2.50.0 From: Thierry Reding The host1x needs access to the VPR region, so make sure to reference it via the memory-region property. Signed-off-by: Thierry Reding --- arch/arm64/boot/dts/nvidia/tegra234.dtsi | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi index 4f8031055ad0..0b9c2e1b47d2 100644 --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi @@ -4414,6 +4414,9 @@ host1x@13e00000 { <14 &smmu_niso1 TEGRA234_SID_HOST1X_CTX6 1>, <15 &smmu_niso1 TEGRA234_SID_HOST1X_CTX7 1>; + memory-region = <&vpr>; + memory-region-names = "protected"; + vic@15340000 { compatible = "nvidia,tegra234-vic"; reg = <0x0 0x15340000 0x0 0x00040000>; -- 2.50.0 From: Thierry Reding The GPU needs to be idled before the VPR can be resized and unidled afterwards. Associate it with the VPR using the standard memory-region device tree property. Signed-off-by: Thierry Reding --- arch/arm64/boot/dts/nvidia/tegra234.dtsi | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi index 0b9c2e1b47d2..98d87144a2e4 100644 --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi @@ -5280,6 +5280,9 @@ gpu@17000000 { <&bpmp TEGRA234_CLK_GPC1CLK>; clock-names = "sys", "gpc0", "gpc1"; resets = <&bpmp TEGRA234_RESET_GPU>; + + memory-region-names = "vpr"; + memory-region = <&vpr>; }; sram@40000000 { -- 2.50.0
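As a closing sketch of the driver-side contract implied by these memory-region links: a client driver (such as the GPU driver) is expected to provide freeze()/thaw() callbacks, which tegra_vpr_device_init() requires, and to attach to the region through the reserved-memory device API. All names below are placeholders:

    #include <linux/of_reserved_mem.h>
    #include <linux/platform_device.h>
    #include <linux/pm.h>

    static int example_gpu_freeze(struct device *dev)
    {
            /* idle the engine so it cannot access the VPR while it is resized */
            return 0;
    }

    static int example_gpu_thaw(struct device *dev)
    {
            /* resume operation once the new VPR extents have been programmed */
            return 0;
    }

    static const struct dev_pm_ops example_gpu_pm_ops = {
            .freeze = example_gpu_freeze,
            .thaw = example_gpu_thaw,
    };

    static int example_gpu_probe(struct platform_device *pdev)
    {
            /*
             * Resolves the "memory-region" phandle and calls the region's
             * .device_init() op, i.e. tegra_vpr_device_init() in this series.
             */
            return of_reserved_mem_device_init(&pdev->dev);
    }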