
Merge tag 'drm-intel-next-2020-01-14' of git://anongit.freedesktop.org/drm/drm-intel...
author	Dave Airlie <airlied@redhat.com>
	Wed, 15 Jan 2020 06:57:53 +0000 (16:57 +1000)
committer	Dave Airlie <airlied@redhat.com>
	Wed, 15 Jan 2020 06:57:54 +0000 (16:57 +1000)
Final drm/i915 features for v5.6:
- DP MST fixes (José)
- Fix intel_bw_state memory leak (Pankaj Bharadiya)
- Switch context id allocation to xarray (Tvrtko)
- ICL/EHL/TGL workarounds (Matt Roper, Tvrtko)
- Debugfs for LMEM details (Lukasz Fiedorowicz)
- Prefer platform acronyms over codenames in symbols (Lucas)
- Tiled and port sync mode fixes for fbdev and DP (Manasi)
- DSI panel and backlight enable GPIO fixes (Hans de Goede)
- Relax audio min CDCLK requirements on non-GLK (Kai Vehmanen)
- Plane alignment and dimension check fixes (Imre)
- Fix state checks for PSR (José)
- Remove ICL+ clock gating programming (José)
- Static checker fixes around bool usage (Ma Feng)
- Bring back tests for self-contained headers in i915 (Masahiro Yamada)
- Fix DP MST disable sequence (Ville)
- Start converting i915 to the new drm device based logging macros (Wambui Karuga)
- Add DSI VBT I2C sequence execution (Vivek Kasireddy)
- Start using function pointers and ops structs in uc code (Michal)
- Fix PMU names to not use colons or dashes (Tvrtko)
- TGL media decompression support (DK, Imre)
- Split i915_gem_gtt.[ch] to more manageable chunks (Matthew Auld)
- Create dumb buffers in LMEM where available (Ram)
- Extend mmap support for LMEM (Abdiel)
- Selftest updates (Chris)
- Hack bump up CDCLK on TGL to avoid underruns (Stan)
- Use intel_encoder and intel_connector more instead of drm counterparts (Ville)
- Build error fixes (Zhang Xiaoxu)
- Fixes related to GPU and engine initialization/resume (Chris)
- Support for prefaulting discontiguous objects (Abdiel)
- Support discontiguous LMEM object maps (Chris)
- Various GEM and GT improvements and fixes (Chris)
- Merge pinctrl dependencies branch for the DSI GPIO updates (Jani)
- Backmerge drm-next for new logging macros (Jani)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87sgkil0v9.fsf@intel.com
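The commit above is a standard "merge a signed/annotated tag from a remote" pull, as used throughout the DRM subsystem. A minimal sketch of that workflow in a throwaway repository (all repository, branch, and tag names below are illustrative stand-ins; the real merge fetched the drm-intel-next-2020-01-14 tag from anongit.freedesktop.org):

```shell
# Sketch: reproduce the shape of a "Merge tag '...'" commit locally.
# Names are placeholders; this does not touch any real remote.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main demo && cd demo

# Base history on the integration branch.
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base"

# Feature work, wrapped in an annotated tag (what a maintainer pulls).
git checkout -q -b feature
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "feature work"
git -c user.name=dev -c user.email=dev@example.com \
    tag -a demo-next-tag -m "feature pull request"

# The maintainer merges the tag; --no-ff forces a real merge commit
# with two parents, like the drm-next merge above.
git checkout -q main
git -c user.name=dev -c user.email=dev@example.com \
    merge -q --no-ff demo-next-tag -m "Merge tag 'demo-next-tag' of demo"
git log --oneline -3
```

The resulting HEAD is a two-parent merge commit whose message carries the tag's description, which is how the feature summary above ends up in the mainline history.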
509 files changed:
Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-backend.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-engine.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-frontend.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun4i-a10-hdmi.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tv-encoder.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun6i-a31-drc.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-de2-mixer.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-dw-hdmi.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-hdmi-phy.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun8i-r40-tcon-top.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/allwinner,sun9i-a80-deu.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/ampire,am-480272h3tmqw-t01h.yaml [deleted file]
Documentation/devicetree/bindings/display/panel/ampire,am800480r3tmqwa1h.txt [deleted file]
Documentation/devicetree/bindings/display/panel/giantplus,gpm940b0.txt [deleted file]
Documentation/devicetree/bindings/display/panel/panel-simple.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/display/panel/sharp,ls020b1dd01d.txt [deleted file]
Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt [deleted file]
Documentation/devicetree/bindings/vendor-prefixes.yaml
MAINTAINERS
drivers/gpu/drm/Kconfig
drivers/gpu/drm/amd/amdgpu/amdgpu.h
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.h
drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.h
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.c
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
drivers/gpu/drm/amd/amdgpu/cik_sdma.c
drivers/gpu/drm/amd/amdgpu/df_v3_6.c
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
drivers/gpu/drm/amd/amdgpu/navi10_ih.c
drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
drivers/gpu/drm/amd/amdgpu/si_dma.c
drivers/gpu/drm/amd/amdgpu/soc15.c
drivers/gpu/drm/amd/amdgpu/umc_v6_1.c
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
drivers/gpu/drm/amd/amdgpu/vega10_ih.c
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
drivers/gpu/drm/amd/amdkfd/kfd_debugfs.c
drivers/gpu/drm/amd/amdkfd/kfd_device.c
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
drivers/gpu/drm/amd/amdkfd/kfd_process.c
drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
drivers/gpu/drm/amd/amdkfd/kfd_topology.h
drivers/gpu/drm/amd/display/Kconfig
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
drivers/gpu/drm/amd/display/dc/calcs/Makefile
drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.h
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
drivers/gpu/drm/amd/display/dc/core/dc.c
drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
drivers/gpu/drm/amd/display/dc/core/dc_stream.c
drivers/gpu/drm/amd/display/dc/dc.h
drivers/gpu/drm/amd/display/dc/dc_dsc.h
drivers/gpu/drm/amd/display/dc/dc_link.h
drivers/gpu/drm/amd/display/dc/dc_stream.h
drivers/gpu/drm/amd/display/dc/dc_types.h
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
drivers/gpu/drm/amd/display/dc/dcn20/Makefile
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dsc.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_link_encoder.h
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.h
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
drivers/gpu/drm/amd/display/dc/dcn21/Makefile
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.h
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_link_encoder.h
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
drivers/gpu/drm/amd/display/dc/dm_services_types.h
drivers/gpu/drm/amd/display/dc/dml/Makefile
drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
drivers/gpu/drm/amd/display/dc/dsc/Makefile
drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
drivers/gpu/drm/amd/display/dc/inc/hw/dwb.h
drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
drivers/gpu/drm/amd/display/dc/inc/resource.h
drivers/gpu/drm/amd/display/dc/os_types.h
drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h [new file with mode: 0644]
drivers/gpu/drm/amd/display/dmub/inc/dmub_srv.h
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
drivers/gpu/drm/amd/display/dmub/src/dmub_reg.h
drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
drivers/gpu/drm/amd/display/include/dal_asic_id.h
drivers/gpu/drm/amd/display/modules/color/color_gamma.c
drivers/gpu/drm/amd/display/modules/freesync/freesync.c
drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_transition.c
drivers/gpu/drm/amd/display/modules/hdcp/hdcp2_transition.c
drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
drivers/gpu/drm/amd/include/asic_reg/df/df_3_6_offset.h
drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_0_0_offset.h [new file with mode: 0644]
drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_0_0_sh_mask.h [new file with mode: 0644]
drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_1_0_offset.h [moved from drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_2_1_0_offset.h with 100% similarity]
drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_1_0_sh_mask.h [moved from drivers/gpu/drm/amd/include/asic_reg/dcn/dpcs_2_1_0_sh_mask.h with 100% similarity]
drivers/gpu/drm/amd/include/asic_reg/gc/gc_9_0_offset.h
drivers/gpu/drm/amd/include/asic_reg/umc/umc_6_1_2_offset.h [new file with mode: 0644]
drivers/gpu/drm/amd/include/atomfirmware.h
drivers/gpu/drm/amd/powerplay/amd_powerplay.c
drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h
drivers/gpu/drm/amd/powerplay/navi10_ppt.c
drivers/gpu/drm/amd/powerplay/navi10_ppt.h
drivers/gpu/drm/amd/powerplay/renoir_ppt.c
drivers/gpu/drm/amd/powerplay/smu_internal.h
drivers/gpu/drm/amd/powerplay/smu_v11_0.c
drivers/gpu/drm/amd/powerplay/smu_v12_0.c
drivers/gpu/drm/amd/powerplay/smumgr/smu10_smumgr.c
drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
drivers/gpu/drm/amd/powerplay/smumgr/vega12_smumgr.c
drivers/gpu/drm/amd/powerplay/smumgr/vega20_smumgr.c
drivers/gpu/drm/amd/powerplay/vega20_ppt.c
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
drivers/gpu/drm/drm_atomic.c
drivers/gpu/drm/drm_atomic_helper.c
drivers/gpu/drm/drm_bridge.c
drivers/gpu/drm/drm_debugfs_crc.c
drivers/gpu/drm/drm_dp_aux_dev.c
drivers/gpu/drm/drm_dp_helper.c
drivers/gpu/drm/drm_dp_mst_topology.c
drivers/gpu/drm/drm_fb_cma_helper.c
drivers/gpu/drm/drm_lock.c
drivers/gpu/drm/drm_modes.c
drivers/gpu/drm/etnaviv/etnaviv_drv.c
drivers/gpu/drm/exynos/exynos_drm_dsi.c
drivers/gpu/drm/gma500/psb_irq.c
drivers/gpu/drm/i915/display/intel_dp_mst.c
drivers/gpu/drm/lima/lima_sched.c
drivers/gpu/drm/lima/lima_sched.h
drivers/gpu/drm/mediatek/Makefile
drivers/gpu/drm/mediatek/mtk_disp_color.c
drivers/gpu/drm/mediatek/mtk_disp_ovl.c
drivers/gpu/drm/mediatek/mtk_disp_rdma.c
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
drivers/gpu/drm/mediatek/mtk_drm_crtc.h
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
drivers/gpu/drm/mediatek/mtk_drm_drv.c
drivers/gpu/drm/mediatek/mtk_drm_drv.h
drivers/gpu/drm/mediatek/mtk_drm_plane.c
drivers/gpu/drm/mediatek/mtk_drm_plane.h
drivers/gpu/drm/meson/meson_drv.h
drivers/gpu/drm/meson/meson_rdma.c
drivers/gpu/drm/nouveau/dispnv04/arb.c
drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
drivers/gpu/drm/nouveau/dispnv50/base907c.c
drivers/gpu/drm/nouveau/dispnv50/disp.c
drivers/gpu/drm/nouveau/dispnv50/disp.h
drivers/gpu/drm/nouveau/dispnv50/head.c
drivers/gpu/drm/nouveau/dispnv50/head.h
drivers/gpu/drm/nouveau/dispnv50/head507d.c
drivers/gpu/drm/nouveau/dispnv50/head827d.c
drivers/gpu/drm/nouveau/dispnv50/head907d.c
drivers/gpu/drm/nouveau/dispnv50/head917d.c
drivers/gpu/drm/nouveau/dispnv50/headc37d.c
drivers/gpu/drm/nouveau/dispnv50/headc57d.c
drivers/gpu/drm/nouveau/dispnv50/lut.c
drivers/gpu/drm/nouveau/dispnv50/wndw.c
drivers/gpu/drm/nouveau/dispnv50/wndw.h
drivers/gpu/drm/nouveau/dispnv50/wndwc37e.c
drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c
drivers/gpu/drm/nouveau/include/nvfw/acr.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvfw/flcn.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvfw/fw.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvfw/hs.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvfw/ls.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvfw/pmu.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvfw/sec2.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvif/class.h
drivers/gpu/drm/nouveau/include/nvif/if0008.h
drivers/gpu/drm/nouveau/include/nvif/mmu.h
drivers/gpu/drm/nouveau/include/nvkm/core/device.h
drivers/gpu/drm/nouveau/include/nvkm/core/falcon.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvkm/core/firmware.h
drivers/gpu/drm/nouveau/include/nvkm/core/memory.h
drivers/gpu/drm/nouveau/include/nvkm/core/msgqueue.h [deleted file]
drivers/gpu/drm/nouveau/include/nvkm/core/os.h
drivers/gpu/drm/nouveau/include/nvkm/engine/falcon.h
drivers/gpu/drm/nouveau/include/nvkm/engine/gr.h
drivers/gpu/drm/nouveau/include/nvkm/engine/nvdec.h
drivers/gpu/drm/nouveau/include/nvkm/engine/nvenc.h
drivers/gpu/drm/nouveau/include/nvkm/engine/sec2.h
drivers/gpu/drm/nouveau/include/nvkm/subdev/acr.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/include/nvkm/subdev/fault.h
drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
drivers/gpu/drm/nouveau/include/nvkm/subdev/ltc.h
drivers/gpu/drm/nouveau/include/nvkm/subdev/pmu.h
drivers/gpu/drm/nouveau/nouveau_bo.c
drivers/gpu/drm/nouveau/nouveau_dmem.c
drivers/gpu/drm/nouveau/nouveau_drm.c
drivers/gpu/drm/nouveau/nouveau_fence.c
drivers/gpu/drm/nouveau/nouveau_hwmon.c
drivers/gpu/drm/nouveau/nouveau_ttm.c
drivers/gpu/drm/nouveau/nvif/mmu.c
drivers/gpu/drm/nouveau/nvkm/Kbuild
drivers/gpu/drm/nouveau/nvkm/core/firmware.c
drivers/gpu/drm/nouveau/nvkm/core/subdev.c
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
drivers/gpu/drm/nouveau/nvkm/engine/device/priv.h
drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/Kbuild
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk20a.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm20b.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgv100.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxtu102.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/hubgk208.fuc5.h
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/hubgm107.fuc5.h
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf104.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf108.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf110.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf117.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf119.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110b.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gk208.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gm107.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gm200.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gm20b.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gp100.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gp102.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gp104.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gp107.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gp108.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/engine/gr/gp10b.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gv100.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/tu102.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/engine/nvdec/Kbuild
drivers/gpu/drm/nouveau/nvkm/engine/nvdec/base.c
drivers/gpu/drm/nouveau/nvkm/engine/nvdec/gm107.c [moved from drivers/gpu/drm/nouveau/nvkm/engine/nvdec/gp102.c with 56% similarity]
drivers/gpu/drm/nouveau/nvkm/engine/nvdec/priv.h
drivers/gpu/drm/nouveau/nvkm/engine/nvenc/Kbuild
drivers/gpu/drm/nouveau/nvkm/engine/nvenc/base.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/engine/nvenc/gm107.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/engine/nvenc/priv.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/engine/sec2/Kbuild
drivers/gpu/drm/nouveau/nvkm/engine/sec2/base.c
drivers/gpu/drm/nouveau/nvkm/engine/sec2/gp102.c
drivers/gpu/drm/nouveau/nvkm/engine/sec2/gp108.c [moved from drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r367.h with 50% similarity]
drivers/gpu/drm/nouveau/nvkm/engine/sec2/priv.h
drivers/gpu/drm/nouveau/nvkm/engine/sec2/tu102.c
drivers/gpu/drm/nouveau/nvkm/falcon/Kbuild
drivers/gpu/drm/nouveau/nvkm/falcon/base.c
drivers/gpu/drm/nouveau/nvkm/falcon/cmdq.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/falcon/msgq.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue_0137c63d.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue_0148cdec.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/falcon/priv.h
drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/falcon/v1.c
drivers/gpu/drm/nouveau/nvkm/nvfw/Kbuild [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/nvfw/acr.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/nvfw/flcn.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/nvfw/fw.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/nvfw/hs.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/nvfw/ls.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/Kbuild
drivers/gpu/drm/nouveau/nvkm/subdev/acr/Kbuild [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm20b.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp108.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp10b.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/lsfw.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/priv.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/acr/tu102.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/fault/Kbuild
drivers/gpu/drm/nouveau/nvkm/subdev/fault/base.c
drivers/gpu/drm/nouveau/nvkm/subdev/fault/gp100.c
drivers/gpu/drm/nouveau/nvkm/subdev/fault/gp10b.c [moved from drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_state.h with 57% similarity]
drivers/gpu/drm/nouveau/nvkm/subdev/fault/gv100.c
drivers/gpu/drm/nouveau/nvkm/subdev/fault/priv.h
drivers/gpu/drm/nouveau/nvkm/subdev/fault/tu102.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf100.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf108.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgk104.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgm107.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgm200.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp100.c
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/Kbuild
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/gv100.c
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/ltc/Kbuild
drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gp10b.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/ltc/priv.h
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gf100.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gm200.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/nv50.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/priv.h
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/tu102.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/ummu.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgf100.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/Kbuild
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gf100.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gf119.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk104.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk110.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk208.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk20a.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm107.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp100.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c [new file with mode: 0644]
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gt215.c
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/Kbuild [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r361.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r361.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r364.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r367.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r370.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r370.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r375.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/base.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm200.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm200.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm20b.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp102.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp108.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp10b.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/hs_ucode.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/hs_ucode.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_msgqueue.c [deleted file]
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/priv.h [deleted file]
drivers/gpu/drm/omapdrm/dss/dispc.c
drivers/gpu/drm/panel/Kconfig
drivers/gpu/drm/panel/Makefile
drivers/gpu/drm/panel/panel-simple.c
drivers/gpu/drm/panel/panel-sony-acx424akp.c [new file with mode: 0644]
drivers/gpu/drm/panfrost/panfrost_job.c
drivers/gpu/drm/radeon/atombios_crtc.c
drivers/gpu/drm/radeon/atombios_dp.c
drivers/gpu/drm/radeon/atombios_encoders.c
drivers/gpu/drm/radeon/atombios_i2c.c
drivers/gpu/drm/radeon/cik.c
drivers/gpu/drm/radeon/cik_sdma.c
drivers/gpu/drm/radeon/evergreen.c
drivers/gpu/drm/radeon/ni.c
drivers/gpu/drm/radeon/r100.c
drivers/gpu/drm/radeon/r600.c
drivers/gpu/drm/radeon/radeon_atombios.c
drivers/gpu/drm/radeon/radeon_bios.c
drivers/gpu/drm/radeon/radeon_connectors.c
drivers/gpu/drm/radeon/radeon_display.c
drivers/gpu/drm/radeon/radeon_dp_mst.c
drivers/gpu/drm/radeon/radeon_legacy_encoders.c
drivers/gpu/drm/radeon/radeon_pm.c
drivers/gpu/drm/radeon/radeon_vce.c
drivers/gpu/drm/radeon/radeon_vm.c
drivers/gpu/drm/radeon/rv770.c
drivers/gpu/drm/radeon/si.c
drivers/gpu/drm/rcar-du/rcar_lvds.c
drivers/gpu/drm/scheduler/sched_entity.c
drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
drivers/gpu/drm/sun4i/sun4i_backend.c
drivers/gpu/drm/sun4i/sun6i_drc.c
drivers/gpu/drm/tegra/dc.c
drivers/gpu/drm/tegra/dpaux.c
drivers/gpu/drm/tegra/drm.c
drivers/gpu/drm/tegra/drm.h
drivers/gpu/drm/tegra/dsi.c
drivers/gpu/drm/tegra/gr2d.c
drivers/gpu/drm/tegra/gr3d.c
drivers/gpu/drm/tegra/hdmi.c
drivers/gpu/drm/tegra/hub.c
drivers/gpu/drm/tegra/hub.h
drivers/gpu/drm/tegra/output.c
drivers/gpu/drm/tegra/sor.c
drivers/gpu/drm/tegra/vic.c
drivers/gpu/drm/udl/Kconfig
drivers/gpu/drm/v3d/v3d_drv.c
drivers/gpu/drm/vc4/vc4_dsi.c
drivers/gpu/drm/vc4/vc4_hdmi.c
drivers/gpu/drm/zte/zx_hdmi.c
drivers/gpu/drm/zte/zx_vga.c
drivers/gpu/host1x/bus.c
drivers/gpu/host1x/dev.c
drivers/gpu/host1x/syncpt.c
drivers/soc/mediatek/mtk-cmdq-helper.c
drivers/video/fbdev/mmp/hw/mmp_ctrl.c
include/drm/drm_atomic.h
include/drm/drm_bridge.h
include/drm/drm_dp_helper.h
include/drm/drm_dp_mst_helper.h
include/drm/drm_fb_cma_helper.h
include/drm/gpu_scheduler.h
include/drm/task_barrier.h [new file with mode: 0644]
include/linux/host1x.h
include/linux/mailbox/mtk-cmdq-mailbox.h
include/linux/soc/mediatek/mtk-cmdq.h

diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-backend.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-backend.yaml
new file mode 100644 (file)
index 0000000..86057d5
--- /dev/null
@@ -0,0 +1,291 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun4i-a10-display-backend.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A10 Display Engine Backend Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The display engine backend exposes layers and sprites to the system.
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun4i-a10-display-backend
+      - allwinner,sun5i-a13-display-backend
+      - allwinner,sun6i-a31-display-backend
+      - allwinner,sun7i-a20-display-backend
+      - allwinner,sun8i-a23-display-backend
+      - allwinner,sun8i-a33-display-backend
+      - allwinner,sun9i-a80-display-backend
+
+  reg:
+    minItems: 1
+    maxItems: 2
+    items:
+      - description: Display Backend registers
+      - description: SAT registers
+
+  reg-names:
+    minItems: 1
+    maxItems: 2
+    items:
+      - const: be
+      - const: sat
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    minItems: 3
+    maxItems: 4
+    items:
+      - description: The backend interface clock
+      - description: The backend module clock
+      - description: The backend DRAM clock
+      - description: The SAT clock
+
+  clock-names:
+    minItems: 3
+    maxItems: 4
+    items:
+      - const: ahb
+      - const: mod
+      - const: ram
+      - const: sat
+
+  resets:
+    minItems: 1
+    maxItems: 2
+    items:
+      - description: The Backend reset line
+      - description: The SAT reset line
+
+  reset-names:
+    minItems: 1
+    maxItems: 2
+    items:
+      - const: be
+      - const: sat
+
+  # FIXME: This should be made required eventually once every SoC will
+  # have the MBUS declared.
+  interconnects:
+    maxItems: 1
+
+  # FIXME: This should be made required eventually once every SoC will
+  # have the MBUS declared.
+  interconnect-names:
+    const: dma-mem
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+if:
+  properties:
+    compatible:
+      contains:
+        const: allwinner,sun8i-a33-display-backend
+
+then:
+  properties:
+    reg:
+      minItems: 2
+
+    reg-names:
+      minItems: 2
+
+    clocks:
+      minItems: 4
+
+    clock-names:
+      minItems: 4
+
+    resets:
+      minItems: 2
+
+    reset-names:
+      minItems: 2
+
+  required:
+    - reg-names
+    - reset-names
+
+else:
+  properties:
+    reg:
+      maxItems: 1
+
+    reg-names:
+      maxItems: 1
+
+    clocks:
+      maxItems: 3
+
+    clock-names:
+      maxItems: 3
+
+    resets:
+      maxItems: 1
+
+    reset-names:
+      maxItems: 1
+
+examples:
+  - |
+    /*
+     * This comes from the clock/sun4i-a10-ccu.h and
+     * reset/sun4i-a10-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_AHB_DE_BE0     42
+    #define CLK_DRAM_DE_BE0    140
+    #define CLK_DE_BE0         144
+    #define RST_DE_BE0         5
+
+    display-backend@1e60000 {
+        compatible = "allwinner,sun4i-a10-display-backend";
+        reg = <0x01e60000 0x10000>;
+        interrupts = <47>;
+        clocks = <&ccu CLK_AHB_DE_BE0>, <&ccu CLK_DE_BE0>,
+                 <&ccu CLK_DRAM_DE_BE0>;
+        clock-names = "ahb", "mod",
+                      "ram";
+        resets = <&ccu RST_DE_BE0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <0>;
+
+                endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&fe0_out_be0>;
+                };
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&fe1_out_be0>;
+                };
+            };
+
+            port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&tcon0_in_be0>;
+                };
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&tcon1_in_be0>;
+                };
+            };
+        };
+    };
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun8i-a23-a33-ccu.h and
+     * reset/sun8i-a23-a33-ccu.h headers, but we can't include them
+     * since it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_BUS_DE_BE      40
+    #define CLK_BUS_SAT        46
+    #define CLK_DRAM_DE_BE     84
+    #define CLK_DE_BE          85
+    #define RST_BUS_DE_BE      21
+    #define RST_BUS_SAT        27
+
+    display-backend@1e60000 {
+        compatible = "allwinner,sun8i-a33-display-backend";
+        reg = <0x01e60000 0x10000>, <0x01e80000 0x1000>;
+        reg-names = "be", "sat";
+        interrupts = <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_BUS_DE_BE>, <&ccu CLK_DE_BE>,
+                 <&ccu CLK_DRAM_DE_BE>, <&ccu CLK_BUS_SAT>;
+        clock-names = "ahb", "mod",
+                      "ram", "sat";
+        resets = <&ccu RST_BUS_DE_BE>, <&ccu RST_BUS_SAT>;
+        reset-names = "be", "sat";
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                reg = <0>;
+
+                endpoint {
+                    remote-endpoint = <&fe0_out_be0>;
+                };
+            };
+
+            port@1 {
+                reg = <1>;
+
+                endpoint {
+                    remote-endpoint = <&drc0_in_be0>;
+                };
+            };
+        };
+    };
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-engine.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-engine.yaml
new file mode 100644 (file)
index 0000000..944ff2f
--- /dev/null
@@ -0,0 +1,114 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun4i-a10-display-engine.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A10 Display Engine Pipeline Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The display engine pipeline (and its entry point, since it can be
+  either the backend or the frontend directly) is represented as an
+  extra node.
+
+  The Allwinner A10 Display pipeline is composed of several
+  components, documented below:
+
+  For all connections between components up to the TCONs in the
+  display pipeline, when there are multiple components of the same
+  type at the same depth, the local endpoint ID must be the same as
+  the remote component's index. For example, if the remote endpoint is
+  Frontend 1, then the local endpoint ID must be 1.
+
+  Frontend 0  [0] ------- [0]  Backend 0  [0] ------- [0]  TCON 0
+              [1] --   -- [1]             [1] --   -- [1]
+                    \ /                         \ /
+                     X                           X
+                    / \                         / \
+              [0] --   -- [0]             [0] --   -- [0]
+  Frontend 1  [1] ------- [1]  Backend 1  [1] ------- [1]  TCON 1
+
+  For a two-pipeline system such as the one depicted above, the lines
+  represent the connections between the components, while the numbers
+  within the square brackets correspond to the ID of the local endpoint.
+
+  The same rule also applies to DE 2.0 mixer-TCON connections:
+
+  Mixer 0  [0] ----------- [0]  TCON 0
+           [1] ----   ---- [1]
+                   \ /
+                    X
+                   / \
+           [0] ----   ---- [0]
+  Mixer 1  [1] ----------- [1]  TCON 1
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun4i-a10-display-engine
+      - allwinner,sun5i-a10s-display-engine
+      - allwinner,sun5i-a13-display-engine
+      - allwinner,sun6i-a31-display-engine
+      - allwinner,sun6i-a31s-display-engine
+      - allwinner,sun7i-a20-display-engine
+      - allwinner,sun8i-a23-display-engine
+      - allwinner,sun8i-a33-display-engine
+      - allwinner,sun8i-a83t-display-engine
+      - allwinner,sun8i-h3-display-engine
+      - allwinner,sun8i-r40-display-engine
+      - allwinner,sun8i-v3s-display-engine
+      - allwinner,sun9i-a80-display-engine
+      - allwinner,sun50i-a64-display-engine
+      - allwinner,sun50i-h6-display-engine
+
+  allwinner,pipelines:
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/phandle-array
+      - minItems: 1
+        maxItems: 2
+    description: |
+      Available display engine frontends (DE 1.0) or mixers
+      (DE 2.0/3.0).
+
+required:
+  - compatible
+  - allwinner,pipelines
+
+additionalProperties: false
+
+if:
+  properties:
+    compatible:
+      contains:
+        enum:
+          - allwinner,sun4i-a10-display-engine
+          - allwinner,sun6i-a31-display-engine
+          - allwinner,sun6i-a31s-display-engine
+          - allwinner,sun7i-a20-display-engine
+          - allwinner,sun8i-a83t-display-engine
+          - allwinner,sun8i-r40-display-engine
+          - allwinner,sun9i-a80-display-engine
+          - allwinner,sun50i-a64-display-engine
+
+then:
+  properties:
+    allwinner,pipelines:
+      minItems: 2
+
+else:
+  properties:
+    allwinner,pipelines:
+      maxItems: 1
+
+examples:
+  - |
+      de: display-engine {
+          compatible = "allwinner,sun4i-a10-display-engine";
+          allwinner,pipelines = <&fe0>, <&fe1>;
+      };
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-frontend.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-display-frontend.yaml
new file mode 100644 (file)
index 0000000..3eb1c2b
--- /dev/null
@@ -0,0 +1,138 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun4i-a10-display-frontend.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A10 Display Engine Frontend Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The display engine frontend does format conversion, scaling,
+  deinterlacing and color space conversion.
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun4i-a10-display-frontend
+      - allwinner,sun5i-a13-display-frontend
+      - allwinner,sun6i-a31-display-frontend
+      - allwinner,sun7i-a20-display-frontend
+      - allwinner,sun8i-a23-display-frontend
+      - allwinner,sun8i-a33-display-frontend
+      - allwinner,sun9i-a80-display-frontend
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: The frontend interface clock
+      - description: The frontend module clock
+      - description: The frontend DRAM clock
+
+  clock-names:
+    items:
+      - const: ahb
+      - const: mod
+      - const: ram
+
+  # FIXME: This should be made required eventually, once every SoC
+  # has the MBUS declared.
+  interconnects:
+    maxItems: 1
+
+  # FIXME: This should be made required eventually, once every SoC
+  # has the MBUS declared.
+  interconnect-names:
+    const: dma-mem
+
+  resets:
+    maxItems: 1
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/sun4i-a10-ccu.h>
+    #include <dt-bindings/reset/sun4i-a10-ccu.h>
+
+    fe0: display-frontend@1e00000 {
+        compatible = "allwinner,sun4i-a10-display-frontend";
+        reg = <0x01e00000 0x20000>;
+        interrupts = <47>;
+        clocks = <&ccu CLK_AHB_DE_FE0>, <&ccu CLK_DE_FE0>,
+                 <&ccu CLK_DRAM_DE_FE0>;
+        clock-names = "ahb", "mod",
+                      "ram";
+        resets = <&ccu RST_DE_FE0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            fe0_out: port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                fe0_out_be0: endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&be0_in_fe0>;
+                };
+
+                fe0_out_be1: endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&be1_in_fe0>;
+                };
+            };
+        };
+    };
+
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-hdmi.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-hdmi.yaml
new file mode 100644 (file)
index 0000000..5d4915a
--- /dev/null
@@ -0,0 +1,183 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun4i-a10-hdmi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A10 HDMI Controller Device Tree Bindings
+
+description: |
+  The HDMI encoder supports the HDMI video and audio outputs, and
+  handles CEC. It is one end of the pipeline.
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+properties:
+  compatible:
+    oneOf:
+      - const: allwinner,sun4i-a10-hdmi
+      - const: allwinner,sun5i-a10s-hdmi
+      - const: allwinner,sun6i-a31-hdmi
+      - items:
+        - const: allwinner,sun7i-a20-hdmi
+        - const: allwinner,sun5i-a10s-hdmi
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    oneOf:
+      - items:
+        - description: The HDMI interface clock
+        - description: The HDMI module clock
+        - description: The first video PLL
+        - description: The second video PLL
+
+      - items:
+        - description: The HDMI interface clock
+        - description: The HDMI module clock
+        - description: The HDMI DDC clock
+        - description: The first video PLL
+        - description: The second video PLL
+
+  clock-names:
+    oneOf:
+      - items:
+        - const: ahb
+        - const: mod
+        - const: pll-0
+        - const: pll-1
+
+      - items:
+        - const: ahb
+        - const: mod
+        - const: ddc
+        - const: pll-0
+        - const: pll-1
+
+  resets:
+    maxItems: 1
+
+  dmas:
+    items:
+      - description: DDC Transmission DMA Channel
+      - description: DDC Reception DMA Channel
+      - description: Audio Transmission DMA Channel
+
+  dma-names:
+    items:
+      - const: ddc-tx
+      - const: ddc-rx
+      - const: audio-tx
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller. Usually an HDMI
+          connector.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - dmas
+  - dma-names
+
+if:
+  properties:
+    compatible:
+      contains:
+        const: allwinner,sun6i-a31-hdmi
+
+then:
+  properties:
+    clocks:
+      minItems: 5
+
+    clock-names:
+      minItems: 5
+
+  required:
+    - resets
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/sun4i-a10-ccu.h>
+    #include <dt-bindings/dma/sun4i-a10.h>
+    #include <dt-bindings/reset/sun4i-a10-ccu.h>
+
+    hdmi: hdmi@1c16000 {
+        compatible = "allwinner,sun4i-a10-hdmi";
+        reg = <0x01c16000 0x1000>;
+        interrupts = <58>;
+        clocks = <&ccu CLK_AHB_HDMI0>, <&ccu CLK_HDMI>,
+                 <&ccu CLK_PLL_VIDEO0_2X>,
+                 <&ccu CLK_PLL_VIDEO1_2X>;
+        clock-names = "ahb", "mod", "pll-0", "pll-1";
+        dmas = <&dma SUN4I_DMA_NORMAL 16>,
+               <&dma SUN4I_DMA_NORMAL 16>,
+               <&dma SUN4I_DMA_DEDICATED 24>;
+        dma-names = "ddc-tx", "ddc-rx", "audio-tx";
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            hdmi_in: port@0 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <0>;
+
+                hdmi_in_tcon0: endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&tcon0_out_hdmi>;
+                };
+
+                hdmi_in_tcon1: endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&tcon1_out_hdmi>;
+                };
+            };
+
+            hdmi_out: port@1 {
+                reg = <1>;
+            };
+        };
+    };
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
new file mode 100644 (file)
index 0000000..86ad617
--- /dev/null
@@ -0,0 +1,676 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun4i-a10-tcon.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A10 Timings Controller (TCON) Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The TCON acts as a timing controller for RGB, LVDS and TV
+  interfaces.
+
+properties:
+  "#clock-cells":
+    const: 0
+
+  compatible:
+    oneOf:
+      - const: allwinner,sun4i-a10-tcon
+      - const: allwinner,sun5i-a13-tcon
+      - const: allwinner,sun6i-a31-tcon
+      - const: allwinner,sun6i-a31s-tcon
+      - const: allwinner,sun7i-a20-tcon
+      - const: allwinner,sun8i-a23-tcon
+      - const: allwinner,sun8i-a33-tcon
+      - const: allwinner,sun8i-a83t-tcon-lcd
+      - const: allwinner,sun8i-a83t-tcon-tv
+      - const: allwinner,sun8i-r40-tcon-tv
+      - const: allwinner,sun8i-v3s-tcon
+      - const: allwinner,sun9i-a80-tcon-lcd
+      - const: allwinner,sun9i-a80-tcon-tv
+
+      - items:
+        - enum:
+            - allwinner,sun50i-a64-tcon-lcd
+        - const: allwinner,sun8i-a83t-tcon-lcd
+
+      - items:
+        - enum:
+          - allwinner,sun8i-h3-tcon-tv
+          - allwinner,sun50i-a64-tcon-tv
+          - allwinner,sun50i-h6-tcon-tv
+        - const: allwinner,sun8i-a83t-tcon-tv
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    minItems: 1
+    maxItems: 4
+
+  clock-names:
+    minItems: 1
+    maxItems: 4
+
+  clock-output-names:
+    allOf:
+      - $ref: /schemas/types.yaml#/definitions/string-array
+      - maxItems: 1
+    description:
+      Name of the LCD pixel clock created.
+
+  dmas:
+    maxItems: 1
+
+  resets:
+    anyOf:
+      - items:
+        - description: TCON Reset Line
+
+      - items:
+        - description: TCON Reset Line
+        - description: TCON LVDS Reset Line
+
+      - items:
+        - description: TCON Reset Line
+        - description: TCON eDP Reset Line
+
+      - items:
+        - description: TCON Reset Line
+        - description: TCON eDP Reset Line
+        - description: TCON LVDS Reset Line
+
+  reset-names:
+    oneOf:
+      - const: lcd
+
+      - items:
+        - const: lcd
+        - const: lvds
+
+      - items:
+        - const: lcd
+        - const: edp
+
+      - items:
+        - const: lcd
+        - const: edp
+        - const: lvds
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller.
+
+        patternProperties:
+          "^endpoint(@[0-9])$":
+            type: object
+
+            properties:
+              allwinner,tcon-channel:
+                $ref: /schemas/types.yaml#/definitions/uint32
+                description: |
+                  TCON can have 1 or 2 channels, usually with the
+                  first channel being used for the panel interfaces
+                  (RGB, LVDS, etc.), and the second being used for the
+                  outputs that require another controller (TV Encoder,
+                  HDMI, etc.).
+
+                  If that property is present, it specifies the TCON
+                  channel the endpoint is associated with. If it is
+                  not present, the endpoint number will be used as
+                  the channel number.
+
+            unevaluatedProperties: true
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun4i-a10-tcon
+              - allwinner,sun5i-a13-tcon
+              - allwinner,sun7i-a20-tcon
+
+    then:
+      properties:
+        clocks:
+          minItems: 3
+
+        clock-names:
+          items:
+            - const: ahb
+            - const: tcon-ch0
+            - const: tcon-ch1
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun6i-a31-tcon
+              - allwinner,sun6i-a31s-tcon
+
+    then:
+      properties:
+        clocks:
+          minItems: 4
+
+        clock-names:
+          items:
+            - const: ahb
+            - const: tcon-ch0
+            - const: tcon-ch1
+            - const: lvds-alt
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun8i-a23-tcon
+              - allwinner,sun8i-a33-tcon
+
+    then:
+      properties:
+        clocks:
+          minItems: 3
+
+        clock-names:
+          items:
+            - const: ahb
+            - const: tcon-ch0
+            - const: lvds-alt
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun8i-a83t-tcon-lcd
+              - allwinner,sun8i-v3s-tcon
+              - allwinner,sun9i-a80-tcon-lcd
+
+    then:
+      properties:
+        clocks:
+          minItems: 2
+
+        clock-names:
+          items:
+            - const: ahb
+            - const: tcon-ch0
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun8i-a83t-tcon-tv
+              - allwinner,sun8i-r40-tcon-tv
+              - allwinner,sun9i-a80-tcon-tv
+
+    then:
+      properties:
+        clocks:
+          minItems: 2
+
+        clock-names:
+          items:
+            - const: ahb
+            - const: tcon-ch1
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun5i-a13-tcon
+              - allwinner,sun6i-a31-tcon
+              - allwinner,sun6i-a31s-tcon
+              - allwinner,sun7i-a20-tcon
+              - allwinner,sun8i-a23-tcon
+              - allwinner,sun8i-a33-tcon
+              - allwinner,sun8i-v3s-tcon
+              - allwinner,sun9i-a80-tcon-lcd
+              - allwinner,sun4i-a10-tcon
+              - allwinner,sun8i-a83t-tcon-lcd
+
+    then:
+      required:
+        - "#clock-cells"
+        - clock-output-names
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun6i-a31-tcon
+              - allwinner,sun6i-a31s-tcon
+              - allwinner,sun8i-a23-tcon
+              - allwinner,sun8i-a33-tcon
+              - allwinner,sun8i-a83t-tcon-lcd
+
+    then:
+      properties:
+        resets:
+          minItems: 2
+
+        reset-names:
+          items:
+            - const: lcd
+            - const: lvds
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun9i-a80-tcon-lcd
+
+    then:
+      properties:
+        resets:
+          minItems: 3
+
+        reset-names:
+          items:
+            - const: lcd
+            - const: edp
+            - const: lvds
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun9i-a80-tcon-tv
+
+    then:
+      properties:
+        resets:
+          minItems: 2
+
+        reset-names:
+          items:
+            - const: lcd
+            - const: edp
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - allwinner,sun4i-a10-tcon
+              - allwinner,sun5i-a13-tcon
+              - allwinner,sun6i-a31-tcon
+              - allwinner,sun6i-a31s-tcon
+              - allwinner,sun7i-a20-tcon
+              - allwinner,sun8i-a23-tcon
+              - allwinner,sun8i-a33-tcon
+
+    then:
+      required:
+        - dmas
+
+examples:
+  - |
+    #include <dt-bindings/dma/sun4i-a10.h>
+
+    /*
+     * This comes from the clock/sun4i-a10-ccu.h and
+     * reset/sun4i-a10-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_AHB_LCD0       56
+    #define CLK_TCON0_CH0      149
+    #define CLK_TCON0_CH1      155
+    #define RST_TCON0          11
+
+    lcd-controller@1c0c000 {
+        compatible = "allwinner,sun4i-a10-tcon";
+        reg = <0x01c0c000 0x1000>;
+        interrupts = <44>;
+        resets = <&ccu RST_TCON0>;
+        reset-names = "lcd";
+        clocks = <&ccu CLK_AHB_LCD0>,
+                 <&ccu CLK_TCON0_CH0>,
+                 <&ccu CLK_TCON0_CH1>;
+        clock-names = "ahb",
+                      "tcon-ch0",
+                      "tcon-ch1";
+        clock-output-names = "tcon0-pixel-clock";
+        #clock-cells = <0>;
+        dmas = <&dma SUN4I_DMA_DEDICATED 14>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <0>;
+
+                endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&be0_out_tcon0>;
+                };
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&be1_out_tcon0>;
+                };
+            };
+
+            port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&hdmi_in_tcon0>;
+                    allwinner,tcon-channel = <1>;
+                };
+            };
+        };
+    };
+
+    #undef CLK_AHB_LCD0
+    #undef CLK_TCON0_CH0
+    #undef CLK_TCON0_CH1
+    #undef RST_TCON0
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun6i-a31-ccu.h and
+     * reset/sun6i-a31-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_PLL_MIPI       15
+    #define CLK_AHB1_LCD0      47
+    #define CLK_LCD0_CH0       127
+    #define CLK_LCD0_CH1       129
+    #define RST_AHB1_LCD0      27
+    #define RST_AHB1_LVDS      41
+
+    lcd-controller@1c0c000 {
+        compatible = "allwinner,sun6i-a31-tcon";
+        reg = <0x01c0c000 0x1000>;
+        interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+        dmas = <&dma 11>;
+        resets = <&ccu RST_AHB1_LCD0>, <&ccu RST_AHB1_LVDS>;
+        reset-names = "lcd", "lvds";
+        clocks = <&ccu CLK_AHB1_LCD0>,
+                 <&ccu CLK_LCD0_CH0>,
+                 <&ccu CLK_LCD0_CH1>,
+                 <&ccu CLK_PLL_MIPI>;
+        clock-names = "ahb",
+                      "tcon-ch0",
+                      "tcon-ch1",
+                      "lvds-alt";
+        clock-output-names = "tcon0-pixel-clock";
+        #clock-cells = <0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <0>;
+
+                endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&drc0_out_tcon0>;
+                };
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&drc1_out_tcon0>;
+                };
+            };
+
+            port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&hdmi_in_tcon0>;
+                    allwinner,tcon-channel = <1>;
+                };
+            };
+        };
+    };
+
+    #undef CLK_PLL_MIPI
+    #undef CLK_AHB1_LCD0
+    #undef CLK_LCD0_CH0
+    #undef CLK_LCD0_CH1
+    #undef RST_AHB1_LCD0
+    #undef RST_AHB1_LVDS
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun9i-a80-ccu.h and
+     * reset/sun9i-a80-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_BUS_LCD0       102
+    #define CLK_LCD0           58
+    #define RST_BUS_LCD0       22
+    #define RST_BUS_EDP        24
+    #define RST_BUS_LVDS       25
+
+    lcd-controller@3c00000 {
+        compatible = "allwinner,sun9i-a80-tcon-lcd";
+        reg = <0x03c00000 0x10000>;
+        interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_BUS_LCD0>, <&ccu CLK_LCD0>;
+        clock-names = "ahb", "tcon-ch0";
+        resets = <&ccu RST_BUS_LCD0>, <&ccu RST_BUS_EDP>, <&ccu RST_BUS_LVDS>;
+        reset-names = "lcd", "edp", "lvds";
+        clock-output-names = "tcon0-pixel-clock";
+        #clock-cells = <0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                reg = <0>;
+
+                endpoint {
+                    remote-endpoint = <&drc0_out_tcon0>;
+                };
+            };
+
+            port@1 {
+                reg = <1>;
+            };
+        };
+    };
+
+    #undef CLK_BUS_LCD0
+    #undef CLK_LCD0
+    #undef RST_BUS_LCD0
+    #undef RST_BUS_EDP
+    #undef RST_BUS_LVDS
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun8i-a83t-ccu.h and
+     * reset/sun8i-a83t-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_BUS_TCON0      36
+    #define CLK_TCON0          85
+    #define RST_BUS_TCON0      22
+    #define RST_BUS_LVDS       31
+
+    lcd-controller@1c0c000 {
+        compatible = "allwinner,sun8i-a83t-tcon-lcd";
+        reg = <0x01c0c000 0x1000>;
+        interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_BUS_TCON0>, <&ccu CLK_TCON0>;
+        clock-names = "ahb", "tcon-ch0";
+        clock-output-names = "tcon-pixel-clock";
+        #clock-cells = <0>;
+        resets = <&ccu RST_BUS_TCON0>, <&ccu RST_BUS_LVDS>;
+        reset-names = "lcd", "lvds";
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <0>;
+
+                endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&mixer0_out_tcon0>;
+                };
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&mixer1_out_tcon0>;
+                };
+            };
+
+            port@1 {
+                reg = <1>;
+            };
+        };
+    };
+
+    #undef CLK_BUS_TCON0
+    #undef CLK_TCON0
+    #undef RST_BUS_TCON0
+    #undef RST_BUS_LVDS
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun8i-r40-ccu.h and
+     * reset/sun8i-r40-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+
+    #define CLK_BUS_TCON_TV0   73
+    #define RST_BUS_TCON_TV0   49
+
+    tcon_tv0: lcd-controller@1c73000 {
+        compatible = "allwinner,sun8i-r40-tcon-tv";
+        reg = <0x01c73000 0x1000>;
+        interrupts = <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_BUS_TCON_TV0>, <&tcon_top 0>;
+        clock-names = "ahb", "tcon-ch1";
+        resets = <&ccu RST_BUS_TCON_TV0>;
+        reset-names = "lcd";
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <0>;
+
+                endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&tcon_top_mixer0_out_tcon_tv0>;
+                };
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&tcon_top_mixer1_out_tcon_tv0>;
+                };
+            };
+
+            tcon_tv0_out: port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&tcon_top_hdmi_in_tcon_tv0>;
+                };
+            };
+        };
+    };
+
+    #undef CLK_BUS_TCON_TV0
+    #undef RST_BUS_TCON_TV0
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tv-encoder.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tv-encoder.yaml
new file mode 100644 (file)
index 0000000..5d5d396
--- /dev/null
@@ -0,0 +1,62 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun4i-a10-tv-encoder.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A10 TV Encoder Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+properties:
+  compatible:
+    const: allwinner,sun4i-a10-tv-encoder
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  resets:
+    maxItems: 1
+
+  port:
+    type: object
+    description:
+      A port node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt. The
+      first port should be the input endpoint, usually coming from the
+      associated TCON.
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - resets
+  - port
+
+additionalProperties: false
+
+examples:
+  - |
+    tve0: tv-encoder@1c0a000 {
+        compatible = "allwinner,sun4i-a10-tv-encoder";
+        reg = <0x01c0a000 0x1000>;
+        clocks = <&ahb_gates 34>;
+        resets = <&tcon_ch0_clk 0>;
+
+        port {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            tve0_in_tcon0: endpoint@0 {
+                reg = <0>;
+                remote-endpoint = <&tcon0_out_tve0>;
+            };
+        };
+    };
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun6i-a31-drc.yaml b/Documentation/devicetree/bindings/display/allwinner,sun6i-a31-drc.yaml
new file mode 100644 (file)
index 0000000..0c1ce55
--- /dev/null
@@ -0,0 +1,138 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun6i-a31-drc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A31 Dynamic Range Controller Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The DRC (Dynamic Range Controller) dynamically adjusts pixel
+  brightness and contrast based on histogram measurements, in order
+  to implement LCD content-adaptive backlight control.
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun6i-a31-drc
+      - allwinner,sun6i-a31s-drc
+      - allwinner,sun8i-a23-drc
+      - allwinner,sun8i-a33-drc
+      - allwinner,sun9i-a80-drc
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: The DRC interface clock
+      - description: The DRC module clock
+      - description: The DRC DRAM clock
+
+  clock-names:
+    items:
+      - const: ahb
+      - const: mod
+      - const: ram
+
+  resets:
+    maxItems: 1
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    #include <dt-bindings/clock/sun6i-a31-ccu.h>
+    #include <dt-bindings/reset/sun6i-a31-ccu.h>
+
+    drc0: drc@1e70000 {
+        compatible = "allwinner,sun6i-a31-drc";
+        reg = <0x01e70000 0x10000>;
+        interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_AHB1_DRC0>, <&ccu CLK_IEP_DRC0>,
+                 <&ccu CLK_DRAM_DRC0>;
+        clock-names = "ahb", "mod",
+                      "ram";
+        resets = <&ccu RST_AHB1_DRC0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            drc0_in: port@0 {
+                reg = <0>;
+
+                drc0_in_be0: endpoint {
+                    remote-endpoint = <&be0_out_drc0>;
+                };
+            };
+
+            drc0_out: port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                drc0_out_tcon0: endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&tcon0_in_drc0>;
+                };
+
+                drc0_out_tcon1: endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&tcon1_in_drc0>;
+                };
+            };
+        };
+    };
+
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-de2-mixer.yaml b/Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-de2-mixer.yaml
new file mode 100644 (file)
index 0000000..1dee641
--- /dev/null
@@ -0,0 +1,118 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun8i-a83t-de2-mixer.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner Display Engine 2.0 Mixer Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun8i-a83t-de2-mixer-0
+      - allwinner,sun8i-a83t-de2-mixer-1
+      - allwinner,sun8i-h3-de2-mixer-0
+      - allwinner,sun8i-r40-de2-mixer-0
+      - allwinner,sun8i-r40-de2-mixer-1
+      - allwinner,sun8i-v3s-de2-mixer
+      - allwinner,sun50i-a64-de2-mixer-0
+      - allwinner,sun50i-a64-de2-mixer-1
+      - allwinner,sun50i-h6-de3-mixer-0
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: The mixer interface clock
+      - description: The mixer module clock
+
+  clock-names:
+    items:
+      - const: bus
+      - const: mod
+
+  resets:
+    maxItems: 1
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/sun8i-de2.h>
+    #include <dt-bindings/reset/sun8i-de2.h>
+
+    mixer0: mixer@1100000 {
+        compatible = "allwinner,sun8i-a83t-de2-mixer-0";
+        reg = <0x01100000 0x100000>;
+        clocks = <&display_clocks CLK_BUS_MIXER0>,
+                 <&display_clocks CLK_MIXER0>;
+        clock-names = "bus",
+                      "mod";
+        resets = <&display_clocks RST_MIXER0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            mixer0_out: port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                mixer0_out_tcon0: endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&tcon0_in_mixer0>;
+                };
+
+                mixer0_out_tcon1: endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&tcon1_in_mixer0>;
+                };
+            };
+        };
+    };
+
+...
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-dw-hdmi.yaml b/Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-dw-hdmi.yaml
new file mode 100644 (file)
index 0000000..4d67956
--- /dev/null
@@ -0,0 +1,273 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun8i-a83t-dw-hdmi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A83t DWC HDMI TX Encoder Device Tree Bindings
+
+description: |
+  The HDMI transmitter is a Synopsys DesignWare HDMI 1.4 TX controller
+  IP with Allwinner's own PHY IP. It supports audio and video outputs
+  and CEC.
+
+  These DT bindings follow the Synopsys DWC HDMI TX bindings defined
+  in Documentation/devicetree/bindings/display/bridge/dw_hdmi.txt with
+  the following device-specific properties.
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+properties:
+  "#phy-cells":
+    const: 0
+
+  compatible:
+    oneOf:
+      - const: allwinner,sun8i-a83t-dw-hdmi
+      - const: allwinner,sun50i-h6-dw-hdmi
+
+      - items:
+        - enum:
+          - allwinner,sun8i-h3-dw-hdmi
+          - allwinner,sun8i-r40-dw-hdmi
+          - allwinner,sun50i-a64-dw-hdmi
+        - const: allwinner,sun8i-a83t-dw-hdmi
+
+  reg:
+    maxItems: 1
+
+  reg-io-width:
+    const: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    minItems: 3
+    maxItems: 6
+    items:
+      - description: Bus Clock
+      - description: Register Clock
+      - description: TMDS Clock
+      - description: HDMI CEC Clock
+      - description: HDCP Clock
+      - description: HDCP Bus Clock
+
+  clock-names:
+    minItems: 3
+    maxItems: 6
+    items:
+      - const: iahb
+      - const: isfr
+      - const: tmds
+      - const: cec
+      - const: hdcp
+      - const: hdcp-bus
+
+  resets:
+    minItems: 1
+    maxItems: 2
+    items:
+      - description: HDMI Controller Reset
+      - description: HDCP Reset
+
+  reset-names:
+    minItems: 1
+    maxItems: 2
+    items:
+      - const: ctrl
+      - const: hdcp
+
+  phys:
+    maxItems: 1
+    description:
+      Phandle to the DWC HDMI PHY.
+
+  phy-names:
+    const: phy
+
+  hvcc-supply:
+    description:
+      The VCC power supply of the controller
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller. Usually the associated
+          TCON.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller. Usually an HDMI
+          connector.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - reg-io-width
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+  - phys
+  - phy-names
+  - ports
+
+if:
+  properties:
+    compatible:
+      contains:
+        enum:
+          - allwinner,sun50i-h6-dw-hdmi
+
+then:
+  properties:
+    clocks:
+      minItems: 6
+
+    clock-names:
+      minItems: 6
+
+    resets:
+      minItems: 2
+
+    reset-names:
+      minItems: 2
+
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun8i-a83t-ccu.h and
+     * reset/sun8i-a83t-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+    #define CLK_BUS_HDMI       39
+    #define CLK_HDMI           93
+    #define CLK_HDMI_SLOW      94
+    #define RST_BUS_HDMI1      26
+
+    hdmi@1ee0000 {
+        compatible = "allwinner,sun8i-a83t-dw-hdmi";
+        reg = <0x01ee0000 0x10000>;
+        reg-io-width = <1>;
+        interrupts = <GIC_SPI 88 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_BUS_HDMI>, <&ccu CLK_HDMI_SLOW>,
+                 <&ccu CLK_HDMI>;
+        clock-names = "iahb", "isfr", "tmds";
+        resets = <&ccu RST_BUS_HDMI1>;
+        reset-names = "ctrl";
+        phys = <&hdmi_phy>;
+        phy-names = "phy";
+        pinctrl-names = "default";
+        pinctrl-0 = <&hdmi_pins>;
+        status = "disabled";
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                reg = <0>;
+
+                endpoint {
+                    remote-endpoint = <&tcon1_out_hdmi>;
+                };
+            };
+
+            port@1 {
+                reg = <1>;
+            };
+        };
+    };
+
+    /* Cleanup after ourselves */
+    #undef CLK_BUS_HDMI
+    #undef CLK_HDMI
+    #undef CLK_HDMI_SLOW
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    /*
+     * This comes from the clock/sun50i-h6-ccu.h and
+     * reset/sun50i-h6-ccu.h headers, but we can't include them since
+     * it would trigger a bunch of warnings for redefinitions of
+     * symbols with the other example.
+     */
+    #define CLK_BUS_HDMI       126
+    #define CLK_BUS_HDCP       137
+    #define CLK_HDMI           123
+    #define CLK_HDMI_SLOW      124
+    #define CLK_HDMI_CEC       125
+    #define CLK_HDCP           136
+    #define RST_BUS_HDMI_SUB   57
+    #define RST_BUS_HDCP       62
+
+    hdmi@6000000 {
+        compatible = "allwinner,sun50i-h6-dw-hdmi";
+        reg = <0x06000000 0x10000>;
+        reg-io-width = <1>;
+        interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&ccu CLK_BUS_HDMI>, <&ccu CLK_HDMI_SLOW>,
+                 <&ccu CLK_HDMI>, <&ccu CLK_HDMI_CEC>,
+                 <&ccu CLK_HDCP>, <&ccu CLK_BUS_HDCP>;
+        clock-names = "iahb", "isfr", "tmds", "cec", "hdcp",
+                      "hdcp-bus";
+        resets = <&ccu RST_BUS_HDMI_SUB>, <&ccu RST_BUS_HDCP>;
+        reset-names = "ctrl", "hdcp";
+        phys = <&hdmi_phy>;
+        phy-names = "phy";
+        pinctrl-names = "default";
+        pinctrl-0 = <&hdmi_pins>;
+        status = "disabled";
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            port@0 {
+                reg = <0>;
+
+                endpoint {
+                    remote-endpoint = <&tcon_top_hdmi_out_hdmi>;
+                };
+            };
+
+            port@1 {
+                reg = <1>;
+            };
+        };
+    };
+
+...
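The if/then block in the schema above tightens the requirements for the H6: all six clocks and both resets become mandatory, while the A83t-compatible variants need only the three base clocks and the controller reset. As a hypothetical illustration (not part of this patch, and not kernel code), that rule can be sketched in Python:

```python
# Illustrative only: the minimum clock-names/reset-names lists the
# schema's if/then block enforces per DW HDMI compatible string.

BASE_CLOCKS = ["iahb", "isfr", "tmds"]
H6_EXTRA_CLOCKS = ["cec", "hdcp", "hdcp-bus"]

def minimum_names(compatibles):
    """Return the minimum (clock-names, reset-names) for a node."""
    if "allwinner,sun50i-h6-dw-hdmi" in compatibles:
        # H6 additionally needs CEC plus the HDCP clocks and reset.
        return BASE_CLOCKS + H6_EXTRA_CLOCKS, ["ctrl", "hdcp"]
    return BASE_CLOCKS, ["ctrl"]
```

This mirrors the two example nodes: the A83t node lists three clock-names and one reset, the H6 node all six clock-names and both resets.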
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-hdmi-phy.yaml b/Documentation/devicetree/bindings/display/allwinner,sun8i-a83t-hdmi-phy.yaml
new file mode 100644 (file)
index 0000000..501cec1
--- /dev/null
@@ -0,0 +1,117 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun8i-a83t-hdmi-phy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A83t HDMI PHY Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+properties:
+  "#phy-cells":
+    const: 0
+
+  compatible:
+    enum:
+      - allwinner,sun8i-a83t-hdmi-phy
+      - allwinner,sun8i-h3-hdmi-phy
+      - allwinner,sun8i-r40-hdmi-phy
+      - allwinner,sun50i-a64-hdmi-phy
+      - allwinner,sun50i-h6-hdmi-phy
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    minItems: 2
+    maxItems: 4
+    items:
+      - description: Bus Clock
+      - description: Module Clock
+      - description: Parent of the PHY clock
+      - description: Second possible parent of the PHY clock
+
+  clock-names:
+    minItems: 2
+    maxItems: 4
+    items:
+      - const: bus
+      - const: mod
+      - const: pll-0
+      - const: pll-1
+
+  resets:
+    maxItems: 1
+
+  reset-names:
+    const: phy
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+if:
+  properties:
+    compatible:
+      contains:
+        enum:
+          - allwinner,sun8i-r40-hdmi-phy
+
+then:
+  properties:
+    clocks:
+      minItems: 4
+
+    clock-names:
+      minItems: 4
+
+else:
+  if:
+    properties:
+      compatible:
+        contains:
+          enum:
+            - allwinner,sun8i-h3-hdmi-phy
+            - allwinner,sun50i-a64-hdmi-phy
+
+  then:
+    properties:
+      clocks:
+        minItems: 3
+
+      clock-names:
+        minItems: 3
+
+  else:
+    properties:
+      clocks:
+        maxItems: 2
+
+      clock-names:
+        maxItems: 2
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/sun8i-a83t-ccu.h>
+    #include <dt-bindings/reset/sun8i-a83t-ccu.h>
+
+    hdmi_phy: hdmi-phy@1ef0000 {
+        compatible = "allwinner,sun8i-a83t-hdmi-phy";
+        reg = <0x01ef0000 0x10000>;
+        clocks = <&ccu CLK_BUS_HDMI>, <&ccu CLK_HDMI_SLOW>;
+        clock-names = "bus", "mod";
+        resets = <&ccu RST_BUS_HDMI0>;
+        reset-names = "phy";
+        #phy-cells = <0>;
+    };
+
+...
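The nested if/then/else above sorts the PHY compatibles into three clock-count tiers. A minimal Python sketch of that dispatch (illustrative only, not kernel code) makes the tiers explicit:

```python
# Illustrative only: minimum number of clocks the schema enforces
# for each Allwinner HDMI PHY compatible string.

def min_phy_clocks(compatible):
    if compatible == "allwinner,sun8i-r40-hdmi-phy":
        return 4          # bus, mod, pll-0, pll-1
    if compatible in ("allwinner,sun8i-h3-hdmi-phy",
                      "allwinner,sun50i-a64-hdmi-phy"):
        return 3          # bus, mod, pll-0
    return 2              # bus, mod (a83t and h6 variants)
```

The A83t example node accordingly lists just the "bus" and "mod" clocks.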
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun8i-r40-tcon-top.yaml b/Documentation/devicetree/bindings/display/allwinner,sun8i-r40-tcon-top.yaml
new file mode 100644 (file)
index 0000000..b98ca60
--- /dev/null
@@ -0,0 +1,382 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun8i-r40-tcon-top.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner R40 TCON TOP Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The TCON TOP's main purpose is to configure the whole display pipeline.
+  It determines the relationships between mixers and TCONs, selects the
+  source TCON for HDMI, muxes the LCD and TV encoder GPIO outputs, selects
+  the TV encoder clock source, and contains additional TV TCON and DSI gates.
+
+  It allows the display pipeline to be configured in very different ways:
+
+                                  / LCD0/LVDS0
+                   / [0] TCON-LCD0
+                   |              \ MIPI DSI
+   mixer0          |
+          \        / [1] TCON-LCD1 - LCD1/LVDS1
+           TCON-TOP
+          /        \ [2] TCON-TV0 [0] - TVE0/RGB
+   mixer1          |                  \
+                   |                   TCON-TOP - HDMI
+                   |                  /
+                   \ [3] TCON-TV1 [1] - TVE1/RGB
+
+  Note that both TCON TOP boxes in the diagram refer to the same
+  physical unit. Both mixers can be connected to any TCON, but not all
+  TCON TOP variants support all features.
+
+properties:
+  "#clock-cells":
+    const: 1
+
+  compatible:
+    enum:
+      - allwinner,sun8i-r40-tcon-top
+      - allwinner,sun50i-h6-tcon-top
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    minItems: 2
+    maxItems: 6
+    items:
+      - description: The TCON TOP interface clock
+      - description: The TCON TOP TV0 clock
+      - description: The TCON TOP TVE0 clock
+      - description: The TCON TOP TV1 clock
+      - description: The TCON TOP TVE1 clock
+      - description: The TCON TOP MIPI DSI clock
+
+  clock-names:
+    minItems: 2
+    maxItems: 6
+    items:
+      - const: bus
+      - const: tcon-tv0
+      - const: tve0
+      - const: tcon-tv1
+      - const: tve1
+      - const: dsi
+
+  clock-output-names:
+    minItems: 1
+    maxItems: 3
+    description: >
+      The first item is the name of the clock created for the TCON TV0
+      channel, the second for the TCON TV1 channel, and the third for
+      the DSI channel.
+
+  resets:
+    maxItems: 1
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+      Each port should have only one endpoint connected to a
+      remote endpoint.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoint for Mixer 0 mux.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoint for Mixer 0 mux.
+
+        properties:
+          "#address-cells":
+            const: 1
+
+          "#size-cells":
+            const: 0
+
+          reg: true
+
+        patternProperties:
+          "^endpoint@[0-9]$":
+            type: object
+
+            properties:
+              reg:
+                description: |
+                  ID of the target TCON
+
+            required:
+              - reg
+
+        required:
+          - "#address-cells"
+          - "#size-cells"
+
+        additionalProperties: false
+
+      port@2:
+        type: object
+        description: |
+          Input endpoint for Mixer 1 mux.
+
+      port@3:
+        type: object
+        description: |
+          Output endpoint for Mixer 1 mux.
+
+        properties:
+          "#address-cells":
+            const: 1
+
+          "#size-cells":
+            const: 0
+
+          reg: true
+
+        patternProperties:
+          "^endpoint@[0-9]$":
+            type: object
+
+            properties:
+              reg:
+                description: |
+                  ID of the target TCON
+
+            required:
+              - reg
+
+        required:
+          - "#address-cells"
+          - "#size-cells"
+
+        additionalProperties: false
+
+      port@4:
+        type: object
+        description: |
+          Input endpoint for HDMI mux.
+
+        properties:
+          "#address-cells":
+            const: 1
+
+          "#size-cells":
+            const: 0
+
+          reg: true
+
+        patternProperties:
+          "^endpoint@[0-9]$":
+            type: object
+
+            properties:
+              reg:
+                description: |
+                  ID of the target TCON
+
+            required:
+              - reg
+
+        required:
+          - "#address-cells"
+          - "#size-cells"
+
+        additionalProperties: false
+
+      port@5:
+        type: object
+        description: |
+          Output endpoint for HDMI mux.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+      - port@4
+      - port@5
+
+    additionalProperties: false
+
+required:
+  - "#clock-cells"
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - clock-output-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+if:
+  properties:
+    compatible:
+      contains:
+        const: allwinner,sun50i-h6-tcon-top
+
+then:
+  properties:
+    clocks:
+      maxItems: 2
+
+    clock-output-names:
+      maxItems: 1
+
+else:
+  properties:
+    clocks:
+      minItems: 6
+
+    clock-output-names:
+      minItems: 3
+
+    ports:
+      required:
+        - port@2
+        - port@3
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    #include <dt-bindings/clock/sun8i-r40-ccu.h>
+    #include <dt-bindings/reset/sun8i-r40-ccu.h>
+
+      tcon_top: tcon-top@1c70000 {
+          compatible = "allwinner,sun8i-r40-tcon-top";
+          reg = <0x01c70000 0x1000>;
+          clocks = <&ccu CLK_BUS_TCON_TOP>,
+                   <&ccu CLK_TCON_TV0>,
+                   <&ccu CLK_TVE0>,
+                   <&ccu CLK_TCON_TV1>,
+                   <&ccu CLK_TVE1>,
+                   <&ccu CLK_DSI_DPHY>;
+          clock-names = "bus",
+                        "tcon-tv0",
+                        "tve0",
+                        "tcon-tv1",
+                        "tve1",
+                        "dsi";
+          clock-output-names = "tcon-top-tv0",
+                               "tcon-top-tv1",
+                               "tcon-top-dsi";
+          resets = <&ccu RST_BUS_TCON_TOP>;
+          #clock-cells = <1>;
+
+          ports {
+              #address-cells = <1>;
+              #size-cells = <0>;
+
+              tcon_top_mixer0_in: port@0 {
+                  reg = <0>;
+
+                  tcon_top_mixer0_in_mixer0: endpoint {
+                      remote-endpoint = <&mixer0_out_tcon_top>;
+                  };
+              };
+
+              tcon_top_mixer0_out: port@1 {
+                  #address-cells = <1>;
+                  #size-cells = <0>;
+                  reg = <1>;
+
+                  tcon_top_mixer0_out_tcon_lcd0: endpoint@0 {
+                      reg = <0>;
+                  };
+
+                  tcon_top_mixer0_out_tcon_lcd1: endpoint@1 {
+                      reg = <1>;
+                  };
+
+                  tcon_top_mixer0_out_tcon_tv0: endpoint@2 {
+                      reg = <2>;
+                      remote-endpoint = <&tcon_tv0_in_tcon_top_mixer0>;
+                  };
+
+                  tcon_top_mixer0_out_tcon_tv1: endpoint@3 {
+                      reg = <3>;
+                      remote-endpoint = <&tcon_tv1_in_tcon_top_mixer0>;
+                  };
+              };
+
+              tcon_top_mixer1_in: port@2 {
+                  #address-cells = <1>;
+                  #size-cells = <0>;
+                  reg = <2>;
+
+                  tcon_top_mixer1_in_mixer1: endpoint@1 {
+                      reg = <1>;
+                      remote-endpoint = <&mixer1_out_tcon_top>;
+                  };
+              };
+
+              tcon_top_mixer1_out: port@3 {
+                  #address-cells = <1>;
+                  #size-cells = <0>;
+                  reg = <3>;
+
+                  tcon_top_mixer1_out_tcon_lcd0: endpoint@0 {
+                      reg = <0>;
+                  };
+
+                  tcon_top_mixer1_out_tcon_lcd1: endpoint@1 {
+                      reg = <1>;
+                  };
+
+                  tcon_top_mixer1_out_tcon_tv0: endpoint@2 {
+                      reg = <2>;
+                      remote-endpoint = <&tcon_tv0_in_tcon_top_mixer1>;
+                  };
+
+                  tcon_top_mixer1_out_tcon_tv1: endpoint@3 {
+                      reg = <3>;
+                      remote-endpoint = <&tcon_tv1_in_tcon_top_mixer1>;
+                  };
+              };
+
+              tcon_top_hdmi_in: port@4 {
+                  #address-cells = <1>;
+                  #size-cells = <0>;
+                  reg = <4>;
+
+                  tcon_top_hdmi_in_tcon_tv0: endpoint@0 {
+                      reg = <0>;
+                      remote-endpoint = <&tcon_tv0_out_tcon_top>;
+                  };
+
+                  tcon_top_hdmi_in_tcon_tv1: endpoint@1 {
+                      reg = <1>;
+                      remote-endpoint = <&tcon_tv1_out_tcon_top>;
+                  };
+              };
+
+              tcon_top_hdmi_out: port@5 {
+                  reg = <5>;
+
+                  tcon_top_hdmi_out_hdmi: endpoint {
+                      remote-endpoint = <&hdmi_in_tcon_top>;
+                  };
+              };
+          };
+      };
+
+...
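The if/then/else in the TCON TOP schema splits the binding by variant: the H6 keeps only the TV0 path (two clocks, one output clock name), while the R40 carries both TV channels plus DSI (six clocks, three output clock names). A hypothetical Python table of that split (illustrative only, not kernel code):

```python
# Illustrative only: per-variant clock lists the schema's conditional
# block requires for a TCON TOP node.

TCON_TOP_VARIANTS = {
    "allwinner,sun50i-h6-tcon-top": {
        "clock-names": ["bus", "tcon-tv0"],
        "clock-output-names": ["tcon-top-tv0"],
    },
    "allwinner,sun8i-r40-tcon-top": {
        "clock-names": ["bus", "tcon-tv0", "tve0",
                        "tcon-tv1", "tve1", "dsi"],
        "clock-output-names": ["tcon-top-tv0", "tcon-top-tv1",
                               "tcon-top-dsi"],
    },
}

def check_node(compatible, clock_names, output_names):
    """True if a node's clock lists match its variant's requirements."""
    spec = TCON_TOP_VARIANTS[compatible]
    return (clock_names == spec["clock-names"]
            and output_names == spec["clock-output-names"])
```

The R40 example node above passes this check with its six clock-names and three clock-output-names.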
diff --git a/Documentation/devicetree/bindings/display/allwinner,sun9i-a80-deu.yaml b/Documentation/devicetree/bindings/display/allwinner,sun9i-a80-deu.yaml
new file mode 100644 (file)
index 0000000..96de41d
--- /dev/null
@@ -0,0 +1,133 @@
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/allwinner,sun9i-a80-deu.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Allwinner A80 Detail Enhancement Unit Device Tree Bindings
+
+maintainers:
+  - Chen-Yu Tsai <wens@csie.org>
+  - Maxime Ripard <mripard@kernel.org>
+
+description: |
+  The DEU (Detail Enhancement Unit), found in the Allwinner A80 SoC,
+  can sharpen the display content in both luma and chroma channels.
+
+properties:
+  compatible:
+    const: allwinner,sun9i-a80-deu
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: The DEU interface clock
+      - description: The DEU module clock
+      - description: The DEU DRAM clock
+
+  clock-names:
+    items:
+      - const: ahb
+      - const: mod
+      - const: ram
+
+  resets:
+    maxItems: 1
+
+  ports:
+    type: object
+    description: |
+      A ports node with endpoint definitions as defined in
+      Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+    properties:
+      "#address-cells":
+        const: 1
+
+      "#size-cells":
+        const: 0
+
+      port@0:
+        type: object
+        description: |
+          Input endpoints of the controller.
+
+      port@1:
+        type: object
+        description: |
+          Output endpoints of the controller.
+
+    required:
+      - "#address-cells"
+      - "#size-cells"
+      - port@0
+      - port@1
+
+    additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - ports
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    #include <dt-bindings/clock/sun9i-a80-de.h>
+    #include <dt-bindings/reset/sun9i-a80-de.h>
+
+    deu0: deu@3300000 {
+        compatible = "allwinner,sun9i-a80-deu";
+        reg = <0x03300000 0x40000>;
+        interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&de_clocks CLK_BUS_DEU0>,
+                 <&de_clocks CLK_IEP_DEU0>,
+                 <&de_clocks CLK_DRAM_DEU0>;
+        clock-names = "ahb",
+                      "mod",
+                      "ram";
+        resets = <&de_clocks RST_DEU0>;
+
+        ports {
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            deu0_in: port@0 {
+                reg = <0>;
+
+                deu0_in_fe0: endpoint {
+                    remote-endpoint = <&fe0_out_deu0>;
+                };
+            };
+
+            deu0_out: port@1 {
+                #address-cells = <1>;
+                #size-cells = <0>;
+                reg = <1>;
+
+                deu0_out_be0: endpoint@0 {
+                    reg = <0>;
+                    remote-endpoint = <&be0_in_deu0>;
+                };
+
+                deu0_out_be1: endpoint@1 {
+                    reg = <1>;
+                    remote-endpoint = <&be1_in_deu0>;
+                };
+            };
+        };
+    };
+
+...
diff --git a/Documentation/devicetree/bindings/display/panel/ampire,am-480272h3tmqw-t01h.yaml b/Documentation/devicetree/bindings/display/panel/ampire,am-480272h3tmqw-t01h.yaml
deleted file mode 100644 (file)
index c6e33e7..0000000
+++ /dev/null
@@ -1,42 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-%YAML 1.2
----
-$id: http://devicetree.org/schemas/display/panel/ampire,am-480272h3tmqw-t01h.yaml#
-$schema: http://devicetree.org/meta-schemas/core.yaml#
-
-title: Ampire AM-480272H3TMQW-T01H 4.3" WQVGA TFT LCD panel
-
-maintainers:
-  - Yannick Fertre <yannick.fertre@st.com>
-  - Thierry Reding <treding@nvidia.com>
-
-allOf:
-  - $ref: panel-common.yaml#
-
-properties:
-  compatible:
-    const: ampire,am-480272h3tmqw-t01h
-
-  power-supply: true
-  enable-gpios: true
-  backlight: true
-  port: true
-
-required:
-  - compatible
-
-additionalProperties: false
-
-examples:
-  - |
-    panel_rgb: panel {
-      compatible = "ampire,am-480272h3tmqw-t01h";
-      enable-gpios = <&gpioa 8 1>;
-      port {
-        panel_in_rgb: endpoint {
-          remote-endpoint = <&controller_out_rgb>;
-        };
-      };
-    };
-
-...
diff --git a/Documentation/devicetree/bindings/display/panel/ampire,am800480r3tmqwa1h.txt b/Documentation/devicetree/bindings/display/panel/ampire,am800480r3tmqwa1h.txt
deleted file mode 100644
index 83e2cae..0000000
+++ /dev/null
@@ -1,7 +0,0 @@
-Ampire AM-800480R3TMQW-A1H 7.0" WVGA TFT LCD panel
-
-Required properties:
-- compatible: should be "ampire,am800480r3tmqwa1h"
-
-This binding is compatible with the simple-panel binding, which is specified
-in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/panel/giantplus,gpm940b0.txt b/Documentation/devicetree/bindings/display/panel/giantplus,gpm940b0.txt
deleted file mode 100644
index 3dab52f..0000000
+++ /dev/null
@@ -1,12 +0,0 @@
-GiantPlus 3.0" (320x240 pixels) 24-bit TFT LCD panel
-
-Required properties:
-- compatible: should be "giantplus,gpm940b0"
-- power-supply: as specified in the base binding
-
-Optional properties:
-- backlight: as specified in the base binding
-- enable-gpios: as specified in the base binding
-
-This binding is compatible with the simple-panel binding, which is specified
-in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/panel/panel-simple.yaml b/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
new file mode 100644
index 0000000..8fe60ee
--- /dev/null
@@ -0,0 +1,69 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/panel/panel-simple.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Simple panels with one power supply
+
+maintainers:
+  - Thierry Reding <thierry.reding@gmail.com>
+  - Sam Ravnborg <sam@ravnborg.org>
+
+description: |
+  This binding file is a collection of the simple (dumb) panels that
+  require only a single power-supply.
+  A backlight and an enable GPIO are optional.
+  The panel may use an OF graph binding for the association to the display,
+  or it may be a direct child node of the display.
+
+  If the panel is more advanced, a dedicated binding file is required.
+
+allOf:
+  - $ref: panel-common.yaml#
+
+properties:
+
+  compatible:
+    enum:
+    # compatibles must be listed in alphabetical order.
+    # The description in the comment is mandatory for each compatible.
+
+        # Ampire AM-480272H3TMQW-T01H 4.3" WQVGA TFT LCD panel
+      - ampire,am-480272h3tmqw-t01h
+        # Ampire AM-800480R3TMQW-A1H 7.0" WVGA TFT LCD panel
+      - ampire,am800480r3tmqwa1h
+        # AUO B116XAK01 eDP TFT LCD panel
+      - auo,b116xa01
+        # BOE NV140FHM-N49 14.0" FHD a-Si FT panel
+      - boe,nv140fhmn49
+        # GiantPlus GPM940B0 3.0" QVGA TFT LCD panel
+      - giantplus,gpm940b0
+        # Satoz SAT050AT40H12R2 5.0" WVGA TFT LCD panel
+      - satoz,sat050at40h12r2
+        # Sharp LS020B1DD01D 2.0" HQVGA TFT LCD panel
+      - sharp,ls020b1dd01d
+
+  backlight: true
+  enable-gpios: true
+  port: true
+  power-supply: true
+
+additionalProperties: false
+
+required:
+  - compatible
+  - power-supply
+
+examples:
+  - |
+    panel_rgb: panel-rgb {
+      compatible = "ampire,am-480272h3tmqw-t01h";
+      power-supply = <&vcc_lcd_reg>;
+
+      port {
+        panel_in_rgb: endpoint {
+          remote-endpoint = <&ltdc_out_rgb>;
+        };
+      };
+    };
diff --git a/Documentation/devicetree/bindings/display/panel/sharp,ls020b1dd01d.txt b/Documentation/devicetree/bindings/display/panel/sharp,ls020b1dd01d.txt
deleted file mode 100644
index e45edbc..0000000
+++ /dev/null
@@ -1,12 +0,0 @@
-Sharp 2.0" (240x160 pixels) 16-bit TFT LCD panel
-
-Required properties:
-- compatible: should be "sharp,ls020b1dd01d"
-- power-supply: as specified in the base binding
-
-Optional properties:
-- backlight: as specified in the base binding
-- enable-gpios: as specified in the base binding
-
-This binding is compatible with the simple-panel binding, which is specified
-in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt b/Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt
deleted file mode 100644
index 31ab72c..0000000
+++ /dev/null
@@ -1,637 +0,0 @@
-Allwinner A10 Display Pipeline
-==============================
-
-The Allwinner A10 Display pipeline is composed of several components
-that are going to be documented below:
-
-For all connections between components up to the TCONs in the display
-pipeline, when there are multiple components of the same type at the
-same depth, the local endpoint ID must be the same as the remote
-component's index. For example, if the remote endpoint is Frontend 1,
-then the local endpoint ID must be 1.
-
-    Frontend 0  [0] ------- [0]  Backend 0  [0] ------- [0]  TCON 0
-               [1] --   -- [1]             [1] --   -- [1]
-                     \ /                         \ /
-                      X                           X
-                     / \                         / \
-               [0] --   -- [0]             [0] --   -- [0]
-    Frontend 1  [1] ------- [1]  Backend 1  [1] ------- [1]  TCON 1
-
-For a two pipeline system such as the one depicted above, the lines
-represent the connections between the components, while the numbers
-within the square brackets corresponds to the ID of the local endpoint.
-
-The same rule also applies to DE 2.0 mixer-TCON connections:
-
-    Mixer 0  [0] ----------- [0]  TCON 0
-            [1] ----   ---- [1]
-                    \ /
-                     X
-                    / \
-            [0] ----   ---- [0]
-    Mixer 1  [1] ----------- [1]  TCON 1
-
-HDMI Encoder
-------------
-
-The HDMI Encoder supports the HDMI video and audio outputs, and does
-CEC. It is one end of the pipeline.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun4i-a10-hdmi
-    * allwinner,sun5i-a10s-hdmi
-    * allwinner,sun6i-a31-hdmi
-  - reg: base address and size of memory-mapped region
-  - interrupts: interrupt associated to this IP
-  - clocks: phandles to the clocks feeding the HDMI encoder
-    * ahb: the HDMI interface clock
-    * mod: the HDMI module clock
-    * ddc: the HDMI ddc clock (A31 only)
-    * pll-0: the first video PLL
-    * pll-1: the second video PLL
-  - clock-names: the clock names mentioned above
-  - resets: phandle to the reset control for the HDMI encoder (A31 only)
-  - dmas: phandles to the DMA channels used by the HDMI encoder
-    * ddc-tx: The channel for DDC transmission
-    * ddc-rx: The channel for DDC reception
-    * audio-tx: The channel used for audio transmission
-  - dma-names: the channel names mentioned above
-
-  - ports: A ports node with endpoint definitions as defined in
-    Documentation/devicetree/bindings/media/video-interfaces.txt. The
-    first port should be the input endpoint. The second should be the
-    output, usually to an HDMI connector.
-
-DWC HDMI TX Encoder
--------------------
-
-The HDMI transmitter is a Synopsys DesignWare HDMI 1.4 TX controller IP
-with Allwinner's own PHY IP. It supports audio and video outputs and CEC.
-
-These DT bindings follow the Synopsys DWC HDMI TX bindings defined in
-Documentation/devicetree/bindings/display/bridge/dw_hdmi.txt with the
-following device-specific properties.
-
-Required properties:
-
-  - compatible: value must be one of:
-    * "allwinner,sun8i-a83t-dw-hdmi"
-    * "allwinner,sun50i-a64-dw-hdmi", "allwinner,sun8i-a83t-dw-hdmi"
-    * "allwinner,sun50i-h6-dw-hdmi"
-  - reg: base address and size of memory-mapped region
-  - reg-io-width: See dw_hdmi.txt. Shall be 1.
-  - interrupts: HDMI interrupt number
-  - clocks: phandles to the clocks feeding the HDMI encoder
-    * iahb: the HDMI bus clock
-    * isfr: the HDMI register clock
-    * tmds: TMDS clock
-    * cec: HDMI CEC clock (H6 only)
-    * hdcp: HDCP clock (H6 only)
-    * hdcp-bus: HDCP bus clock (H6 only)
-  - clock-names: the clock names mentioned above
-  - resets:
-    * ctrl: HDMI controller reset
-    * hdcp: HDCP reset (H6 only)
-  - reset-names: reset names mentioned above
-  - phys: phandle to the DWC HDMI PHY
-  - phy-names: must be "phy"
-
-  - ports: A ports node with endpoint definitions as defined in
-    Documentation/devicetree/bindings/media/video-interfaces.txt. The
-    first port should be the input endpoint. The second should be the
-    output, usually to an HDMI connector.
-
-Optional properties:
-  - hvcc-supply: the VCC power supply of the controller
-
-DWC HDMI PHY
-------------
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun8i-a83t-hdmi-phy
-    * allwinner,sun8i-h3-hdmi-phy
-    * allwinner,sun8i-r40-hdmi-phy
-    * allwinner,sun50i-a64-hdmi-phy
-    * allwinner,sun50i-h6-hdmi-phy
-  - reg: base address and size of memory-mapped region
-  - clocks: phandles to the clocks feeding the HDMI PHY
-    * bus: the HDMI PHY interface clock
-    * mod: the HDMI PHY module clock
-  - clock-names: the clock names mentioned above
-  - resets: phandle to the reset controller driving the PHY
-  - reset-names: must be "phy"
-
-H3, A64 and R40 HDMI PHY require additional clocks:
-  - pll-0: parent of phy clock
-  - pll-1: second possible phy clock parent (A64/R40 only)
-
-TV Encoder
-----------
-
-The TV Encoder supports the composite and VGA output. It is one end of
-the pipeline.
-
-Required properties:
- - compatible: value should be "allwinner,sun4i-a10-tv-encoder".
- - reg: base address and size of memory-mapped region
- - clocks: the clocks driving the TV encoder
- - resets: phandle to the reset controller driving the encoder
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoint.
-
-TCON
-----
-
-The TCON acts as a timing controller for RGB, LVDS and TV interfaces.
-
-Required properties:
- - compatible: value must be either:
-   * allwinner,sun4i-a10-tcon
-   * allwinner,sun5i-a13-tcon
-   * allwinner,sun6i-a31-tcon
-   * allwinner,sun6i-a31s-tcon
-   * allwinner,sun7i-a20-tcon
-   * allwinner,sun8i-a23-tcon
-   * allwinner,sun8i-a33-tcon
-   * allwinner,sun8i-a83t-tcon-lcd
-   * allwinner,sun8i-a83t-tcon-tv
-   * allwinner,sun8i-r40-tcon-tv
-   * allwinner,sun8i-v3s-tcon
-   * allwinner,sun9i-a80-tcon-lcd
-   * allwinner,sun9i-a80-tcon-tv
-   * "allwinner,sun50i-a64-tcon-lcd", "allwinner,sun8i-a83t-tcon-lcd"
-   * "allwinner,sun50i-a64-tcon-tv", "allwinner,sun8i-a83t-tcon-tv"
-   * allwinner,sun50i-h6-tcon-tv, allwinner,sun8i-r40-tcon-tv
- - reg: base address and size of memory-mapped region
- - interrupts: interrupt associated to this IP
- - clocks: phandles to the clocks feeding the TCON.
-   - 'ahb': the interface clocks
-   - 'tcon-ch0': The clock driving the TCON channel 0, if supported
- - resets: phandles to the reset controllers driving the encoder
-   - "lcd": the reset line for the TCON
-   - "edp": the reset line for the eDP block (A80 only)
-
- - clock-names: the clock names mentioned above
- - reset-names: the reset names mentioned above
- - clock-output-names: Name of the pixel clock created, if TCON supports
-   channel 0.
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoint, the second one the output
-
-  The output may have multiple endpoints. TCON can have 1 or 2 channels,
-  usually with the first channel being used for the panels interfaces
-  (RGB, LVDS, etc.), and the second being used for the outputs that
-  require another controller (TV Encoder, HDMI, etc.). The endpoints
-  will take an extra property, allwinner,tcon-channel, to specify the
-  channel the endpoint is associated to. If that property is not
-  present, the endpoint number will be used as the channel number.
-
-For TCONs with channel 0, there is one more clock required:
-   - 'tcon-ch0': The clock driving the TCON channel 0
-For TCONs with channel 1, there is one more clock required:
-   - 'tcon-ch1': The clock driving the TCON channel 1
-
-When TCON support LVDS (all TCONs except TV TCONs on A83T, R40 and those found
-in A13, H3, H5 and V3s SoCs), you need one more reset line:
-   - 'lvds': The reset line driving the LVDS logic
-
-And on the A23, A31, A31s and A33, you need one more clock line:
-   - 'lvds-alt': An alternative clock source, separate from the TCON channel 0
-                 clock, that can be used to drive the LVDS clock
-
-TCON TOP
---------
-
-TCON TOPs main purpose is to configure whole display pipeline. It determines
-relationships between mixers and TCONs, selects source TCON for HDMI, muxes
-LCD and TV encoder GPIO output, selects TV encoder clock source and contains
-additional TV TCON and DSI gates.
-
-It allows display pipeline to be configured in very different ways:
-
-                                / LCD0/LVDS0
-                 / [0] TCON-LCD0
-                 |              \ MIPI DSI
- mixer0          |
-        \        / [1] TCON-LCD1 - LCD1/LVDS1
-         TCON-TOP
-        /        \ [2] TCON-TV0 [0] - TVE0/RGB
- mixer1          |                  \
-                 |                   TCON-TOP - HDMI
-                 |                  /
-                 \ [3] TCON-TV1 [1] - TVE1/RGB
-
-Note that both TCON TOP references same physical unit. Both mixers can be
-connected to any TCON. Not all TCON TOP variants support all features.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun8i-r40-tcon-top
-    * allwinner,sun50i-h6-tcon-top
-  - reg: base address and size of the memory-mapped region.
-  - clocks: phandle to the clocks feeding the TCON TOP
-    * bus: TCON TOP interface clock
-    * tcon-tv0: TCON TV0 clock
-    * tve0: TVE0 clock (R40 only)
-    * tcon-tv1: TCON TV1 clock (R40 only)
-    * tve1: TVE0 clock (R40 only)
-    * dsi: MIPI DSI clock (R40 only)
-  - clock-names: clock name mentioned above
-  - resets: phandle to the reset line driving the TCON TOP
-  - #clock-cells : must contain 1
-  - clock-output-names: Names of clocks created for TCON TV0 channel clock,
-    TCON TV1 channel clock (R40 only) and DSI channel clock (R40 only), in
-    that order.
-
-- ports: A ports node with endpoint definitions as defined in
-    Documentation/devicetree/bindings/media/video-interfaces.txt. 6 ports should
-    be defined:
-    * port 0 is input for mixer0 mux
-    * port 1 is output for mixer0 mux
-    * port 2 is input for mixer1 mux
-    * port 3 is output for mixer1 mux
-    * port 4 is input for HDMI mux
-    * port 5 is output for HDMI mux
-    All output endpoints for mixer muxes and input endpoints for HDMI mux should
-    have reg property with the id of the target TCON, as shown in above graph
-    (0-3 for mixer muxes and 0-1 for HDMI mux). All ports should have only one
-    endpoint connected to remote endpoint.
-
-DRC
----
-
-The DRC (Dynamic Range Controller), found in the latest Allwinner SoCs
-(A31, A23, A33, A80), allows to dynamically adjust pixel
-brightness/contrast based on histogram measurements for LCD content
-adaptive backlight control.
-
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun6i-a31-drc
-    * allwinner,sun6i-a31s-drc
-    * allwinner,sun8i-a23-drc
-    * allwinner,sun8i-a33-drc
-    * allwinner,sun9i-a80-drc
-  - reg: base address and size of the memory-mapped region.
-  - interrupts: interrupt associated to this IP
-  - clocks: phandles to the clocks feeding the DRC
-    * ahb: the DRC interface clock
-    * mod: the DRC module clock
-    * ram: the DRC DRAM clock
-  - clock-names: the clock names mentioned above
-  - resets: phandles to the reset line driving the DRC
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoints, the second one the outputs
-
-Display Engine Backend
-----------------------
-
-The display engine backend exposes layers and sprites to the
-system.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun4i-a10-display-backend
-    * allwinner,sun5i-a13-display-backend
-    * allwinner,sun6i-a31-display-backend
-    * allwinner,sun7i-a20-display-backend
-    * allwinner,sun8i-a23-display-backend
-    * allwinner,sun8i-a33-display-backend
-    * allwinner,sun9i-a80-display-backend
-  - reg: base address and size of the memory-mapped region.
-  - interrupts: interrupt associated to this IP
-  - clocks: phandles to the clocks feeding the frontend and backend
-    * ahb: the backend interface clock
-    * mod: the backend module clock
-    * ram: the backend DRAM clock
-  - clock-names: the clock names mentioned above
-  - resets: phandles to the reset controllers driving the backend
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoints, the second one the output
-
-On the A33, some additional properties are required:
-  - reg needs to have an additional region corresponding to the SAT
-  - reg-names need to be set, with "be" and "sat"
-  - clocks and clock-names need to have a phandle to the SAT bus
-    clocks, whose name will be "sat"
-  - resets and reset-names need to have a phandle to the SAT bus
-    resets, whose name will be "sat"
-
-DEU
----
-
-The DEU (Detail Enhancement Unit), found in the Allwinner A80 SoC,
-can sharpen the display content in both luma and chroma channels.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun9i-a80-deu
-  - reg: base address and size of the memory-mapped region.
-  - interrupts: interrupt associated to this IP
-  - clocks: phandles to the clocks feeding the DEU
-    * ahb: the DEU interface clock
-    * mod: the DEU module clock
-    * ram: the DEU DRAM clock
-  - clock-names: the clock names mentioned above
-  - resets: phandles to the reset line driving the DEU
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoints, the second one the outputs
-
-Display Engine Frontend
------------------------
-
-The display engine frontend does formats conversion, scaling,
-deinterlacing and color space conversion.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun4i-a10-display-frontend
-    * allwinner,sun5i-a13-display-frontend
-    * allwinner,sun6i-a31-display-frontend
-    * allwinner,sun7i-a20-display-frontend
-    * allwinner,sun8i-a23-display-frontend
-    * allwinner,sun8i-a33-display-frontend
-    * allwinner,sun9i-a80-display-frontend
-  - reg: base address and size of the memory-mapped region.
-  - interrupts: interrupt associated to this IP
-  - clocks: phandles to the clocks feeding the frontend and backend
-    * ahb: the backend interface clock
-    * mod: the backend module clock
-    * ram: the backend DRAM clock
-  - clock-names: the clock names mentioned above
-  - resets: phandles to the reset controllers driving the backend
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoints, the second one the outputs
-
-Display Engine 2.0 Mixer
-------------------------
-
-The DE2 mixer have many functionalities, currently only layer blending is
-supported.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun8i-a83t-de2-mixer-0
-    * allwinner,sun8i-a83t-de2-mixer-1
-    * allwinner,sun8i-h3-de2-mixer-0
-    * allwinner,sun8i-r40-de2-mixer-0
-    * allwinner,sun8i-r40-de2-mixer-1
-    * allwinner,sun8i-v3s-de2-mixer
-    * allwinner,sun50i-a64-de2-mixer-0
-    * allwinner,sun50i-a64-de2-mixer-1
-    * allwinner,sun50i-h6-de3-mixer-0
-  - reg: base address and size of the memory-mapped region.
-  - clocks: phandles to the clocks feeding the mixer
-    * bus: the mixer interface clock
-    * mod: the mixer module clock
-  - clock-names: the clock names mentioned above
-  - resets: phandles to the reset controllers driving the mixer
-
-- ports: A ports node with endpoint definitions as defined in
-  Documentation/devicetree/bindings/media/video-interfaces.txt. The
-  first port should be the input endpoints, the second one the output
-
-
-Display Engine Pipeline
------------------------
-
-The display engine pipeline (and its entry point, since it can be
-either directly the backend or the frontend) is represented as an
-extra node.
-
-Required properties:
-  - compatible: value must be one of:
-    * allwinner,sun4i-a10-display-engine
-    * allwinner,sun5i-a10s-display-engine
-    * allwinner,sun5i-a13-display-engine
-    * allwinner,sun6i-a31-display-engine
-    * allwinner,sun6i-a31s-display-engine
-    * allwinner,sun7i-a20-display-engine
-    * allwinner,sun8i-a23-display-engine
-    * allwinner,sun8i-a33-display-engine
-    * allwinner,sun8i-a83t-display-engine
-    * allwinner,sun8i-h3-display-engine
-    * allwinner,sun8i-r40-display-engine
-    * allwinner,sun8i-v3s-display-engine
-    * allwinner,sun9i-a80-display-engine
-    * allwinner,sun50i-a64-display-engine
-    * allwinner,sun50i-h6-display-engine
-
-  - allwinner,pipelines: list of phandle to the display engine
-    frontends (DE 1.0) or mixers (DE 2.0/3.0) available.
-
-Example:
-
-panel: panel {
-       compatible = "olimex,lcd-olinuxino-43-ts";
-       #address-cells = <1>;
-       #size-cells = <0>;
-
-       port {
-               #address-cells = <1>;
-               #size-cells = <0>;
-
-               panel_input: endpoint {
-                       remote-endpoint = <&tcon0_out_panel>;
-               };
-       };
-};
-
-connector {
-       compatible = "hdmi-connector";
-       type = "a";
-
-       port {
-               hdmi_con_in: endpoint {
-                       remote-endpoint = <&hdmi_out_con>;
-               };
-       };
-};
-
-hdmi: hdmi@1c16000 {
-       compatible = "allwinner,sun5i-a10s-hdmi";
-       reg = <0x01c16000 0x1000>;
-       interrupts = <58>;
-       clocks = <&ccu CLK_AHB_HDMI>, <&ccu CLK_HDMI>,
-                <&ccu CLK_PLL_VIDEO0_2X>,
-                <&ccu CLK_PLL_VIDEO1_2X>;
-       clock-names = "ahb", "mod", "pll-0", "pll-1";
-       dmas = <&dma SUN4I_DMA_NORMAL 16>,
-              <&dma SUN4I_DMA_NORMAL 16>,
-              <&dma SUN4I_DMA_DEDICATED 24>;
-       dma-names = "ddc-tx", "ddc-rx", "audio-tx";
-
-       ports {
-               #address-cells = <1>;
-               #size-cells = <0>;
-
-               port@0 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <0>;
-
-                       hdmi_in_tcon0: endpoint {
-                               remote-endpoint = <&tcon0_out_hdmi>;
-                       };
-               };
-
-               port@1 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <1>;
-
-                       hdmi_out_con: endpoint {
-                               remote-endpoint = <&hdmi_con_in>;
-                       };
-               };
-       };
-};
-
-tve0: tv-encoder@1c0a000 {
-       compatible = "allwinner,sun4i-a10-tv-encoder";
-       reg = <0x01c0a000 0x1000>;
-       clocks = <&ahb_gates 34>;
-       resets = <&tcon_ch0_clk 0>;
-
-       port {
-               #address-cells = <1>;
-               #size-cells = <0>;
-
-               tve0_in_tcon0: endpoint@0 {
-                       reg = <0>;
-                       remote-endpoint = <&tcon0_out_tve0>;
-               };
-       };
-};
-
-tcon0: lcd-controller@1c0c000 {
-       compatible = "allwinner,sun5i-a13-tcon";
-       reg = <0x01c0c000 0x1000>;
-       interrupts = <44>;
-       resets = <&tcon_ch0_clk 1>;
-       reset-names = "lcd";
-       clocks = <&ahb_gates 36>,
-                <&tcon_ch0_clk>,
-                <&tcon_ch1_clk>;
-       clock-names = "ahb",
-                     "tcon-ch0",
-                     "tcon-ch1";
-       clock-output-names = "tcon-pixel-clock";
-
-       ports {
-               #address-cells = <1>;
-               #size-cells = <0>;
-
-               tcon0_in: port@0 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <0>;
-
-                       tcon0_in_be0: endpoint@0 {
-                               reg = <0>;
-                               remote-endpoint = <&be0_out_tcon0>;
-                       };
-               };
-
-               tcon0_out: port@1 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <1>;
-
-                       tcon0_out_panel: endpoint@0 {
-                               reg = <0>;
-                               remote-endpoint = <&panel_input>;
-                       };
-
-                       tcon0_out_tve0: endpoint@1 {
-                               reg = <1>;
-                               remote-endpoint = <&tve0_in_tcon0>;
-                       };
-               };
-       };
-};
-
-fe0: display-frontend@1e00000 {
-       compatible = "allwinner,sun5i-a13-display-frontend";
-       reg = <0x01e00000 0x20000>;
-       interrupts = <47>;
-       clocks = <&ahb_gates 46>, <&de_fe_clk>,
-                <&dram_gates 25>;
-       clock-names = "ahb", "mod",
-                     "ram";
-       resets = <&de_fe_clk>;
-
-       ports {
-               #address-cells = <1>;
-               #size-cells = <0>;
-
-               fe0_out: port@1 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <1>;
-
-                       fe0_out_be0: endpoint {
-                               remote-endpoint = <&be0_in_fe0>;
-                       };
-               };
-       };
-};
-
-be0: display-backend@1e60000 {
-       compatible = "allwinner,sun5i-a13-display-backend";
-       reg = <0x01e60000 0x10000>;
-       interrupts = <47>;
-       clocks = <&ahb_gates 44>, <&de_be_clk>,
-                <&dram_gates 26>;
-       clock-names = "ahb", "mod",
-                     "ram";
-       resets = <&de_be_clk>;
-
-       ports {
-               #address-cells = <1>;
-               #size-cells = <0>;
-
-               be0_in: port@0 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <0>;
-
-                       be0_in_fe0: endpoint@0 {
-                               reg = <0>;
-                               remote-endpoint = <&fe0_out_be0>;
-                       };
-               };
-
-               be0_out: port@1 {
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       reg = <1>;
-
-                       be0_out_tcon0: endpoint@0 {
-                               reg = <0>;
-                               remote-endpoint = <&tcon0_in_be0>;
-                       };
-               };
-       };
-};
-
-display-engine {
-       compatible = "allwinner,sun5i-a13-display-engine";
-       allwinner,pipelines = <&fe0>;
-};
index 4e6248e..835579e 100644
@@ -825,6 +825,8 @@ patternProperties:
     description: Sancloud Ltd
   "^sandisk,.*":
     description: Sandisk Corporation
+  "^satoz,.*":
+    description: Satoz International Co., Ltd
   "^sbs,.*":
     description: Smart Battery System
   "^schindler,.*":
index bfebe68..aa9add5 100644
@@ -5357,6 +5357,12 @@ S:       Maintained
 F:     drivers/gpu/drm/tiny/st7735r.c
 F:     Documentation/devicetree/bindings/display/sitronix,st7735r.txt
 
+DRM DRIVER FOR SONY ACX424AKP PANELS
+M:     Linus Walleij <linus.walleij@linaro.org>
+T:     git git://anongit.freedesktop.org/drm/drm-misc
+S:     Maintained
+F:     drivers/gpu/drm/panel/panel-sony-acx424akp.c
+
 DRM DRIVER FOR ST-ERICSSON MCDE
 M:     Linus Walleij <linus.walleij@linaro.org>
 T:     git git://anongit.freedesktop.org/drm/drm-misc
index 7041323..d0aa6cf 100644
@@ -168,6 +168,7 @@ config DRM_LOAD_EDID_FIRMWARE
 
 config DRM_DP_CEC
        bool "Enable DisplayPort CEC-Tunneling-over-AUX HDMI support"
+       depends on DRM
        select CEC_CORE
        help
          Choose this option if you want to enable HDMI CEC support for
index 81a531b..f42e8d4 100644
@@ -636,9 +636,8 @@ struct amdgpu_fw_vram_usage {
        struct amdgpu_bo *reserved_bo;
        void *va;
 
-       /* Offset on the top of VRAM, used as c2p write buffer.
+       /* GDDR6 training support flag.
        */
-       u64 mem_train_fb_loc;
        bool mem_train_support;
 };
 
@@ -994,8 +993,6 @@ struct amdgpu_device {
 
        bool                            pm_sysfs_en;
        bool                            ucode_sysfs_en;
-
-       bool                            in_baco;
 };
 
 static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_bo_device *bdev)
index b6713e0..3c11940 100644
@@ -46,6 +46,8 @@
 #include "soc15.h"
 #include "soc15d.h"
 #include "amdgpu_amdkfd_gfx_v9.h"
+#include "gfxhub_v1_0.h"
+#include "mmhub_v9_4.h"
 
 #define HQD_N_REGS 56
 #define DUMP_REG(addr) do {                            \
@@ -258,6 +260,22 @@ static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
        return 0;
 }
 
+static void kgd_set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
+               uint64_t page_table_base)
+{
+       struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+       if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
+               pr_err("trying to set page table base for wrong VMID %u\n",
+                      vmid);
+               return;
+       }
+
+       mmhub_v9_4_setup_vm_pt_regs(adev, vmid, page_table_base);
+
+       gfxhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
+}
+
 const struct kfd2kgd_calls arcturus_kfd2kgd = {
        .program_sh_mem_settings = kgd_gfx_v9_program_sh_mem_settings,
        .set_pasid_vmid_mapping = kgd_gfx_v9_set_pasid_vmid_mapping,
@@ -277,7 +295,7 @@ const struct kfd2kgd_calls arcturus_kfd2kgd = {
        .get_atc_vmid_pasid_mapping_info =
                        kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
        .get_tile_config = kgd_gfx_v9_get_tile_config,
-       .set_vm_context_page_table_base = kgd_gfx_v9_set_vm_context_page_table_base,
+       .set_vm_context_page_table_base = kgd_set_vm_context_page_table_base,
        .invalidate_tlbs = kgd_gfx_v9_invalidate_tlbs,
        .invalidate_tlbs_vmid = kgd_gfx_v9_invalidate_tlbs_vmid,
        .get_hive_id = amdgpu_amdkfd_get_hive_id,
index 6f1a467..e7861f0 100644
@@ -40,7 +40,6 @@
 #include "soc15d.h"
 #include "mmhub_v1_0.h"
 #include "gfxhub_v1_0.h"
-#include "mmhub_v9_4.h"
 
 
 enum hqd_dequeue_request_type {
@@ -758,8 +757,8 @@ uint32_t kgd_gfx_v9_address_watch_get_offset(struct kgd_dev *kgd,
        return 0;
 }
 
-void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
-               uint64_t page_table_base)
+static void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd,
+                       uint32_t vmid, uint64_t page_table_base)
 {
        struct amdgpu_device *adev = get_amdgpu_device(kgd);
 
@@ -769,14 +768,7 @@ void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmi
                return;
        }
 
-       /* TODO: take advantage of per-process address space size. For
-        * now, all processes share the same address space size, like
-        * on GFX8 and older.
-        */
-       if (adev->asic_type == CHIP_ARCTURUS) {
-               mmhub_v9_4_setup_vm_pt_regs(adev, vmid, page_table_base);
-       } else
-               mmhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
+       mmhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
 
        gfxhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
 }
index d9e9ad2..02b1426 100644
@@ -57,8 +57,6 @@ uint32_t kgd_gfx_v9_address_watch_get_offset(struct kgd_dev *kgd,
 
 bool kgd_gfx_v9_get_atc_vmid_pasid_mapping_info(struct kgd_dev *kgd,
                                        uint8_t vmid, uint16_t *p_pasid);
-void kgd_gfx_v9_set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
-               uint64_t page_table_base);
 int kgd_gfx_v9_invalidate_tlbs(struct kgd_dev *kgd, uint16_t pasid);
 int kgd_gfx_v9_invalidate_tlbs_vmid(struct kgd_dev *kgd, uint16_t vmid);
 int kgd_gfx_v9_get_tile_config(struct kgd_dev *kgd,
index 9ba80d8..fdd52d8 100644 (file)
@@ -2022,7 +2022,7 @@ int amdgpu_atombios_init(struct amdgpu_device *adev)
        if (adev->is_atom_fw) {
                amdgpu_atomfirmware_scratch_regs_init(adev);
                amdgpu_atomfirmware_allocate_fb_scratch(adev);
-               ret = amdgpu_atomfirmware_get_mem_train_fb_loc(adev);
+               ret = amdgpu_atomfirmware_get_mem_train_info(adev);
                if (ret) {
                        DRM_ERROR("Failed to get mem train fb location.\n");
                        return ret;
index ff4eb96..58f9d8c 100644 (file)
@@ -525,16 +525,12 @@ static int gddr6_mem_train_support(struct amdgpu_device *adev)
        return ret;
 }
 
-int amdgpu_atomfirmware_get_mem_train_fb_loc(struct amdgpu_device *adev)
+int amdgpu_atomfirmware_get_mem_train_info(struct amdgpu_device *adev)
 {
        struct atom_context *ctx = adev->mode_info.atom_context;
-       unsigned char *bios = ctx->bios;
-       struct vram_reserve_block *reserved_block;
-       int index, block_number;
+       int index;
        uint8_t frev, crev;
        uint16_t data_offset, size;
-       uint32_t start_address_in_kb;
-       uint64_t offset;
        int ret;
 
        adev->fw_vram_usage.mem_train_support = false;
@@ -569,32 +565,6 @@ int amdgpu_atomfirmware_get_mem_train_fb_loc(struct amdgpu_device *adev)
                return -EINVAL;
        }
 
-       reserved_block = (struct vram_reserve_block *)
-               (bios + data_offset + sizeof(struct atom_common_table_header));
-       block_number = ((unsigned int)size - sizeof(struct atom_common_table_header))
-               / sizeof(struct vram_reserve_block);
-       reserved_block += (block_number > 0) ? block_number-1 : 0;
-       DRM_DEBUG("block_number:0x%04x, last block: 0x%08xkb sz, %dkb fw, %dkb drv.\n",
-                 block_number,
-                 le32_to_cpu(reserved_block->start_address_in_kb),
-                 le16_to_cpu(reserved_block->used_by_firmware_in_kb),
-                 le16_to_cpu(reserved_block->used_by_driver_in_kb));
-       if (reserved_block->used_by_firmware_in_kb > 0) {
-               start_address_in_kb = le32_to_cpu(reserved_block->start_address_in_kb);
-               offset = (uint64_t)start_address_in_kb * ONE_KiB;
-               if ((offset & (ONE_MiB - 1)) < (4 * ONE_KiB + 1) ) {
-                       offset -= ONE_MiB;
-               }
-
-               offset &= ~(ONE_MiB - 1);
-               adev->fw_vram_usage.mem_train_fb_loc = offset;
-               adev->fw_vram_usage.mem_train_support = true;
-               DRM_DEBUG("mem_train_fb_loc:0x%09llx.\n", offset);
-               ret = 0;
-       } else {
-               DRM_ERROR("used_by_firmware_in_kb is 0!\n");
-               ret = -EINVAL;
-       }
-
-       return ret;
+       adev->fw_vram_usage.mem_train_support = true;
+       return 0;
 }
index f871af5..434fe2f 100644 (file)
@@ -31,7 +31,7 @@ void amdgpu_atomfirmware_scratch_regs_init(struct amdgpu_device *adev);
 int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev);
 int amdgpu_atomfirmware_get_vram_info(struct amdgpu_device *adev,
        int *vram_width, int *vram_type, int *vram_vendor);
-int amdgpu_atomfirmware_get_mem_train_fb_loc(struct amdgpu_device *adev);
+int amdgpu_atomfirmware_get_mem_train_info(struct amdgpu_device *adev);
 int amdgpu_atomfirmware_get_clock_info(struct amdgpu_device *adev);
 int amdgpu_atomfirmware_get_gfx_info(struct amdgpu_device *adev);
 bool amdgpu_atomfirmware_mem_ecc_supported(struct amdgpu_device *adev);
index a97fb75..3e35a8f 100644 (file)
@@ -613,7 +613,17 @@ static bool amdgpu_atpx_detect(void)
        bool d3_supported = false;
        struct pci_dev *parent_pdev;
 
-       while ((pdev = pci_get_class(PCI_BASE_CLASS_DISPLAY << 16, pdev)) != NULL) {
+       while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
+               vga_count++;
+
+               has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
+
+               parent_pdev = pci_upstream_bridge(pdev);
+               d3_supported |= parent_pdev && parent_pdev->bridge_d3;
+               amdgpu_atpx_get_quirks(pdev);
+       }
+
+       while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
                vga_count++;
 
                has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true);
index 1d2bbf1..64e2bab 100644 (file)
@@ -74,7 +74,7 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
                           struct amdgpu_ctx *ctx)
 {
        unsigned num_entities = amdgpu_ctx_total_num_entities();
-       unsigned i, j, k;
+       unsigned i, j;
        int r;
 
        if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
@@ -121,72 +121,57 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
        ctx->override_priority = DRM_SCHED_PRIORITY_UNSET;
 
        for (i = 0; i < AMDGPU_HW_IP_NUM; ++i) {
-               struct amdgpu_ring *rings[AMDGPU_MAX_RINGS];
-               struct drm_sched_rq *rqs[AMDGPU_MAX_RINGS];
-               unsigned num_rings = 0;
-               unsigned num_rqs = 0;
+               struct drm_gpu_scheduler **scheds;
+               struct drm_gpu_scheduler *sched;
+               unsigned num_scheds = 0;
 
                switch (i) {
                case AMDGPU_HW_IP_GFX:
-                       rings[0] = &adev->gfx.gfx_ring[0];
-                       num_rings = 1;
+                       sched = &adev->gfx.gfx_ring[0].sched;
+                       scheds = &sched;
+                       num_scheds = 1;
                        break;
                case AMDGPU_HW_IP_COMPUTE:
-                       for (j = 0; j < adev->gfx.num_compute_rings; ++j)
-                               rings[j] = &adev->gfx.compute_ring[j];
-                       num_rings = adev->gfx.num_compute_rings;
+                       scheds = adev->gfx.compute_sched;
+                       num_scheds = adev->gfx.num_compute_sched;
                        break;
                case AMDGPU_HW_IP_DMA:
-                       for (j = 0; j < adev->sdma.num_instances; ++j)
-                               rings[j] = &adev->sdma.instance[j].ring;
-                       num_rings = adev->sdma.num_instances;
+                       scheds = adev->sdma.sdma_sched;
+                       num_scheds = adev->sdma.num_sdma_sched;
                        break;
                case AMDGPU_HW_IP_UVD:
-                       rings[0] = &adev->uvd.inst[0].ring;
-                       num_rings = 1;
+                       sched = &adev->uvd.inst[0].ring.sched;
+                       scheds = &sched;
+                       num_scheds = 1;
                        break;
                case AMDGPU_HW_IP_VCE:
-                       rings[0] = &adev->vce.ring[0];
-                       num_rings = 1;
+                       sched = &adev->vce.ring[0].sched;
+                       scheds = &sched;
+                       num_scheds = 1;
                        break;
                case AMDGPU_HW_IP_UVD_ENC:
-                       rings[0] = &adev->uvd.inst[0].ring_enc[0];
-                       num_rings = 1;
+                       sched = &adev->uvd.inst[0].ring_enc[0].sched;
+                       scheds = &sched;
+                       num_scheds = 1;
                        break;
                case AMDGPU_HW_IP_VCN_DEC:
-                       for (j = 0; j < adev->vcn.num_vcn_inst; ++j) {
-                               if (adev->vcn.harvest_config & (1 << j))
-                                       continue;
-                               rings[num_rings++] = &adev->vcn.inst[j].ring_dec;
-                       }
+                       scheds = adev->vcn.vcn_dec_sched;
+                       num_scheds =  adev->vcn.num_vcn_dec_sched;
                        break;
                case AMDGPU_HW_IP_VCN_ENC:
-                       for (j = 0; j < adev->vcn.num_vcn_inst; ++j) {
-                               if (adev->vcn.harvest_config & (1 << j))
-                                       continue;
-                               for (k = 0; k < adev->vcn.num_enc_rings; ++k)
-                                       rings[num_rings++] = &adev->vcn.inst[j].ring_enc[k];
-                       }
+                       scheds = adev->vcn.vcn_enc_sched;
+                       num_scheds =  adev->vcn.num_vcn_enc_sched;
                        break;
                case AMDGPU_HW_IP_VCN_JPEG:
-                       for (j = 0; j < adev->jpeg.num_jpeg_inst; ++j) {
-                               if (adev->jpeg.harvest_config & (1 << j))
-                                       continue;
-                               rings[num_rings++] = &adev->jpeg.inst[j].ring_dec;
-                       }
+                       scheds = adev->jpeg.jpeg_sched;
+                       num_scheds =  adev->jpeg.num_jpeg_sched;
                        break;
                }
 
-               for (j = 0; j < num_rings; ++j) {
-                       if (!rings[j]->adev)
-                               continue;
-
-                       rqs[num_rqs++] = &rings[j]->sched.sched_rq[priority];
-               }
-
                for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j)
                        r = drm_sched_entity_init(&ctx->entities[i][j].entity,
-                                                 rqs, num_rqs, &ctx->guilty);
+                                                 priority, scheds,
+                                                 num_scheds, &ctx->guilty);
                if (r)
                        goto error_cleanup_entities;
        }
@@ -627,3 +612,45 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr)
        idr_destroy(&mgr->ctx_handles);
        mutex_destroy(&mgr->lock);
 }
+
+void amdgpu_ctx_init_sched(struct amdgpu_device *adev)
+{
+       int i, j;
+
+       for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+               adev->gfx.gfx_sched[i] = &adev->gfx.gfx_ring[i].sched;
+               adev->gfx.num_gfx_sched++;
+       }
+
+       for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+               adev->gfx.compute_sched[i] = &adev->gfx.compute_ring[i].sched;
+               adev->gfx.num_compute_sched++;
+       }
+
+       for (i = 0; i < adev->sdma.num_instances; i++) {
+               adev->sdma.sdma_sched[i] = &adev->sdma.instance[i].ring.sched;
+               adev->sdma.num_sdma_sched++;
+       }
+
+       for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+               if (adev->vcn.harvest_config & (1 << i))
+                       continue;
+               adev->vcn.vcn_dec_sched[adev->vcn.num_vcn_dec_sched++] =
+                       &adev->vcn.inst[i].ring_dec.sched;
+       }
+
+       for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+               if (adev->vcn.harvest_config & (1 << i))
+                       continue;
+               for (j = 0; j < adev->vcn.num_enc_rings; ++j)
+                       adev->vcn.vcn_enc_sched[adev->vcn.num_vcn_enc_sched++] =
+                               &adev->vcn.inst[i].ring_enc[j].sched;
+       }
+
+       for (i = 0; i < adev->jpeg.num_jpeg_inst; ++i) {
+               if (adev->jpeg.harvest_config & (1 << i))
+                       continue;
+               adev->jpeg.jpeg_sched[adev->jpeg.num_jpeg_sched++] =
+                       &adev->jpeg.inst[i].ring_dec.sched;
+       }
+}
index da80863..4ad90a4 100644 (file)
@@ -87,4 +87,7 @@ void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr);
 long amdgpu_ctx_mgr_entity_flush(struct amdgpu_ctx_mgr *mgr, long timeout);
 void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
 
+void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
+
+
 #endif
index 8e6726e..63343bb 100644 (file)
@@ -129,7 +129,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, struct file *f,
                        sh_bank = 0xFFFFFFFF;
                if (instance_bank == 0x3FF)
                        instance_bank = 0xFFFFFFFF;
-               use_bank = 1;
+               use_bank = true;
        } else if (*pos & (1ULL << 61)) {
 
                me = (*pos & GENMASK_ULL(33, 24)) >> 24;
@@ -137,9 +137,9 @@ static int  amdgpu_debugfs_process_reg_op(bool read, struct file *f,
                queue = (*pos & GENMASK_ULL(53, 44)) >> 44;
                vmid = (*pos & GENMASK_ULL(58, 54)) >> 54;
 
-               use_ring = 1;
+               use_ring = true;
        } else {
-               use_bank = use_ring = 0;
+               use_bank = use_ring = false;
        }
 
        *pos &= (1UL << 22) - 1;
index a979468..9b4c18b 100644 (file)
@@ -66,6 +66,7 @@
 #include "amdgpu_pmu.h"
 
 #include <linux/suspend.h>
+#include <drm/task_barrier.h>
 
 MODULE_FIRMWARE("amdgpu/vega10_gpu_info.bin");
 MODULE_FIRMWARE("amdgpu/vega12_gpu_info.bin");
@@ -1031,8 +1032,6 @@ def_value:
  */
 static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
 {
-       int ret = 0;
-
        if (amdgpu_sched_jobs < 4) {
                dev_warn(adev->dev, "sched jobs (%d) must be at least 4\n",
                         amdgpu_sched_jobs);
@@ -1072,7 +1071,7 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
 
        adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, amdgpu_fw_load_type);
 
-       return ret;
+       return 0;
 }
 
 /**
@@ -1810,7 +1809,8 @@ static int amdgpu_device_fw_loading(struct amdgpu_device *adev)
                }
        }
 
-       r = amdgpu_pm_load_smu_firmware(adev, &smu_version);
+       if (!amdgpu_sriov_vf(adev) || adev->asic_type == CHIP_TONGA)
+               r = amdgpu_pm_load_smu_firmware(adev, &smu_version);
 
        return r;
 }
@@ -2439,7 +2439,8 @@ static int amdgpu_device_ip_reinit_late_sriov(struct amdgpu_device *adev)
                AMD_IP_BLOCK_TYPE_GFX,
                AMD_IP_BLOCK_TYPE_SDMA,
                AMD_IP_BLOCK_TYPE_UVD,
-               AMD_IP_BLOCK_TYPE_VCE
+               AMD_IP_BLOCK_TYPE_VCE,
+               AMD_IP_BLOCK_TYPE_VCN
        };
 
        for (i = 0; i < ARRAY_SIZE(ip_order); i++) {
@@ -2454,7 +2455,11 @@ static int amdgpu_device_ip_reinit_late_sriov(struct amdgpu_device *adev)
                                block->status.hw)
                                continue;
 
-                       r = block->version->funcs->hw_init(adev);
+                       if (block->version->type == AMD_IP_BLOCK_TYPE_SMC)
+                               r = block->version->funcs->resume(adev);
+                       else
+                               r = block->version->funcs->hw_init(adev);
+
                        DRM_INFO("RE-INIT-late: %s %s\n", block->version->funcs->name, r?"failed":"succeeded");
                        if (r)
                                return r;
@@ -2663,14 +2668,38 @@ static void amdgpu_device_xgmi_reset_func(struct work_struct *__work)
 {
        struct amdgpu_device *adev =
                container_of(__work, struct amdgpu_device, xgmi_reset_work);
+       struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev, 0);
 
-       if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)
-               adev->asic_reset_res = (adev->in_baco == false) ?
-                               amdgpu_device_baco_enter(adev->ddev) :
-                               amdgpu_device_baco_exit(adev->ddev);
-       else
-               adev->asic_reset_res = amdgpu_asic_reset(adev);
+       /* It's a bug to not have a hive within this function */
+       if (WARN_ON(!hive))
+               return;
+
+       /*
+        * Use task barrier to synchronize all xgmi reset works across the
+        * hive. task_barrier_enter and task_barrier_exit will block
+        * until all the threads running the xgmi reset works reach
+        * those points. task_barrier_full will do both blocks.
+        */
+       if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
+
+               task_barrier_enter(&hive->tb);
+               adev->asic_reset_res = amdgpu_device_baco_enter(adev->ddev);
+
+               if (adev->asic_reset_res)
+                       goto fail;
 
+               task_barrier_exit(&hive->tb);
+               adev->asic_reset_res = amdgpu_device_baco_exit(adev->ddev);
+
+               if (adev->asic_reset_res)
+                       goto fail;
+       } else {
+
+               task_barrier_full(&hive->tb);
+               adev->asic_reset_res =  amdgpu_asic_reset(adev);
+       }
+
+fail:
        if (adev->asic_reset_res)
                DRM_WARN("ASIC reset failed with error, %d for drm dev, %s",
                         adev->asic_reset_res, adev->ddev->unique);
@@ -2785,7 +2814,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
        adev->mman.buffer_funcs = NULL;
        adev->mman.buffer_funcs_ring = NULL;
        adev->vm_manager.vm_pte_funcs = NULL;
-       adev->vm_manager.vm_pte_num_rqs = 0;
+       adev->vm_manager.vm_pte_num_scheds = 0;
        adev->gmc.gmc_funcs = NULL;
        adev->fence_context = dma_fence_context_alloc(AMDGPU_MAX_RINGS);
        bitmap_zero(adev->gfx.pipe_reserve_bitmap, AMDGPU_MAX_COMPUTE_QUEUES);
@@ -3029,6 +3058,14 @@ fence_driver_init:
                goto failed;
        }
 
+       DRM_DEBUG("SE %d, SH per SE %d, CU per SH %d, active_cu_number %d\n",
+                       adev->gfx.config.max_shader_engines,
+                       adev->gfx.config.max_sh_per_se,
+                       adev->gfx.config.max_cu_per_sh,
+                       adev->gfx.cu_info.number);
+
+       amdgpu_ctx_init_sched(adev);
+
        adev->accel_working = true;
 
        amdgpu_vm_check_compute_bug(adev);
@@ -3660,8 +3697,6 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev,
        if (r)
                return r;
 
-       amdgpu_amdkfd_pre_reset(adev);
-
        /* Resume IP prior to SMC */
        r = amdgpu_device_ip_reinit_early_sriov(adev);
        if (r)
@@ -3790,18 +3825,13 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
        return r;
 }
 
-static int amdgpu_do_asic_reset(struct amdgpu_device *adev,
-                              struct amdgpu_hive_info *hive,
+static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
                               struct list_head *device_list_handle,
                               bool *need_full_reset_arg)
 {
        struct amdgpu_device *tmp_adev = NULL;
        bool need_full_reset = *need_full_reset_arg, vram_lost = false;
        int r = 0;
-       int cpu = smp_processor_id();
-       bool use_baco =
-               (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) ?
-               true : false;
 
        /*
         * ASIC reset has to be done on all HGMI hive nodes ASAP
@@ -3809,62 +3839,22 @@ static int amdgpu_do_asic_reset(struct amdgpu_device *adev,
         */
        if (need_full_reset) {
                list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-                       /*
-                        * For XGMI run all resets in parallel to speed up the
-                        * process by scheduling the highpri wq on different
-                        * cpus. For XGMI with baco reset, all nodes must enter
-                        * baco within close proximity before anyone exit.
-                        */
+                       /* For XGMI run all resets in parallel to speed up the process */
                        if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
-                               if (!queue_work_on(cpu, system_highpri_wq,
-                                                  &tmp_adev->xgmi_reset_work))
+                               if (!queue_work(system_unbound_wq, &tmp_adev->xgmi_reset_work))
                                        r = -EALREADY;
-                               cpu = cpumask_next(cpu, cpu_online_mask);
                        } else
                                r = amdgpu_asic_reset(tmp_adev);
-                       if (r)
-                               break;
-               }
 
-               /* For XGMI wait for all work to complete before proceed */
-               if (!r) {
-                       list_for_each_entry(tmp_adev, device_list_handle,
-                                           gmc.xgmi.head) {
-                               if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
-                                       flush_work(&tmp_adev->xgmi_reset_work);
-                                       r = tmp_adev->asic_reset_res;
-                                       if (r)
-                                               break;
-                                       if (use_baco)
-                                               tmp_adev->in_baco = true;
-                               }
-                       }
-               }
-
-               /*
-                * For XGMI with baco reset, need exit baco phase by scheduling
-                * xgmi_reset_work one more time. PSP reset and sGPU skips this
-                * phase. Not assume the situation that PSP reset and baco reset
-                * coexist within an XGMI hive.
-                */
-
-               if (!r && use_baco) {
-                       cpu = smp_processor_id();
-                       list_for_each_entry(tmp_adev, device_list_handle,
-                                           gmc.xgmi.head) {
-                               if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
-                                       if (!queue_work_on(cpu,
-                                               system_highpri_wq,
-                                               &tmp_adev->xgmi_reset_work))
-                                               r = -EALREADY;
-                                       if (r)
-                                               break;
-                                       cpu = cpumask_next(cpu, cpu_online_mask);
-                               }
+                       if (r) {
+                               DRM_ERROR("ASIC reset failed with error, %d for drm dev, %s",
+                                        r, tmp_adev->ddev->unique);
+                               break;
                        }
                }
 
-               if (!r && use_baco) {
+               /* For XGMI wait for all resets to complete before proceed */
+               if (!r) {
                        list_for_each_entry(tmp_adev, device_list_handle,
                                            gmc.xgmi.head) {
                                if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
@@ -3872,16 +3862,9 @@ static int amdgpu_do_asic_reset(struct amdgpu_device *adev,
                                        r = tmp_adev->asic_reset_res;
                                        if (r)
                                                break;
-                                       tmp_adev->in_baco = false;
                                }
                        }
                }
-
-               if (r) {
-                       DRM_ERROR("ASIC reset failed with error, %d for drm dev, %s",
-                                r, tmp_adev->ddev->unique);
-                       goto end;
-               }
        }
 
        if (!r && amdgpu_ras_intr_triggered())
@@ -3974,7 +3957,7 @@ static bool amdgpu_device_lock_adev(struct amdgpu_device *adev, bool trylock)
                mutex_lock(&adev->lock_reset);
 
        atomic_inc(&adev->gpu_reset_counter);
-       adev->in_gpu_reset = 1;
+       adev->in_gpu_reset = true;
        switch (amdgpu_asic_reset_method(adev)) {
        case AMD_RESET_METHOD_MODE1:
                adev->mp1_state = PP_MP1_STATE_SHUTDOWN;
@@ -3994,7 +3977,7 @@ static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
 {
        amdgpu_vf_error_trans_all(adev);
        adev->mp1_state = PP_MP1_STATE_NONE;
-       adev->in_gpu_reset = 0;
+       adev->in_gpu_reset = false;
        mutex_unlock(&adev->lock_reset);
 }
 
@@ -4175,8 +4158,7 @@ retry:    /* Rest of adevs pre asic reset from XGMI hive. */
                if (r)
                        adev->asic_reset_res = r;
        } else {
-               r  = amdgpu_do_asic_reset(adev, hive, device_list_handle,
-                                         &need_full_reset);
+               r  = amdgpu_do_asic_reset(hive, device_list_handle, &need_full_reset);
                if (r && r == -EAGAIN)
                        goto retry;
        }
index 9cc270e..cd76fbf 100644 (file)
@@ -951,16 +951,31 @@ int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev, uint32_t block
        case AMD_IP_BLOCK_TYPE_VCN:
        case AMD_IP_BLOCK_TYPE_VCE:
        case AMD_IP_BLOCK_TYPE_SDMA:
+               if (swsmu) {
+                       ret = smu_dpm_set_power_gate(&adev->smu, block_type, gate);
+               } else {
+                       if (adev->powerplay.pp_funcs &&
+                           adev->powerplay.pp_funcs->set_powergating_by_smu) {
+                               mutex_lock(&adev->pm.mutex);
+                               ret = ((adev)->powerplay.pp_funcs->set_powergating_by_smu(
+                                       (adev)->powerplay.pp_handle, block_type, gate));
+                               mutex_unlock(&adev->pm.mutex);
+                       }
+               }
+               break;
+       case AMD_IP_BLOCK_TYPE_JPEG:
                if (swsmu)
                        ret = smu_dpm_set_power_gate(&adev->smu, block_type, gate);
-               else
-                       ret = ((adev)->powerplay.pp_funcs->set_powergating_by_smu(
-                               (adev)->powerplay.pp_handle, block_type, gate));
                break;
        case AMD_IP_BLOCK_TYPE_GMC:
        case AMD_IP_BLOCK_TYPE_ACP:
-               ret = ((adev)->powerplay.pp_funcs->set_powergating_by_smu(
+               if (adev->powerplay.pp_funcs &&
+                   adev->powerplay.pp_funcs->set_powergating_by_smu) {
+                       mutex_lock(&adev->pm.mutex);
+                       ret = ((adev)->powerplay.pp_funcs->set_powergating_by_smu(
                                (adev)->powerplay.pp_handle, block_type, gate));
+                       mutex_unlock(&adev->pm.mutex);
+               }
                break;
        default:
                break;
index 3f6f14c..a9c4edc 100644 (file)
@@ -142,7 +142,7 @@ int amdgpu_async_gfx_ring = 1;
 int amdgpu_mcbp = 0;
 int amdgpu_discovery = -1;
 int amdgpu_mes = 0;
-int amdgpu_noretry = 1;
+int amdgpu_noretry;
 int amdgpu_force_asic_type = -1;
 
 struct amdgpu_mgpu_info mgpu_info = {
@@ -588,7 +588,7 @@ MODULE_PARM_DESC(mes,
 module_param_named(mes, amdgpu_mes, int, 0444);
 
 MODULE_PARM_DESC(noretry,
-       "Disable retry faults (0 = retry enabled, 1 = retry disabled (default))");
+       "Disable retry faults (0 = retry enabled (default), 1 = retry disabled)");
 module_param_named(noretry, amdgpu_noretry, int, 0644);
 
 /**
@@ -1203,13 +1203,23 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
        struct pci_dev *pdev = to_pci_dev(dev);
        struct drm_device *drm_dev = pci_get_drvdata(pdev);
        struct amdgpu_device *adev = drm_dev->dev_private;
-       int ret;
+       int ret, i;
 
        if (!adev->runpm) {
                pm_runtime_forbid(dev);
                return -EBUSY;
        }
 
+       /* wait for all rings to drain before suspending */
+       for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+               struct amdgpu_ring *ring = adev->rings[i];
+               if (ring && ring->sched.ready) {
+                       ret = amdgpu_fence_wait_empty(ring);
+                       if (ret)
+                               return -EBUSY;
+               }
+       }
+
        if (amdgpu_device_supports_boco(drm_dev))
                drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
        drm_kms_helper_poll_disable(drm_dev);
@@ -1381,7 +1391,8 @@ static struct drm_driver kms_driver = {
        .driver_features =
            DRIVER_USE_AGP | DRIVER_ATOMIC |
            DRIVER_GEM |
-           DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ,
+           DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ |
+           DRIVER_SYNCOBJ_TIMELINE,
        .load = amdgpu_driver_load_kms,
        .open = amdgpu_driver_open_kms,
        .postclose = amdgpu_driver_postclose_kms,
index 377fe20..e9efee0 100644 (file)
@@ -34,6 +34,7 @@
 #include <linux/kref.h>
 #include <linux/slab.h>
 #include <linux/firmware.h>
+#include <linux/pm_runtime.h>
 
 #include <drm/drm_debugfs.h>
 
@@ -154,7 +155,7 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
                       seq);
        amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
                               seq, flags | AMDGPU_FENCE_FLAG_INT);
-
+       pm_runtime_get_noresume(adev->ddev->dev);
        ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
        if (unlikely(rcu_dereference_protected(*ptr, 1))) {
                struct dma_fence *old;
@@ -234,6 +235,7 @@ static void amdgpu_fence_schedule_fallback(struct amdgpu_ring *ring)
 bool amdgpu_fence_process(struct amdgpu_ring *ring)
 {
        struct amdgpu_fence_driver *drv = &ring->fence_drv;
+       struct amdgpu_device *adev = ring->adev;
        uint32_t seq, last_seq;
        int r;
 
@@ -274,6 +276,8 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
                        BUG();
 
                dma_fence_put(fence);
+               pm_runtime_mark_last_busy(adev->ddev->dev);
+               pm_runtime_put_autosuspend(adev->ddev->dev);
        } while (last_seq != seq);
 
        return true;
index e00b461..db7b2b3 100644 (file)
@@ -641,7 +641,7 @@ int amdgpu_gfx_process_ras_data_cb(struct amdgpu_device *adev,
                kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
                if (adev->gfx.funcs->query_ras_error_count)
                        adev->gfx.funcs->query_ras_error_count(adev, err_data);
-               amdgpu_ras_reset_gpu(adev, 0);
+               amdgpu_ras_reset_gpu(adev);
        }
        return AMDGPU_RAS_SUCCESS;
 }
index 0ae0a27..8e88e04 100644 (file)
@@ -269,8 +269,12 @@ struct amdgpu_gfx {
        bool                            me_fw_write_wait;
        bool                            cp_fw_write_wait;
        struct amdgpu_ring              gfx_ring[AMDGPU_MAX_GFX_RINGS];
+       struct drm_gpu_scheduler        *gfx_sched[AMDGPU_MAX_GFX_RINGS];
+       uint32_t                        num_gfx_sched;
        unsigned                        num_gfx_rings;
        struct amdgpu_ring              compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
+       struct drm_gpu_scheduler        *compute_sched[AMDGPU_MAX_COMPUTE_RINGS];
+       uint32_t                        num_compute_sched;
        unsigned                        num_compute_rings;
        struct amdgpu_irq_src           eop_irq;
        struct amdgpu_irq_src           priv_reg_irq;
index a12f33c..5884ab5 100644 (file)
@@ -223,7 +223,7 @@ void amdgpu_gmc_agp_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
        u64 size_af, size_bf;
 
        if (amdgpu_sriov_vf(adev)) {
-               mc->agp_start = 0xffffffff;
+               mc->agp_start = 0xffffffffffff;
                mc->agp_end = 0x0;
                mc->agp_size = 0;
 
@@ -333,3 +333,43 @@ void amdgpu_gmc_ras_fini(struct amdgpu_device *adev)
        amdgpu_mmhub_ras_fini(adev);
        amdgpu_xgmi_ras_fini(adev);
 }
+
+       /*
+        * The latest engine allocation on gfx9/10 is:
+        * Engine 2, 3: firmware
+        * Engine 0, 1, 4~16: amdgpu ring,
+        *                    subject to change when ring number changes
+        * Engine 17: Gart flushes
+        */
+#define GFXHUB_FREE_VM_INV_ENGS_BITMAP         0x1FFF3
+#define MMHUB_FREE_VM_INV_ENGS_BITMAP          0x1FFF3
+
+int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
+{
+       struct amdgpu_ring *ring;
+       unsigned vm_inv_engs[AMDGPU_MAX_VMHUBS] =
+               {GFXHUB_FREE_VM_INV_ENGS_BITMAP, MMHUB_FREE_VM_INV_ENGS_BITMAP,
+               GFXHUB_FREE_VM_INV_ENGS_BITMAP};
+       unsigned i;
+       unsigned vmhub, inv_eng;
+
+       for (i = 0; i < adev->num_rings; ++i) {
+               ring = adev->rings[i];
+               vmhub = ring->funcs->vmhub;
+
+               inv_eng = ffs(vm_inv_engs[vmhub]);
+               if (!inv_eng) {
+                       dev_err(adev->dev, "no VM inv eng for ring %s\n",
+                               ring->name);
+                       return -EINVAL;
+               }
+
+               ring->vm_inv_eng = inv_eng - 1;
+               vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
+
+               dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
+                        ring->name, ring->vm_inv_eng, ring->funcs->vmhub);
+       }
+
+       return 0;
+}
index b499a3d..c91dd60 100644
@@ -267,5 +267,6 @@ bool amdgpu_gmc_filter_faults(struct amdgpu_device *adev, uint64_t addr,
                              uint16_t pasid, uint64_t timestamp);
 int amdgpu_gmc_ras_late_init(struct amdgpu_device *adev);
 void amdgpu_gmc_ras_fini(struct amdgpu_device *adev);
+int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev);
 
 #endif
index 5131a0a..bd9ef9c 100644
@@ -43,6 +43,8 @@ struct amdgpu_jpeg {
        uint8_t num_jpeg_inst;
        struct amdgpu_jpeg_inst inst[AMDGPU_MAX_JPEG_INSTANCES];
        struct amdgpu_jpeg_reg internal;
+       struct drm_gpu_scheduler *jpeg_sched[AMDGPU_MAX_JPEG_INSTANCES];
+       uint32_t num_jpeg_sched;
        unsigned harvest_config;
        struct delayed_work idle_work;
        enum amd_powergating_state cur_state;
index b32adda..285d460 100644
@@ -2762,17 +2762,12 @@ static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
 {
        int ret = 0;
-       if (is_support_sw_smu(adev)) {
-           ret = smu_dpm_set_power_gate(&adev->smu, AMD_IP_BLOCK_TYPE_UVD, enable);
-           if (ret)
-               DRM_ERROR("[SW SMU]: dpm enable uvd failed, state = %s, ret = %d. \n",
-                         enable ? "true" : "false", ret);
-       } else if (adev->powerplay.pp_funcs->set_powergating_by_smu) {
-               /* enable/disable UVD */
-               mutex_lock(&adev->pm.mutex);
-               amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_UVD, !enable);
-               mutex_unlock(&adev->pm.mutex);
-       }
+
+       ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_UVD, !enable);
+       if (ret)
+               DRM_ERROR("Dpm %s uvd failed, ret = %d. \n",
+                         enable ? "enable" : "disable", ret);
+
        /* enable/disable Low Memory PState for UVD (4k videos) */
        if (adev->asic_type == CHIP_STONEY &&
                adev->uvd.decode_image_width >= WIDTH_4K) {
@@ -2789,17 +2784,11 @@ void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
 {
        int ret = 0;
-       if (is_support_sw_smu(adev)) {
-           ret = smu_dpm_set_power_gate(&adev->smu, AMD_IP_BLOCK_TYPE_VCE, enable);
-           if (ret)
-               DRM_ERROR("[SW SMU]: dpm enable vce failed, state = %s, ret = %d. \n",
-                         enable ? "true" : "false", ret);
-       } else if (adev->powerplay.pp_funcs->set_powergating_by_smu) {
-               /* enable/disable VCE */
-               mutex_lock(&adev->pm.mutex);
-               amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_VCE, !enable);
-               mutex_unlock(&adev->pm.mutex);
-       }
+
+       ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_VCE, !enable);
+       if (ret)
+               DRM_ERROR("Dpm %s vce failed, ret = %d. \n",
+                         enable ? "enable" : "disable", ret);
 }
 
 void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
@@ -2818,12 +2807,10 @@ void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
 {
        int ret = 0;
 
-       if (is_support_sw_smu(adev)) {
-               ret = smu_dpm_set_power_gate(&adev->smu, AMD_IP_BLOCK_TYPE_JPEG, enable);
-               if (ret)
-                       DRM_ERROR("[SW SMU]: dpm enable jpeg failed, state = %s, ret = %d. \n",
-                                 enable ? "true" : "false", ret);
-       }
+       ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_JPEG, !enable);
+       if (ret)
+               DRM_ERROR("Dpm %s jpeg failed, ret = %d. \n",
+                         enable ? "enable" : "disable", ret);
 }
 
 int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version)
index 0e6dba9..cf21ad0 100644
@@ -107,7 +107,7 @@ static void amdgpu_perf_read(struct perf_event *event)
                default:
                        count = 0;
                        break;
-               };
+               }
        } while (local64_cmpxchg(&hwc->prev_count, prev, count) != prev);
 
        local64_add(count - prev, &event->count);
@@ -130,7 +130,7 @@ static void amdgpu_perf_stop(struct perf_event *event, int flags)
                break;
        default:
                break;
-       };
+       }
 
        WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
        hwc->state |= PERF_HES_STOPPED;
@@ -160,7 +160,7 @@ static int amdgpu_perf_add(struct perf_event *event, int flags)
                break;
        default:
                return 0;
-       };
+       }
 
        if (retval)
                return retval;
@@ -188,7 +188,7 @@ static void amdgpu_perf_del(struct perf_event *event, int flags)
                break;
        default:
                break;
-       };
+       }
 
        perf_event_update_userpage(event);
 }
index c14f2cc..281d896 100644
@@ -191,9 +191,9 @@ psp_cmd_submit_buf(struct psp_context *psp,
                if (ucode)
                        DRM_WARN("failed to load ucode id (%d) ",
                                  ucode->ucode_id);
-               DRM_DEBUG_DRIVER("psp command (0x%X) failed and response status is (0x%X)\n",
+               DRM_WARN("psp command (0x%X) failed and response status is (0x%X)\n",
                         psp->cmd_buf_mem->cmd_id,
-                        psp->cmd_buf_mem->resp.status & GFX_CMD_STATUS_MASK);
+                        psp->cmd_buf_mem->resp.status);
                if (!timeout) {
                        mutex_unlock(&psp->mutex);
                        return -EINVAL;
@@ -365,11 +365,11 @@ static int psp_asd_load(struct psp_context *psp)
        return ret;
 }
 
-static void psp_prep_asd_unload_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                       uint32_t asd_session_id)
+static void psp_prep_ta_unload_cmd_buf(struct psp_gfx_cmd_resp *cmd,
+                                      uint32_t session_id)
 {
        cmd->cmd_id = GFX_CMD_ID_UNLOAD_TA;
-       cmd->cmd.cmd_unload_ta.session_id = asd_session_id;
+       cmd->cmd.cmd_unload_ta.session_id = session_id;
 }
 
 static int psp_asd_unload(struct psp_context *psp)
@@ -387,7 +387,7 @@ static int psp_asd_unload(struct psp_context *psp)
        if (!cmd)
                return -ENOMEM;
 
-       psp_prep_asd_unload_cmd_buf(cmd, psp->asd_context.session_id);
+       psp_prep_ta_unload_cmd_buf(cmd, psp->asd_context.session_id);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd,
                                 psp->fence_buf_mc_addr);
@@ -427,18 +427,20 @@ int psp_reg_program(struct psp_context *psp, enum psp_reg_prog_id reg,
        return ret;
 }
 
-static void psp_prep_xgmi_ta_load_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                         uint64_t xgmi_ta_mc, uint64_t xgmi_mc_shared,
-                                         uint32_t xgmi_ta_size, uint32_t shared_size)
+static void psp_prep_ta_load_cmd_buf(struct psp_gfx_cmd_resp *cmd,
+                                    uint64_t ta_bin_mc,
+                                    uint32_t ta_bin_size,
+                                    uint64_t ta_shared_mc,
+                                    uint32_t ta_shared_size)
 {
-        cmd->cmd_id = GFX_CMD_ID_LOAD_TA;
-        cmd->cmd.cmd_load_ta.app_phy_addr_lo = lower_32_bits(xgmi_ta_mc);
-        cmd->cmd.cmd_load_ta.app_phy_addr_hi = upper_32_bits(xgmi_ta_mc);
-        cmd->cmd.cmd_load_ta.app_len = xgmi_ta_size;
+       cmd->cmd_id                             = GFX_CMD_ID_LOAD_TA;
+       cmd->cmd.cmd_load_ta.app_phy_addr_lo    = lower_32_bits(ta_bin_mc);
+       cmd->cmd.cmd_load_ta.app_phy_addr_hi    = upper_32_bits(ta_bin_mc);
+       cmd->cmd.cmd_load_ta.app_len            = ta_bin_size;
 
-        cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_lo = lower_32_bits(xgmi_mc_shared);
-        cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_hi = upper_32_bits(xgmi_mc_shared);
-        cmd->cmd.cmd_load_ta.cmd_buf_len = shared_size;
+       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_lo = lower_32_bits(ta_shared_mc);
+       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_hi = upper_32_bits(ta_shared_mc);
+       cmd->cmd.cmd_load_ta.cmd_buf_len         = ta_shared_size;
 }
 
 static int psp_xgmi_init_shared_buf(struct psp_context *psp)
@@ -458,6 +460,36 @@ static int psp_xgmi_init_shared_buf(struct psp_context *psp)
        return ret;
 }
 
+static void psp_prep_ta_invoke_cmd_buf(struct psp_gfx_cmd_resp *cmd,
+                                      uint32_t ta_cmd_id,
+                                      uint32_t session_id)
+{
+       cmd->cmd_id                             = GFX_CMD_ID_INVOKE_CMD;
+       cmd->cmd.cmd_invoke_cmd.session_id      = session_id;
+       cmd->cmd.cmd_invoke_cmd.ta_cmd_id       = ta_cmd_id;
+}
+
+int psp_ta_invoke(struct psp_context *psp,
+                 uint32_t ta_cmd_id,
+                 uint32_t session_id)
+{
+       int ret;
+       struct psp_gfx_cmd_resp *cmd;
+
+       cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
+       if (!cmd)
+               return -ENOMEM;
+
+       psp_prep_ta_invoke_cmd_buf(cmd, ta_cmd_id, session_id);
+
+       ret = psp_cmd_submit_buf(psp, NULL, cmd,
+                                psp->fence_buf_mc_addr);
+
+       kfree(cmd);
+
+       return ret;
+}
+
 static int psp_xgmi_load(struct psp_context *psp)
 {
        int ret;
@@ -466,8 +498,6 @@ static int psp_xgmi_load(struct psp_context *psp)
        /*
         * TODO: bypass the loading in sriov for now
         */
-       if (amdgpu_sriov_vf(psp->adev))
-               return 0;
 
        cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
        if (!cmd)
@@ -476,9 +506,11 @@ static int psp_xgmi_load(struct psp_context *psp)
        memset(psp->fw_pri_buf, 0, PSP_1_MEG);
        memcpy(psp->fw_pri_buf, psp->ta_xgmi_start_addr, psp->ta_xgmi_ucode_size);
 
-       psp_prep_xgmi_ta_load_cmd_buf(cmd, psp->fw_pri_mc_addr,
-                                     psp->xgmi_context.xgmi_shared_mc_addr,
-                                     psp->ta_xgmi_ucode_size, PSP_XGMI_SHARED_MEM_SIZE);
+       psp_prep_ta_load_cmd_buf(cmd,
+                                psp->fw_pri_mc_addr,
+                                psp->ta_xgmi_ucode_size,
+                                psp->xgmi_context.xgmi_shared_mc_addr,
+                                PSP_XGMI_SHARED_MEM_SIZE);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd,
                                 psp->fence_buf_mc_addr);
@@ -493,13 +525,6 @@ static int psp_xgmi_load(struct psp_context *psp)
        return ret;
 }
 
-static void psp_prep_xgmi_ta_unload_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                           uint32_t xgmi_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_UNLOAD_TA;
-       cmd->cmd.cmd_unload_ta.session_id = xgmi_session_id;
-}
-
 static int psp_xgmi_unload(struct psp_context *psp)
 {
        int ret;
@@ -508,14 +533,12 @@ static int psp_xgmi_unload(struct psp_context *psp)
        /*
         * TODO: bypass the unloading in sriov for now
         */
-       if (amdgpu_sriov_vf(psp->adev))
-               return 0;
 
        cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
        if (!cmd)
                return -ENOMEM;
 
-       psp_prep_xgmi_ta_unload_cmd_buf(cmd, psp->xgmi_context.session_id);
+       psp_prep_ta_unload_cmd_buf(cmd, psp->xgmi_context.session_id);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd,
                                 psp->fence_buf_mc_addr);
@@ -525,40 +548,9 @@ static int psp_xgmi_unload(struct psp_context *psp)
        return ret;
 }
 
-static void psp_prep_xgmi_ta_invoke_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                           uint32_t ta_cmd_id,
-                                           uint32_t xgmi_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_INVOKE_CMD;
-       cmd->cmd.cmd_invoke_cmd.session_id = xgmi_session_id;
-       cmd->cmd.cmd_invoke_cmd.ta_cmd_id = ta_cmd_id;
-       /* Note: cmd_invoke_cmd.buf is not used for now */
-}
-
 int psp_xgmi_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
 {
-       int ret;
-       struct psp_gfx_cmd_resp *cmd;
-
-       /*
-        * TODO: bypass the loading in sriov for now
-       */
-       if (amdgpu_sriov_vf(psp->adev))
-               return 0;
-
-       cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
-       if (!cmd)
-               return -ENOMEM;
-
-       psp_prep_xgmi_ta_invoke_cmd_buf(cmd, ta_cmd_id,
-                                       psp->xgmi_context.session_id);
-
-       ret = psp_cmd_submit_buf(psp, NULL, cmd,
-                                psp->fence_buf_mc_addr);
-
-       kfree(cmd);
-
-        return ret;
+       return psp_ta_invoke(psp, ta_cmd_id, psp->xgmi_context.session_id);
 }
 
 static int psp_xgmi_terminate(struct psp_context *psp)
@@ -614,20 +606,6 @@ static int psp_xgmi_initialize(struct psp_context *psp)
 }
 
 // ras begin
-static void psp_prep_ras_ta_load_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-               uint64_t ras_ta_mc, uint64_t ras_mc_shared,
-               uint32_t ras_ta_size, uint32_t shared_size)
-{
-       cmd->cmd_id = GFX_CMD_ID_LOAD_TA;
-       cmd->cmd.cmd_load_ta.app_phy_addr_lo = lower_32_bits(ras_ta_mc);
-       cmd->cmd.cmd_load_ta.app_phy_addr_hi = upper_32_bits(ras_ta_mc);
-       cmd->cmd.cmd_load_ta.app_len = ras_ta_size;
-
-       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_lo = lower_32_bits(ras_mc_shared);
-       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_hi = upper_32_bits(ras_mc_shared);
-       cmd->cmd.cmd_load_ta.cmd_buf_len = shared_size;
-}
-
 static int psp_ras_init_shared_buf(struct psp_context *psp)
 {
        int ret;
@@ -663,15 +641,17 @@ static int psp_ras_load(struct psp_context *psp)
        memset(psp->fw_pri_buf, 0, PSP_1_MEG);
        memcpy(psp->fw_pri_buf, psp->ta_ras_start_addr, psp->ta_ras_ucode_size);
 
-       psp_prep_ras_ta_load_cmd_buf(cmd, psp->fw_pri_mc_addr,
-                       psp->ras.ras_shared_mc_addr,
-                       psp->ta_ras_ucode_size, PSP_RAS_SHARED_MEM_SIZE);
+       psp_prep_ta_load_cmd_buf(cmd,
+                                psp->fw_pri_mc_addr,
+                                psp->ta_ras_ucode_size,
+                                psp->ras.ras_shared_mc_addr,
+                                PSP_RAS_SHARED_MEM_SIZE);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd,
                        psp->fence_buf_mc_addr);
 
        if (!ret) {
-               psp->ras.ras_initialized = 1;
+               psp->ras.ras_initialized = true;
                psp->ras.session_id = cmd->resp.session_id;
        }
 
@@ -680,13 +660,6 @@ static int psp_ras_load(struct psp_context *psp)
        return ret;
 }
 
-static void psp_prep_ras_ta_unload_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                               uint32_t ras_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_UNLOAD_TA;
-       cmd->cmd.cmd_unload_ta.session_id = ras_session_id;
-}
-
 static int psp_ras_unload(struct psp_context *psp)
 {
        int ret;
@@ -702,7 +675,7 @@ static int psp_ras_unload(struct psp_context *psp)
        if (!cmd)
                return -ENOMEM;
 
-       psp_prep_ras_ta_unload_cmd_buf(cmd, psp->ras.session_id);
+       psp_prep_ta_unload_cmd_buf(cmd, psp->ras.session_id);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd,
                        psp->fence_buf_mc_addr);
@@ -712,40 +685,15 @@ static int psp_ras_unload(struct psp_context *psp)
        return ret;
 }
 
-static void psp_prep_ras_ta_invoke_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-               uint32_t ta_cmd_id,
-               uint32_t ras_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_INVOKE_CMD;
-       cmd->cmd.cmd_invoke_cmd.session_id = ras_session_id;
-       cmd->cmd.cmd_invoke_cmd.ta_cmd_id = ta_cmd_id;
-       /* Note: cmd_invoke_cmd.buf is not used for now */
-}
-
 int psp_ras_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
 {
-       int ret;
-       struct psp_gfx_cmd_resp *cmd;
-
        /*
         * TODO: bypass the loading in sriov for now
         */
        if (amdgpu_sriov_vf(psp->adev))
                return 0;
 
-       cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
-       if (!cmd)
-               return -ENOMEM;
-
-       psp_prep_ras_ta_invoke_cmd_buf(cmd, ta_cmd_id,
-                       psp->ras.session_id);
-
-       ret = psp_cmd_submit_buf(psp, NULL, cmd,
-                       psp->fence_buf_mc_addr);
-
-       kfree(cmd);
-
-       return ret;
+       return psp_ta_invoke(psp, ta_cmd_id, psp->ras.session_id);
 }
 
 int psp_ras_enable_features(struct psp_context *psp,
@@ -791,7 +739,7 @@ static int psp_ras_terminate(struct psp_context *psp)
        if (ret)
                return ret;
 
-       psp->ras.ras_initialized = 0;
+       psp->ras.ras_initialized = false;
 
        /* free ras shared memory */
        amdgpu_bo_free_kernel(&psp->ras.ras_shared_bo,
@@ -832,24 +780,6 @@ static int psp_ras_initialize(struct psp_context *psp)
 // ras end
 
 // HDCP start
-static void psp_prep_hdcp_ta_load_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                         uint64_t hdcp_ta_mc,
-                                         uint64_t hdcp_mc_shared,
-                                         uint32_t hdcp_ta_size,
-                                         uint32_t shared_size)
-{
-       cmd->cmd_id = GFX_CMD_ID_LOAD_TA;
-       cmd->cmd.cmd_load_ta.app_phy_addr_lo = lower_32_bits(hdcp_ta_mc);
-       cmd->cmd.cmd_load_ta.app_phy_addr_hi = upper_32_bits(hdcp_ta_mc);
-       cmd->cmd.cmd_load_ta.app_len = hdcp_ta_size;
-
-       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_lo =
-               lower_32_bits(hdcp_mc_shared);
-       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_hi =
-               upper_32_bits(hdcp_mc_shared);
-       cmd->cmd.cmd_load_ta.cmd_buf_len = shared_size;
-}
-
 static int psp_hdcp_init_shared_buf(struct psp_context *psp)
 {
        int ret;
@@ -886,15 +816,16 @@ static int psp_hdcp_load(struct psp_context *psp)
        memcpy(psp->fw_pri_buf, psp->ta_hdcp_start_addr,
               psp->ta_hdcp_ucode_size);
 
-       psp_prep_hdcp_ta_load_cmd_buf(cmd, psp->fw_pri_mc_addr,
-                                     psp->hdcp_context.hdcp_shared_mc_addr,
-                                     psp->ta_hdcp_ucode_size,
-                                     PSP_HDCP_SHARED_MEM_SIZE);
+       psp_prep_ta_load_cmd_buf(cmd,
+                                psp->fw_pri_mc_addr,
+                                psp->ta_hdcp_ucode_size,
+                                psp->hdcp_context.hdcp_shared_mc_addr,
+                                PSP_HDCP_SHARED_MEM_SIZE);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
 
        if (!ret) {
-               psp->hdcp_context.hdcp_initialized = 1;
+               psp->hdcp_context.hdcp_initialized = true;
                psp->hdcp_context.session_id = cmd->resp.session_id;
        }
 
@@ -930,12 +861,6 @@ static int psp_hdcp_initialize(struct psp_context *psp)
 
        return 0;
 }
-static void psp_prep_hdcp_ta_unload_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                           uint32_t hdcp_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_UNLOAD_TA;
-       cmd->cmd.cmd_unload_ta.session_id = hdcp_session_id;
-}
 
 static int psp_hdcp_unload(struct psp_context *psp)
 {
@@ -952,7 +877,7 @@ static int psp_hdcp_unload(struct psp_context *psp)
        if (!cmd)
                return -ENOMEM;
 
-       psp_prep_hdcp_ta_unload_cmd_buf(cmd, psp->hdcp_context.session_id);
+       psp_prep_ta_unload_cmd_buf(cmd, psp->hdcp_context.session_id);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
 
@@ -961,39 +886,15 @@ static int psp_hdcp_unload(struct psp_context *psp)
        return ret;
 }
 
-static void psp_prep_hdcp_ta_invoke_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                           uint32_t ta_cmd_id,
-                                           uint32_t hdcp_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_INVOKE_CMD;
-       cmd->cmd.cmd_invoke_cmd.session_id = hdcp_session_id;
-       cmd->cmd.cmd_invoke_cmd.ta_cmd_id = ta_cmd_id;
-       /* Note: cmd_invoke_cmd.buf is not used for now */
-}
-
 int psp_hdcp_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
 {
-       int ret;
-       struct psp_gfx_cmd_resp *cmd;
-
        /*
         * TODO: bypass the loading in sriov for now
         */
        if (amdgpu_sriov_vf(psp->adev))
                return 0;
 
-       cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
-       if (!cmd)
-               return -ENOMEM;
-
-       psp_prep_hdcp_ta_invoke_cmd_buf(cmd, ta_cmd_id,
-                                       psp->hdcp_context.session_id);
-
-       ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
-
-       kfree(cmd);
-
-       return ret;
+       return psp_ta_invoke(psp, ta_cmd_id, psp->hdcp_context.session_id);
 }
 
 static int psp_hdcp_terminate(struct psp_context *psp)
@@ -1013,7 +914,7 @@ static int psp_hdcp_terminate(struct psp_context *psp)
        if (ret)
                return ret;
 
-       psp->hdcp_context.hdcp_initialized = 0;
+       psp->hdcp_context.hdcp_initialized = false;
 
        /* free hdcp shared memory */
        amdgpu_bo_free_kernel(&psp->hdcp_context.hdcp_shared_bo,
@@ -1025,22 +926,6 @@ static int psp_hdcp_terminate(struct psp_context *psp)
 // HDCP end
 
 // DTM start
-static void psp_prep_dtm_ta_load_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                        uint64_t dtm_ta_mc,
-                                        uint64_t dtm_mc_shared,
-                                        uint32_t dtm_ta_size,
-                                        uint32_t shared_size)
-{
-       cmd->cmd_id = GFX_CMD_ID_LOAD_TA;
-       cmd->cmd.cmd_load_ta.app_phy_addr_lo = lower_32_bits(dtm_ta_mc);
-       cmd->cmd.cmd_load_ta.app_phy_addr_hi = upper_32_bits(dtm_ta_mc);
-       cmd->cmd.cmd_load_ta.app_len = dtm_ta_size;
-
-       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_lo = lower_32_bits(dtm_mc_shared);
-       cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_hi = upper_32_bits(dtm_mc_shared);
-       cmd->cmd.cmd_load_ta.cmd_buf_len = shared_size;
-}
-
 static int psp_dtm_init_shared_buf(struct psp_context *psp)
 {
        int ret;
@@ -1076,15 +961,16 @@ static int psp_dtm_load(struct psp_context *psp)
        memset(psp->fw_pri_buf, 0, PSP_1_MEG);
        memcpy(psp->fw_pri_buf, psp->ta_dtm_start_addr, psp->ta_dtm_ucode_size);
 
-       psp_prep_dtm_ta_load_cmd_buf(cmd, psp->fw_pri_mc_addr,
-                                    psp->dtm_context.dtm_shared_mc_addr,
-                                    psp->ta_dtm_ucode_size,
-                                    PSP_DTM_SHARED_MEM_SIZE);
+       psp_prep_ta_load_cmd_buf(cmd,
+                                psp->fw_pri_mc_addr,
+                                psp->ta_dtm_ucode_size,
+                                psp->dtm_context.dtm_shared_mc_addr,
+                                PSP_DTM_SHARED_MEM_SIZE);
 
        ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
 
        if (!ret) {
-               psp->dtm_context.dtm_initialized = 1;
+               psp->dtm_context.dtm_initialized = true;
                psp->dtm_context.session_id = cmd->resp.session_id;
        }
 
@@ -1122,39 +1008,15 @@ static int psp_dtm_initialize(struct psp_context *psp)
        return 0;
 }
 
-static void psp_prep_dtm_ta_invoke_cmd_buf(struct psp_gfx_cmd_resp *cmd,
-                                          uint32_t ta_cmd_id,
-                                          uint32_t dtm_session_id)
-{
-       cmd->cmd_id = GFX_CMD_ID_INVOKE_CMD;
-       cmd->cmd.cmd_invoke_cmd.session_id = dtm_session_id;
-       cmd->cmd.cmd_invoke_cmd.ta_cmd_id = ta_cmd_id;
-       /* Note: cmd_invoke_cmd.buf is not used for now */
-}
-
 int psp_dtm_invoke(struct psp_context *psp, uint32_t ta_cmd_id)
 {
-       int ret;
-       struct psp_gfx_cmd_resp *cmd;
-
        /*
         * TODO: bypass the loading in sriov for now
         */
        if (amdgpu_sriov_vf(psp->adev))
                return 0;
 
-       cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
-       if (!cmd)
-               return -ENOMEM;
-
-       psp_prep_dtm_ta_invoke_cmd_buf(cmd, ta_cmd_id,
-                                      psp->dtm_context.session_id);
-
-       ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr);
-
-       kfree(cmd);
-
-       return ret;
+       return psp_ta_invoke(psp, ta_cmd_id, psp->dtm_context.session_id);
 }
 
 static int psp_dtm_terminate(struct psp_context *psp)
@@ -1174,7 +1036,7 @@ static int psp_dtm_terminate(struct psp_context *psp)
        if (ret)
                return ret;
 
-       psp->dtm_context.dtm_initialized = 0;
+       psp->dtm_context.dtm_initialized = false;
 
        /* free hdcp shared memory */
        amdgpu_bo_free_kernel(&psp->dtm_context.dtm_shared_bo,
@@ -1310,6 +1172,9 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
        case AMDGPU_UCODE_ID_VCN:
                *type = GFX_FW_TYPE_VCN;
                break;
+       case AMDGPU_UCODE_ID_VCN1:
+               *type = GFX_FW_TYPE_VCN1;
+               break;
        case AMDGPU_UCODE_ID_DMCU_ERAM:
                *type = GFX_FW_TYPE_DMCU_ERAM;
                break;
@@ -1454,7 +1319,8 @@ out:
                     || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_G
                    || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL
                    || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM
-                   || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM))
+                   || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM
+                   || ucode->ucode_id == AMDGPU_UCODE_ID_SMC))
                        /*skip ucode loading in SRIOV VF */
                        continue;
 
@@ -1472,7 +1338,7 @@ out:
 
                /* Start rlc autoload after psp recieved all the gfx firmware */
                if (psp->autoload_supported && ucode->ucode_id == (amdgpu_sriov_vf(adev) ?
-                   AMDGPU_UCODE_ID_CP_MEC2 : AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM)) {
+                   AMDGPU_UCODE_ID_CP_MEC2 : AMDGPU_UCODE_ID_RLC_G)) {
                        ret = psp_rlc_autoload(psp);
                        if (ret) {
                                DRM_ERROR("Failed to start rlc autoload\n");
@@ -1503,16 +1369,13 @@ static int psp_load_fw(struct amdgpu_device *adev)
        if (!psp->cmd)
                return -ENOMEM;
 
-       /* this fw pri bo is not used under SRIOV */
-       if (!amdgpu_sriov_vf(psp->adev)) {
-               ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
-                                             AMDGPU_GEM_DOMAIN_GTT,
-                                             &psp->fw_pri_bo,
-                                             &psp->fw_pri_mc_addr,
-                                             &psp->fw_pri_buf);
-               if (ret)
-                       goto failed;
-       }
+       ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
+                                       AMDGPU_GEM_DOMAIN_GTT,
+                                       &psp->fw_pri_bo,
+                                       &psp->fw_pri_mc_addr,
+                                       &psp->fw_pri_buf);
+       if (ret)
+               goto failed;
 
        ret = amdgpu_bo_create_kernel(adev, PSP_FENCE_BUFFER_SIZE, PAGE_SIZE,
                                        AMDGPU_GEM_DOMAIN_VRAM,
index 5f8fd3e..3265487 100644
@@ -202,7 +202,6 @@ struct psp_memory_training_context {
 
        /*vram offset of the p2c training data*/
        u64 p2c_train_data_offset;
-       struct amdgpu_bo *p2c_bo;
 
        /*vram offset of the c2p training data*/
        u64 c2p_train_data_offset;
index 04394c4..96fc538 100644
@@ -315,7 +315,7 @@ static ssize_t amdgpu_ras_debugfs_ctrl_write(struct file *f, const char __user *
        default:
                ret = -EINVAL;
                break;
-       };
+       }
 
        if (ret)
                return -EINVAL;
@@ -1311,6 +1311,7 @@ static int amdgpu_ras_badpages_read(struct amdgpu_device *adev,
        data = con->eh_data;
        if (!data || data->count == 0) {
                *bps = NULL;
+               ret = -EINVAL;
                goto out;
        }
 
@@ -1870,7 +1871,7 @@ void amdgpu_ras_resume(struct amdgpu_device *adev)
                 * See feature_enable_on_boot
                 */
                amdgpu_ras_disable_all_features(adev, 1);
-               amdgpu_ras_reset_gpu(adev, 0);
+               amdgpu_ras_reset_gpu(adev);
        }
 }
 
@@ -1933,6 +1934,6 @@ void amdgpu_ras_global_ras_isr(struct amdgpu_device *adev)
        if (atomic_cmpxchg(&amdgpu_ras_in_intr, 0, 1) == 0) {
                DRM_WARN("RAS event of type ERREVENT_ATHUB_INTERRUPT detected!\n");
 
-               amdgpu_ras_reset_gpu(adev, false);
+               amdgpu_ras_reset_gpu(adev);
        }
 }
index d4ade47..a5fe29a 100644
@@ -494,8 +494,7 @@ int amdgpu_ras_add_bad_pages(struct amdgpu_device *adev,
 
 int amdgpu_ras_reserve_bad_pages(struct amdgpu_device *adev);
 
-static inline int amdgpu_ras_reset_gpu(struct amdgpu_device *adev,
-               bool is_baco)
+static inline int amdgpu_ras_reset_gpu(struct amdgpu_device *adev)
 {
        struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
 
index 6010999..a2ee30b 100644
@@ -160,7 +160,7 @@ int amdgpu_sdma_process_ras_data_cb(struct amdgpu_device *adev,
                struct amdgpu_iv_entry *entry)
 {
        kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
-       amdgpu_ras_reset_gpu(adev, 0);
+       amdgpu_ras_reset_gpu(adev);
 
        return AMDGPU_RAS_SUCCESS;
 }
index 761ff8b..346dcb1 100644
@@ -52,6 +52,8 @@ struct amdgpu_sdma_instance {
 
 struct amdgpu_sdma {
        struct amdgpu_sdma_instance instance[AMDGPU_MAX_SDMA_INSTANCES];
+       struct drm_gpu_scheduler    *sdma_sched[AMDGPU_MAX_SDMA_INSTANCES];
+       uint32_t                    num_sdma_sched;
        struct amdgpu_irq_src   trap_irq;
        struct amdgpu_irq_src   illegal_inst_irq;
        struct amdgpu_irq_src   ecc_irq;
index 445de59..3114d8a 100644
@@ -1714,12 +1714,17 @@ static int amdgpu_ttm_training_reserve_vram_fini(struct amdgpu_device *adev)
        amdgpu_bo_free_kernel(&ctx->c2p_bo, NULL, NULL);
        ctx->c2p_bo = NULL;
 
-       amdgpu_bo_free_kernel(&ctx->p2c_bo, NULL, NULL);
-       ctx->p2c_bo = NULL;
-
        return 0;
 }
 
+static u64 amdgpu_ttm_training_get_c2p_offset(u64 vram_size)
+{
+       if ((vram_size & (SZ_1M - 1)) < (SZ_4K + 1))
+               vram_size -= SZ_1M;
+
+       return ALIGN(vram_size, SZ_1M);
+}
+
 /**
  * amdgpu_ttm_training_reserve_vram_init - create bo vram reservation from memory training
  *
@@ -1738,7 +1743,7 @@ static int amdgpu_ttm_training_reserve_vram_init(struct amdgpu_device *adev)
                return 0;
        }
 
-       ctx->c2p_train_data_offset = adev->fw_vram_usage.mem_train_fb_loc;
+       ctx->c2p_train_data_offset = amdgpu_ttm_training_get_c2p_offset(adev->gmc.mc_vram_size);
        ctx->p2c_train_data_offset = (adev->gmc.mc_vram_size - GDDR6_MEM_TRAINING_OFFSET);
        ctx->train_data_size = GDDR6_MEM_TRAINING_DATA_SIZE_IN_BYTES;
 
@@ -1748,17 +1753,6 @@ static int amdgpu_ttm_training_reserve_vram_init(struct amdgpu_device *adev)
                  ctx->c2p_train_data_offset);
 
        ret = amdgpu_bo_create_kernel_at(adev,
-                                        ctx->p2c_train_data_offset,
-                                        ctx->train_data_size,
-                                        AMDGPU_GEM_DOMAIN_VRAM,
-                                        &ctx->p2c_bo,
-                                        NULL);
-       if (ret) {
-               DRM_ERROR("alloc p2c_bo failed(%d)!\n", ret);
-               goto Err_out;
-       }
-
-       ret = amdgpu_bo_create_kernel_at(adev,
                                         ctx->c2p_train_data_offset,
                                         ctx->train_data_size,
                                         AMDGPU_GEM_DOMAIN_VRAM,
@@ -1766,15 +1760,12 @@ static int amdgpu_ttm_training_reserve_vram_init(struct amdgpu_device *adev)
                                         NULL);
        if (ret) {
                DRM_ERROR("alloc c2p_bo failed(%d)!\n", ret);
-               goto Err_out;
+               amdgpu_ttm_training_reserve_vram_fini(adev);
+               return ret;
        }
 
        ctx->init = PSP_MEM_TRAIN_RESERVE_SUCCESS;
        return 0;
-
-Err_out:
-       amdgpu_ttm_training_reserve_vram_fini(adev);
-       return ret;
 }
 
 /**
@@ -1987,11 +1978,13 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 
        if (enable) {
                struct amdgpu_ring *ring;
-               struct drm_sched_rq *rq;
+               struct drm_gpu_scheduler *sched;
 
                ring = adev->mman.buffer_funcs_ring;
-               rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_KERNEL];
-               r = drm_sched_entity_init(&adev->mman.entity, &rq, 1, NULL);
+               sched = &ring->sched;
+               r = drm_sched_entity_init(&adev->mman.entity,
+                                         DRM_SCHED_PRIORITY_KERNEL, &sched,
+                                         1, NULL);
                if (r) {
                        DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
                                  r);
index eaf2d5b..b0e6564 100644 (file)
@@ -300,10 +300,10 @@ enum AMDGPU_UCODE_ID {
        AMDGPU_UCODE_ID_CP_MEC2_JT,
        AMDGPU_UCODE_ID_CP_MES,
        AMDGPU_UCODE_ID_CP_MES_DATA,
-       AMDGPU_UCODE_ID_RLC_G,
        AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL,
        AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM,
        AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM,
+       AMDGPU_UCODE_ID_RLC_G,
        AMDGPU_UCODE_ID_STORAGE,
        AMDGPU_UCODE_ID_SMC,
        AMDGPU_UCODE_ID_UVD,
index d4fb9cf..f4d4085 100644 (file)
@@ -95,13 +95,6 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
 {
        struct ras_err_data *err_data = (struct ras_err_data *)ras_error_status;
 
-       /* When “Full RAS” is enabled, the per-IP interrupt sources should
-        * be disabled and the driver should only look for the aggregated
-        * interrupt via sync flood
-        */
-       if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX))
-               return AMDGPU_RAS_SUCCESS;
-
        kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
        if (adev->umc.funcs &&
            adev->umc.funcs->query_ras_error_count)
@@ -113,6 +106,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
                err_data->err_addr =
                        kcalloc(adev->umc.max_ras_err_cnt_per_query,
                                sizeof(struct eeprom_table_record), GFP_KERNEL);
+
                /* still call query_ras_error_address to clear error status
                 * even NOMEM error is encountered
                 */
@@ -132,7 +126,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
                                                err_data->err_addr_cnt))
                        DRM_WARN("Failed to add ras bad page!\n");
 
-               amdgpu_ras_reset_gpu(adev, 0);
+               amdgpu_ras_reset_gpu(adev);
        }
 
        kfree(err_data->err_addr);
index 3283032..a615a1e 100644 (file)
 #ifndef __AMDGPU_UMC_H__
 #define __AMDGPU_UMC_H__
 
-/* implement 64 bits REG operations via 32 bits interface */
-#define RREG64_UMC(reg)        (RREG32(reg) | \
-                               ((uint64_t)RREG32((reg) + 1) << 32))
-#define WREG64_UMC(reg, v)     \
-       do {    \
-               WREG32((reg), lower_32_bits(v));        \
-               WREG32((reg) + 1, upper_32_bits(v));    \
-       } while (0)
-
-/*
- * void (*func)(struct amdgpu_device *adev, struct ras_err_data *err_data,
- *                             uint32_t umc_reg_offset, uint32_t channel_index)
- */
-#define amdgpu_umc_for_each_channel(func)      \
-       struct ras_err_data *err_data = (struct ras_err_data *)ras_error_status;        \
-       uint32_t umc_inst, channel_inst, umc_reg_offset, channel_index; \
-       for (umc_inst = 0; umc_inst < adev->umc.umc_inst_num; umc_inst++) {     \
-               /* enable the index mode to query eror count per channel */     \
-               adev->umc.funcs->enable_umc_index_mode(adev, umc_inst); \
-               for (channel_inst = 0;  \
-                       channel_inst < adev->umc.channel_inst_num;      \
-                       channel_inst++) {       \
-                       /* calc the register offset according to channel instance */    \
-                       umc_reg_offset = adev->umc.channel_offs * channel_inst; \
-                       /* get channel index of interleaved memory */   \
-                       channel_index = adev->umc.channel_idx_tbl[      \
-                               umc_inst * adev->umc.channel_inst_num + channel_inst];  \
-                       (func)(adev, err_data, umc_reg_offset, channel_index);  \
-               }       \
-       }       \
-       adev->umc.funcs->disable_umc_index_mode(adev);
-
 struct amdgpu_umc_funcs {
        void (*err_cnt_init)(struct amdgpu_device *adev);
        int (*ras_late_init)(struct amdgpu_device *adev);
@@ -60,9 +28,6 @@ struct amdgpu_umc_funcs {
                                        void *ras_error_status);
        void (*query_ras_error_address)(struct amdgpu_device *adev,
                                        void *ras_error_status);
-       void (*enable_umc_index_mode)(struct amdgpu_device *adev,
-                                       uint32_t umc_instance);
-       void (*disable_umc_index_mode)(struct amdgpu_device *adev);
        void (*init_registers)(struct amdgpu_device *adev);
 };
 
index d587ffe..a92f3b1 100644 (file)
@@ -330,12 +330,13 @@ int amdgpu_uvd_sw_fini(struct amdgpu_device *adev)
 int amdgpu_uvd_entity_init(struct amdgpu_device *adev)
 {
        struct amdgpu_ring *ring;
-       struct drm_sched_rq *rq;
+       struct drm_gpu_scheduler *sched;
        int r;
 
        ring = &adev->uvd.inst[0].ring;
-       rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-       r = drm_sched_entity_init(&adev->uvd.entity, &rq, 1, NULL);
+       sched = &ring->sched;
+       r = drm_sched_entity_init(&adev->uvd.entity, DRM_SCHED_PRIORITY_NORMAL,
+                                 &sched, 1, NULL);
        if (r) {
                DRM_ERROR("Failed setting up UVD kernel entity.\n");
                return r;
index 46b590a..ceb0dbf 100644 (file)
@@ -240,12 +240,13 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
 int amdgpu_vce_entity_init(struct amdgpu_device *adev)
 {
        struct amdgpu_ring *ring;
-       struct drm_sched_rq *rq;
+       struct drm_gpu_scheduler *sched;
        int r;
 
        ring = &adev->vce.ring[0];
-       rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-       r = drm_sched_entity_init(&adev->vce.entity, &rq, 1, NULL);
+       sched = &ring->sched;
+       r = drm_sched_entity_init(&adev->vce.entity, DRM_SCHED_PRIORITY_NORMAL,
+                                 &sched, 1, NULL);
        if (r != 0) {
                DRM_ERROR("Failed setting up VCE run queue.\n");
                return r;
index 428cfd5..ed106d9 100644 (file)
 #include <linux/module.h>
 #include <linux/pci.h>
 
-#include <drm/drm.h>
-
 #include "amdgpu.h"
 #include "amdgpu_pm.h"
 #include "amdgpu_vcn.h"
 #include "soc15d.h"
-#include "soc15_common.h"
-
-#include "vcn/vcn_1_0_offset.h"
-#include "vcn/vcn_1_0_sh_mask.h"
-
-/* 1 second timeout */
-#define VCN_IDLE_TIMEOUT       msecs_to_jiffies(1000)
 
 /* Firmware Names */
 #define FIRMWARE_RAVEN         "amdgpu/raven_vcn.bin"
@@ -294,6 +285,7 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work)
        for (j = 0; j < adev->vcn.num_vcn_inst; ++j) {
                if (adev->vcn.harvest_config & (1 << j))
                        continue;
+
                for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
                        fence[j] += amdgpu_fence_count_emitted(&adev->vcn.inst[j].ring_enc[i]);
                }
@@ -306,26 +298,17 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work)
                        else
                                new_state.fw_based = VCN_DPG_STATE__UNPAUSE;
 
-                       if (amdgpu_fence_count_emitted(&adev->jpeg.inst[j].ring_dec))
-                               new_state.jpeg = VCN_DPG_STATE__PAUSE;
-                       else
-                               new_state.jpeg = VCN_DPG_STATE__UNPAUSE;
-
                        adev->vcn.pause_dpg_mode(adev, &new_state);
                }
 
-               fence[j] += amdgpu_fence_count_emitted(&adev->jpeg.inst[j].ring_dec);
                fence[j] += amdgpu_fence_count_emitted(&adev->vcn.inst[j].ring_dec);
                fences += fence[j];
        }
 
        if (fences == 0) {
                amdgpu_gfx_off_ctrl(adev, true);
-               if (adev->asic_type < CHIP_ARCTURUS && adev->pm.dpm_enabled)
-                       amdgpu_dpm_enable_uvd(adev, false);
-               else
-                       amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
-                                                              AMD_PG_STATE_GATE);
+               amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
+                      AMD_PG_STATE_GATE);
        } else {
                schedule_delayed_work(&adev->vcn.idle_work, VCN_IDLE_TIMEOUT);
        }
@@ -338,11 +321,8 @@ void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
 
        if (set_clocks) {
                amdgpu_gfx_off_ctrl(adev, false);
-               if (adev->asic_type < CHIP_ARCTURUS && adev->pm.dpm_enabled)
-                       amdgpu_dpm_enable_uvd(adev, true);
-               else
-                       amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
-                                                              AMD_PG_STATE_UNGATE);
+               amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
+                      AMD_PG_STATE_UNGATE);
        }
 
        if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)    {
@@ -358,15 +338,8 @@ void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
                else
                        new_state.fw_based = VCN_DPG_STATE__UNPAUSE;
 
-               if (amdgpu_fence_count_emitted(&adev->jpeg.inst[ring->me].ring_dec))
-                       new_state.jpeg = VCN_DPG_STATE__PAUSE;
-               else
-                       new_state.jpeg = VCN_DPG_STATE__UNPAUSE;
-
                if (ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC)
                        new_state.fw_based = VCN_DPG_STATE__PAUSE;
-               else if (ring->funcs->type == AMDGPU_RING_TYPE_VCN_JPEG)
-                       new_state.jpeg = VCN_DPG_STATE__PAUSE;
 
                adev->vcn.pause_dpg_mode(adev, &new_state);
        }
@@ -518,9 +491,14 @@ static int amdgpu_vcn_dec_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
 
 int amdgpu_vcn_dec_ring_test_ib(struct amdgpu_ring *ring, long timeout)
 {
+       struct amdgpu_device *adev = ring->adev;
        struct dma_fence *fence;
        long r;
 
+       /* temporarily disable ib test for sriov */
+       if (amdgpu_sriov_vf(adev))
+               return 0;
+
        r = amdgpu_vcn_dec_get_create_msg(ring, 1, NULL);
        if (r)
                goto error;
@@ -676,10 +654,15 @@ err:
 
 int amdgpu_vcn_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
 {
+       struct amdgpu_device *adev = ring->adev;
        struct dma_fence *fence = NULL;
        struct amdgpu_bo *bo = NULL;
        long r;
 
+       /* temporarily disable ib test for sriov */
+       if (amdgpu_sriov_vf(adev))
+               return 0;
+
        r = amdgpu_bo_create_reserved(ring->adev, 128 * 1024, PAGE_SIZE,
                                      AMDGPU_GEM_DOMAIN_VRAM,
                                      &bo, NULL, NULL);
index 402a504..e6dee82 100644 (file)
@@ -31,6 +31,7 @@
 #define AMDGPU_VCN_MAX_ENC_RINGS       3
 
 #define AMDGPU_MAX_VCN_INSTANCES       2
+#define AMDGPU_MAX_VCN_ENC_RINGS  (AMDGPU_VCN_MAX_ENC_RINGS * AMDGPU_MAX_VCN_INSTANCES)
 
 #define AMDGPU_VCN_HARVEST_VCN0 (1 << 0)
 #define AMDGPU_VCN_HARVEST_VCN1 (1 << 1)
@@ -56,6 +57,9 @@
 #define VCN_VID_IP_ADDRESS_2_0         0x0
 #define VCN_AON_IP_ADDRESS_2_0         0x30000
 
+/* 1 second timeout */
+#define VCN_IDLE_TIMEOUT       msecs_to_jiffies(1000)
+
 #define RREG32_SOC15_DPG_MODE(ip, inst, reg, mask, sram_sel)                           \
        ({      WREG32_SOC15(ip, inst, mmUVD_DPG_LMA_MASK, mask);                       \
                WREG32_SOC15(ip, inst, mmUVD_DPG_LMA_CTL,                               \
@@ -186,8 +190,12 @@ struct amdgpu_vcn {
        uint32_t                *dpg_sram_curr_addr;
 
        uint8_t num_vcn_inst;
-       struct amdgpu_vcn_inst  inst[AMDGPU_MAX_VCN_INSTANCES];
-       struct amdgpu_vcn_reg   internal;
+       struct amdgpu_vcn_inst   inst[AMDGPU_MAX_VCN_INSTANCES];
+       struct amdgpu_vcn_reg    internal;
+       struct drm_gpu_scheduler *vcn_enc_sched[AMDGPU_MAX_VCN_ENC_RINGS];
+       struct drm_gpu_scheduler *vcn_dec_sched[AMDGPU_MAX_VCN_INSTANCES];
+       uint32_t                 num_vcn_enc_sched;
+       uint32_t                 num_vcn_dec_sched;
 
        unsigned        harvest_config;
        int (*pause_dpg_mode)(struct amdgpu_device *adev,
index 8f26504..4dc75ed 100644 (file)
@@ -2753,14 +2753,17 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
        spin_lock_init(&vm->invalidated_lock);
        INIT_LIST_HEAD(&vm->freed);
 
+
        /* create scheduler entities for page table updates */
-       r = drm_sched_entity_init(&vm->direct, adev->vm_manager.vm_pte_rqs,
-                                 adev->vm_manager.vm_pte_num_rqs, NULL);
+       r = drm_sched_entity_init(&vm->direct, DRM_SCHED_PRIORITY_NORMAL,
+                                 adev->vm_manager.vm_pte_scheds,
+                                 adev->vm_manager.vm_pte_num_scheds, NULL);
        if (r)
                return r;
 
-       r = drm_sched_entity_init(&vm->delayed, adev->vm_manager.vm_pte_rqs,
-                                 adev->vm_manager.vm_pte_num_rqs, NULL);
+       r = drm_sched_entity_init(&vm->delayed, DRM_SCHED_PRIORITY_NORMAL,
+                                 adev->vm_manager.vm_pte_scheds,
+                                 adev->vm_manager.vm_pte_num_scheds, NULL);
        if (r)
                goto error_free_direct;
 
index 7e0eb36..fade4f4 100644 (file)
@@ -327,8 +327,8 @@ struct amdgpu_vm_manager {
        u64                                     vram_base_offset;
        /* vm pte handling */
        const struct amdgpu_vm_pte_funcs        *vm_pte_funcs;
-       struct drm_sched_rq                     *vm_pte_rqs[AMDGPU_MAX_RINGS];
-       unsigned                                vm_pte_num_rqs;
+       struct drm_gpu_scheduler                *vm_pte_scheds[AMDGPU_MAX_RINGS];
+       unsigned                                vm_pte_num_scheds;
        struct amdgpu_ring                      *page_fault;
 
        /* partial resident texture handling */
index 61d13d8..5cf920d 100644 (file)
@@ -261,6 +261,7 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev, int lo
        INIT_LIST_HEAD(&tmp->device_list);
        mutex_init(&tmp->hive_lock);
        mutex_init(&tmp->reset_lock);
+       task_barrier_init(&tmp->tb);
 
        if (lock)
                mutex_lock(&tmp->hive_lock);
@@ -408,6 +409,8 @@ int amdgpu_xgmi_add_device(struct amdgpu_device *adev)
        top_info->num_nodes = count;
        hive->number_devices = count;
 
+       task_barrier_add_task(&hive->tb);
+
        if (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_PSP)) {
                list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
                        /* update node list for other device in the hive */
@@ -470,6 +473,7 @@ void amdgpu_xgmi_remove_device(struct amdgpu_device *adev)
                mutex_destroy(&hive->hive_lock);
                mutex_destroy(&hive->reset_lock);
        } else {
+               task_barrier_rem_task(&hive->tb);
                amdgpu_xgmi_sysfs_rem_dev_info(adev, hive);
                mutex_unlock(&hive->hive_lock);
        }
index bbf504f..74011fb 100644 (file)
@@ -22,6 +22,7 @@
 #ifndef __AMDGPU_XGMI_H__
 #define __AMDGPU_XGMI_H__
 
+#include <drm/task_barrier.h>
 #include "amdgpu_psp.h"
 
 struct amdgpu_hive_info {
@@ -33,6 +34,7 @@ struct amdgpu_hive_info {
        struct device_attribute dev_attr;
        struct amdgpu_device *adev;
        int pstate; /*0 -- low , 1 -- high , -1 unknown*/
+       struct task_barrier tb;
 };
 
 struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev, int lock);
index c45304f..580d3f9 100644 (file)
@@ -228,7 +228,7 @@ static void cik_sdma_ring_emit_ib(struct amdgpu_ring *ring,
        u32 extra_bits = vmid & 0xf;
 
        /* IB packet must end on a 8 DW boundary */
-       cik_sdma_ring_insert_nop(ring, (12 - (lower_32_bits(ring->wptr) & 7)) % 8);
+       cik_sdma_ring_insert_nop(ring, (4 - lower_32_bits(ring->wptr)) & 7);
 
        amdgpu_ring_write(ring, SDMA_PACKET(SDMA_OPCODE_INDIRECT_BUFFER, 0, extra_bits));
        amdgpu_ring_write(ring, ib->gpu_addr & 0xffffffe0); /* base must be 32 byte aligned */
@@ -811,7 +811,7 @@ static void cik_sdma_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib)
        u32 pad_count;
        int i;
 
-       pad_count = (8 - (ib->length_dw & 0x7)) % 8;
+       pad_count = (-ib->length_dw) & 7;
        for (i = 0; i < pad_count; i++)
                if (sdma && sdma->burst_nop && (i == 0))
                        ib->ptr[ib->length_dw++] =
@@ -1372,16 +1372,14 @@ static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
 
 static void cik_sdma_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-       struct drm_gpu_scheduler *sched;
        unsigned i;
 
        adev->vm_manager.vm_pte_funcs = &cik_sdma_vm_pte_funcs;
        for (i = 0; i < adev->sdma.num_instances; i++) {
-               sched = &adev->sdma.instance[i].ring.sched;
-               adev->vm_manager.vm_pte_rqs[i] =
-                       &sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+               adev->vm_manager.vm_pte_scheds[i] =
+                       &adev->sdma.instance[i].ring.sched;
        }
-       adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+       adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
 }
 
 const struct amdgpu_ip_block_version cik_sdma_ip_block =
index 4043ebc..2f884d9 100644 (file)
@@ -183,6 +183,61 @@ static void df_v3_6_perfmon_wreg(struct amdgpu_device *adev, uint32_t lo_addr,
        spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
 }
 
+/* same as perfmon_wreg but return status on write value check */
+static int df_v3_6_perfmon_arm_with_status(struct amdgpu_device *adev,
+                                         uint32_t lo_addr, uint32_t lo_val,
+                                         uint32_t hi_addr, uint32_t  hi_val)
+{
+       unsigned long flags, address, data;
+       uint32_t lo_val_rb, hi_val_rb;
+
+       address = adev->nbio.funcs->get_pcie_index_offset(adev);
+       data = adev->nbio.funcs->get_pcie_data_offset(adev);
+
+       spin_lock_irqsave(&adev->pcie_idx_lock, flags);
+       WREG32(address, lo_addr);
+       WREG32(data, lo_val);
+       WREG32(address, hi_addr);
+       WREG32(data, hi_val);
+
+       WREG32(address, lo_addr);
+       lo_val_rb = RREG32(data);
+       WREG32(address, hi_addr);
+       hi_val_rb = RREG32(data);
+       spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
+
+       if (!(lo_val == lo_val_rb && hi_val == hi_val_rb))
+               return -EBUSY;
+
+       return 0;
+}
+
+
+/*
+ * retry arming counters every 100 usecs within 1 millisecond interval.
+ * if retry fails after time out, return error.
+ */
+#define ARM_RETRY_USEC_TIMEOUT 1000
+#define ARM_RETRY_USEC_INTERVAL        100
+static int df_v3_6_perfmon_arm_with_retry(struct amdgpu_device *adev,
+                                         uint32_t lo_addr, uint32_t lo_val,
+                                         uint32_t hi_addr, uint32_t  hi_val)
+{
+       int countdown = ARM_RETRY_USEC_TIMEOUT;
+
+       while (countdown) {
+
+               if (!df_v3_6_perfmon_arm_with_status(adev, lo_addr, lo_val,
+                                                    hi_addr, hi_val))
+                       break;
+
+               countdown -= ARM_RETRY_USEC_INTERVAL;
+               udelay(ARM_RETRY_USEC_INTERVAL);
+       }
+
+       return countdown > 0 ? 0 : -ETIME;
+}
+
 /* get the number of df counters available */
 static ssize_t df_v3_6_get_df_cntr_avail(struct device *dev,
                struct device_attribute *attr,
@@ -334,20 +389,20 @@ static void df_v3_6_pmc_get_addr(struct amdgpu_device *adev,
        switch (target_cntr) {
 
        case 0:
-               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo0 : smnPerfMonCtrLo0;
-               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi0 : smnPerfMonCtrHi0;
+               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo4 : smnPerfMonCtrLo4;
+               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi4 : smnPerfMonCtrHi4;
                break;
        case 1:
-               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo1 : smnPerfMonCtrLo1;
-               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi1 : smnPerfMonCtrHi1;
+               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo5 : smnPerfMonCtrLo5;
+               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi5 : smnPerfMonCtrHi5;
                break;
        case 2:
-               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo2 : smnPerfMonCtrLo2;
-               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi2 : smnPerfMonCtrHi2;
+               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo6 : smnPerfMonCtrLo6;
+               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi6 : smnPerfMonCtrHi6;
                break;
        case 3:
-               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo3 : smnPerfMonCtrLo3;
-               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi3 : smnPerfMonCtrHi3;
+               *lo_base_addr = is_ctrl ? smnPerfMonCtlLo7 : smnPerfMonCtrLo7;
+               *hi_base_addr = is_ctrl ? smnPerfMonCtlHi7 : smnPerfMonCtrHi7;
                break;
 
        }
@@ -422,6 +477,44 @@ static int df_v3_6_pmc_add_cntr(struct amdgpu_device *adev,
        return -ENOSPC;
 }
 
+#define DEFERRED_ARM_MASK      (1 << 31)
+static int df_v3_6_pmc_set_deferred(struct amdgpu_device *adev,
+                                   uint64_t config, bool is_deferred)
+{
+       int target_cntr;
+
+       target_cntr = df_v3_6_pmc_config_2_cntr(adev, config);
+
+       if (target_cntr < 0)
+               return -EINVAL;
+
+       if (is_deferred)
+               adev->df_perfmon_config_assign_mask[target_cntr] |=
+                                                       DEFERRED_ARM_MASK;
+       else
+               adev->df_perfmon_config_assign_mask[target_cntr] &=
+                                                       ~DEFERRED_ARM_MASK;
+
+       return 0;
+}
+
+static bool df_v3_6_pmc_is_deferred(struct amdgpu_device *adev,
+                                   uint64_t config)
+{
+       int target_cntr;
+
+       target_cntr = df_v3_6_pmc_config_2_cntr(adev, config);
+
+       /*
+        * we never get target_cntr < 0 since this function is only called in
+        * pmc_count for now but we should check anyway.
+        */
+       return (target_cntr >= 0 &&
+                       (adev->df_perfmon_config_assign_mask[target_cntr]
+                       & DEFERRED_ARM_MASK));
+
+}
+
 /* release performance counter */
 static void df_v3_6_pmc_release_cntr(struct amdgpu_device *adev,
                                     uint64_t config)
@@ -451,29 +544,33 @@ static int df_v3_6_pmc_start(struct amdgpu_device *adev, uint64_t config,
                             int is_enable)
 {
        uint32_t lo_base_addr, hi_base_addr, lo_val, hi_val;
-       int ret = 0;
+       int err = 0, ret = 0;
 
        switch (adev->asic_type) {
        case CHIP_VEGA20:
+               if (is_enable)
+                       return df_v3_6_pmc_add_cntr(adev, config);
 
                df_v3_6_reset_perfmon_cntr(adev, config);
 
-               if (is_enable) {
-                       ret = df_v3_6_pmc_add_cntr(adev, config);
-               } else {
-                       ret = df_v3_6_pmc_get_ctrl_settings(adev,
+               ret = df_v3_6_pmc_get_ctrl_settings(adev,
                                        config,
                                        &lo_base_addr,
                                        &hi_base_addr,
                                        &lo_val,
                                        &hi_val);
 
-                       if (ret)
-                               return ret;
+               if (ret)
+                       return ret;
+
+               err = df_v3_6_perfmon_arm_with_retry(adev,
+                                                    lo_base_addr,
+                                                    lo_val,
+                                                    hi_base_addr,
+                                                    hi_val);
 
-                       df_v3_6_perfmon_wreg(adev, lo_base_addr, lo_val,
-                                       hi_base_addr, hi_val);
-               }
+               if (err)
+                       ret = df_v3_6_pmc_set_deferred(adev, config, true);
 
                break;
        default:
@@ -501,7 +598,7 @@ static int df_v3_6_pmc_stop(struct amdgpu_device *adev, uint64_t config,
                if (ret)
                        return ret;
 
-               df_v3_6_perfmon_wreg(adev, lo_base_addr, 0, hi_base_addr, 0);
+               df_v3_6_reset_perfmon_cntr(adev, config);
 
                if (is_disable)
                        df_v3_6_pmc_release_cntr(adev, config);
@@ -518,18 +615,29 @@ static void df_v3_6_pmc_get_count(struct amdgpu_device *adev,
                                  uint64_t config,
                                  uint64_t *count)
 {
-       uint32_t lo_base_addr, hi_base_addr, lo_val, hi_val;
+       uint32_t lo_base_addr, hi_base_addr, lo_val = 0, hi_val = 0;
        *count = 0;
 
        switch (adev->asic_type) {
        case CHIP_VEGA20:
-
                df_v3_6_pmc_get_read_settings(adev, config, &lo_base_addr,
                                      &hi_base_addr);
 
                if ((lo_base_addr == 0) || (hi_base_addr == 0))
                        return;
 
+               /* rearm the counter or throw away count value on failure */
+               if (df_v3_6_pmc_is_deferred(adev, config)) {
+                       int rearm_err = df_v3_6_perfmon_arm_with_status(adev,
+                                                       lo_base_addr, lo_val,
+                                                       hi_base_addr, hi_val);
+
+                       if (rearm_err)
+                               return;
+
+                       df_v3_6_pmc_set_deferred(adev, config, false);
+               }
+
                df_v3_6_perfmon_rreg(adev, lo_base_addr, &lo_val,
                                hi_base_addr, &hi_val);
 
@@ -542,7 +650,6 @@ static void df_v3_6_pmc_get_count(struct amdgpu_device *adev,
                         config, lo_base_addr, hi_base_addr, lo_val, hi_val);
 
                break;
-
        default:
                break;
        }
index 98db252..6bc3b93 100644 (file)
@@ -471,18 +471,10 @@ static int gfx_v10_0_ring_test_ring(struct amdgpu_ring *ring)
                else
                        udelay(1);
        }
-       if (i < adev->usec_timeout) {
-               if (amdgpu_emu_mode == 1)
-                       DRM_INFO("ring test on %d succeeded in %d msecs\n",
-                                ring->idx, i);
-               else
-                       DRM_INFO("ring test on %d succeeded in %d usecs\n",
-                                ring->idx, i);
-       } else {
-               DRM_ERROR("amdgpu: ring %d test failed (scratch(0x%04X)=0x%08X)\n",
-                         ring->idx, scratch, tmp);
-               r = -EINVAL;
-       }
+
+       if (i >= adev->usec_timeout)
+               r = -ETIMEDOUT;
+
        amdgpu_gfx_scratch_free(adev, scratch);
 
        return r;
@@ -532,14 +524,10 @@ static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
        }
 
        tmp = RREG32(scratch);
-       if (tmp == 0xDEADBEEF) {
-               DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+       if (tmp == 0xDEADBEEF)
                r = 0;
-       } else {
-               DRM_ERROR("amdgpu: ib test failed (scratch(0x%04X)=0x%08X)\n",
-                         scratch, tmp);
+       else
                r = -EINVAL;
-       }
 err2:
        amdgpu_ib_free(adev, &ib, NULL);
        dma_fence_put(f);
@@ -588,8 +576,7 @@ static void gfx_v10_0_check_fw_write_wait(struct amdgpu_device *adev)
        }
 
        if (adev->gfx.cp_fw_write_wait == false)
-               DRM_WARN_ONCE("Warning: check cp_fw_version and update it to realize \
-                             GRBM requires 1-cycle delay in cp firmware\n");
+               DRM_WARN_ONCE("CP firmware version too old, please update!");
 }
 
 
@@ -1963,7 +1950,7 @@ static int gfx_v10_0_parse_rlc_toc(struct amdgpu_device *adev)
                rlc_autoload_info[rlc_toc->id].size = rlc_toc->size * 4;
 
                rlc_toc++;
-       };
+       }
 
        return 0;
 }
@@ -3606,23 +3593,16 @@ static int gfx_v10_0_cp_resume(struct amdgpu_device *adev)
 
        for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
                ring = &adev->gfx.gfx_ring[i];
-               DRM_INFO("gfx %d ring me %d pipe %d q %d\n",
-                        i, ring->me, ring->pipe, ring->queue);
-               r = amdgpu_ring_test_ring(ring);
-               if (r) {
-                       ring->sched.ready = false;
+               r = amdgpu_ring_test_helper(ring);
+               if (r)
                        return r;
-               }
        }
 
        for (i = 0; i < adev->gfx.num_compute_rings; i++) {
                ring = &adev->gfx.compute_ring[i];
-               ring->sched.ready = true;
-               DRM_INFO("compute ring %d mec %d pipe %d q %d\n",
-                        i, ring->me, ring->pipe, ring->queue);
-               r = amdgpu_ring_test_ring(ring);
+               r = amdgpu_ring_test_helper(ring);
                if (r)
-                       ring->sched.ready = false;
+                       return r;
        }
 
        return 0;
index 2616f1b..a5492e3 100644 (file)
 
 #include "amdgpu_ras.h"
 
-#include "sdma0/sdma0_4_0_offset.h"
-#include "sdma1/sdma1_4_0_offset.h"
+#include "sdma0/sdma0_4_2_offset.h"
+#include "sdma1/sdma1_4_2_offset.h"
+#include "sdma2/sdma2_4_2_2_offset.h"
+#include "sdma3/sdma3_4_2_2_offset.h"
+#include "sdma4/sdma4_4_2_2_offset.h"
+#include "sdma5/sdma5_4_2_2_offset.h"
+#include "sdma6/sdma6_4_2_2_offset.h"
+#include "sdma7/sdma7_4_2_2_offset.h"
+
 #define GFX9_NUM_GFX_RINGS     1
 #define GFX9_MEC_HPD_SIZE 4096
 #define RLCG_UCODE_LOADING_START_ADDRESS 0x00002000L
@@ -981,8 +988,7 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
            (adev->gfx.mec_feature_version < 46) ||
            (adev->gfx.pfp_fw_version < 0x000000b7) ||
            (adev->gfx.pfp_feature_version < 46))
-               DRM_WARN_ONCE("Warning: check cp_fw_version and update it to realize \
-                             GRBM requires 1-cycle delay in cp firmware\n");
+               DRM_WARN_ONCE("CP firmware version too old, please update!");
 
        switch (adev->asic_type) {
        case CHIP_VEGA10:
@@ -1042,17 +1048,10 @@ static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
        case CHIP_VEGA20:
                break;
        case CHIP_RAVEN:
-               /* Disable GFXOFF on original raven.  There are combinations
-                * of sbios and platforms that are not stable.
-                */
-               if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8))
-                       adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
-               else if (!(adev->rev_id >= 0x8 || adev->pdev->device == 0x15d8)
-                        &&((adev->gfx.rlc_fw_version != 106 &&
-                            adev->gfx.rlc_fw_version < 531) ||
-                           (adev->gfx.rlc_fw_version == 53815) ||
-                           (adev->gfx.rlc_feature_version < 1) ||
-                           !adev->gfx.rlc.is_rlc_v2_1))
+               if (!(adev->rev_id >= 0x8 ||
+                     adev->pdev->device == 0x15d8) &&
+                   (adev->pm.fw_version < 0x41e2b || /* not raven1 fresh */
+                    !adev->gfx.rlc.is_rlc_v2_1)) /* without rlc save restore ucodes */
                        adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
 
                if (adev->pm.pp_feature & PP_GFXOFF_MASK)
@@ -3933,43 +3932,58 @@ static const u32 sgpr_init_compute_shader[] =
        0xbe800080, 0xbf810000,
 };
 
+/* When the register arrays below are changed, please update gpr_reg_size
+ * and sec_ded_counter_reg_size in gfx_v9_0_do_edc_gpr_workarounds()
+ * to cover all gfx9 ASICs. */
 static const struct soc15_reg_entry vgpr_init_regs[] = {
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE0), 0xffffffff },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE1), 0xffffffff },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE2), 0xffffffff },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE3), 0xffffffff },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_RESOURCE_LIMITS), 0x0000000 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_X), 0x40 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_Y), 4 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_Z), 1 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_PGM_RSRC1), 0x3f },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_PGM_RSRC2), 0x400000 },  /* 64KB LDS */
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE0), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE1), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE2), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE3), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE4), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE5), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE6), 0xffffffff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE7), 0xffffffff },
 };
 
 static const struct soc15_reg_entry sgpr1_init_regs[] = {
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE0), 0x000000ff },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE1), 0x000000ff },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE2), 0x000000ff },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE3), 0x000000ff },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_RESOURCE_LIMITS), 0x0000000 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_X), 0x40 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_Y), 8 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_Z), 1 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_PGM_RSRC1), 0x240 }, /* (80 GPRS) */
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_PGM_RSRC2), 0x0 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE0), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE1), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE2), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE3), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE4), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE5), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE6), 0x000000ff },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE7), 0x000000ff },
 };
 
 static const struct soc15_reg_entry sgpr2_init_regs[] = {
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE0), 0x0000ff00 },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE1), 0x0000ff00 },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE2), 0x0000ff00 },
-   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE3), 0x0000ff00 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_RESOURCE_LIMITS), 0x0000000 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_X), 0x40 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_Y), 8 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_NUM_THREAD_Z), 1 },
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_PGM_RSRC1), 0x240 }, /* (80 GPRS) */
    { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_PGM_RSRC2), 0x0 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE0), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE1), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE2), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE3), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE4), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE5), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE6), 0x0000ff00 },
+   { SOC15_REG_ENTRY(GC, 0, mmCOMPUTE_STATIC_THREAD_MGMT_SE7), 0x0000ff00 },
 };
 
 static const struct soc15_reg_entry sec_ded_counter_registers[] = {
@@ -4006,9 +4020,15 @@ static const struct soc15_reg_entry sec_ded_counter_registers[] = {
    { SOC15_REG_ENTRY(GC, 0, mmTCC_EDC_CNT2), 0, 1, 16},
    { SOC15_REG_ENTRY(GC, 0, mmTCA_EDC_CNT), 0, 1, 2},
    { SOC15_REG_ENTRY(GC, 0, mmSQC_EDC_CNT3), 0, 4, 6},
+   { SOC15_REG_ENTRY(HDP, 0, mmHDP_EDC_CNT), 0, 1, 1},
    { SOC15_REG_ENTRY(SDMA0, 0, mmSDMA0_EDC_COUNTER), 0, 1, 1},
    { SOC15_REG_ENTRY(SDMA1, 0, mmSDMA1_EDC_COUNTER), 0, 1, 1},
-   { SOC15_REG_ENTRY(HDP, 0, mmHDP_EDC_CNT), 0, 1, 1},
+   { SOC15_REG_ENTRY(SDMA2, 0, mmSDMA2_EDC_COUNTER), 0, 1, 1},
+   { SOC15_REG_ENTRY(SDMA3, 0, mmSDMA3_EDC_COUNTER), 0, 1, 1},
+   { SOC15_REG_ENTRY(SDMA4, 0, mmSDMA4_EDC_COUNTER), 0, 1, 1},
+   { SOC15_REG_ENTRY(SDMA5, 0, mmSDMA5_EDC_COUNTER), 0, 1, 1},
+   { SOC15_REG_ENTRY(SDMA6, 0, mmSDMA6_EDC_COUNTER), 0, 1, 1},
+   { SOC15_REG_ENTRY(SDMA7, 0, mmSDMA7_EDC_COUNTER), 0, 1, 1},
 };
 
 static int gfx_v9_0_do_edc_gds_workarounds(struct amdgpu_device *adev)
@@ -4067,6 +4087,13 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
        unsigned total_size, vgpr_offset, sgpr_offset;
        u64 gpu_addr;
 
+       int compute_dim_x = adev->gfx.config.max_shader_engines *
+                                               adev->gfx.config.max_cu_per_sh *
+                                               adev->gfx.config.max_sh_per_se;
+       int sgpr_work_group_size = 5;
+       int gpr_reg_size = compute_dim_x / 16 + 6;
+       int sec_ded_counter_reg_size = adev->sdma.num_instances + 34;
+
        /* only support when RAS is enabled */
        if (!amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX))
                return 0;
@@ -4076,11 +4103,11 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
                return 0;
 
        total_size =
-               ((ARRAY_SIZE(vgpr_init_regs) * 3) + 4 + 5 + 2) * 4;
+               (gpr_reg_size * 3 + 4 + 5 + 2) * 4; /* VGPRS */
        total_size +=
-               ((ARRAY_SIZE(sgpr1_init_regs) * 3) + 4 + 5 + 2) * 4;
+               (gpr_reg_size * 3 + 4 + 5 + 2) * 4; /* SGPRS1 */
        total_size +=
-               ((ARRAY_SIZE(sgpr2_init_regs) * 3) + 4 + 5 + 2) * 4;
+               (gpr_reg_size * 3 + 4 + 5 + 2) * 4; /* SGPRS2 */
        total_size = ALIGN(total_size, 256);
        vgpr_offset = total_size;
        total_size += ALIGN(sizeof(vgpr_init_compute_shader), 256);
@@ -4107,7 +4134,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* VGPR */
        /* write the register state for the compute dispatch */
-       for (i = 0; i < ARRAY_SIZE(vgpr_init_regs); i++) {
+       for (i = 0; i < gpr_reg_size; i++) {
                ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1);
                ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(vgpr_init_regs[i])
                                                                - PACKET3_SET_SH_REG_START;
@@ -4123,7 +4150,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* write dispatch packet */
        ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3);
-       ib.ptr[ib.length_dw++] = 0x40*2; /* x */
+       ib.ptr[ib.length_dw++] = compute_dim_x; /* x */
        ib.ptr[ib.length_dw++] = 1; /* y */
        ib.ptr[ib.length_dw++] = 1; /* z */
        ib.ptr[ib.length_dw++] =
@@ -4135,7 +4162,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* SGPR1 */
        /* write the register state for the compute dispatch */
-       for (i = 0; i < ARRAY_SIZE(sgpr1_init_regs); i++) {
+       for (i = 0; i < gpr_reg_size; i++) {
                ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1);
                ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr1_init_regs[i])
                                                                - PACKET3_SET_SH_REG_START;
@@ -4151,7 +4178,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* write dispatch packet */
        ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3);
-       ib.ptr[ib.length_dw++] = 0xA0*2; /* x */
+       ib.ptr[ib.length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */
        ib.ptr[ib.length_dw++] = 1; /* y */
        ib.ptr[ib.length_dw++] = 1; /* z */
        ib.ptr[ib.length_dw++] =
@@ -4163,7 +4190,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* SGPR2 */
        /* write the register state for the compute dispatch */
-       for (i = 0; i < ARRAY_SIZE(sgpr2_init_regs); i++) {
+       for (i = 0; i < gpr_reg_size; i++) {
                ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1);
                ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr2_init_regs[i])
                                                                - PACKET3_SET_SH_REG_START;
@@ -4179,7 +4206,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* write dispatch packet */
        ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3);
-       ib.ptr[ib.length_dw++] = 0xA0*2; /* x */
+       ib.ptr[ib.length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */
        ib.ptr[ib.length_dw++] = 1; /* y */
        ib.ptr[ib.length_dw++] = 1; /* z */
        ib.ptr[ib.length_dw++] =
@@ -4205,7 +4232,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
 
        /* read back registers to clear the counters */
        mutex_lock(&adev->grbm_idx_mutex);
-       for (i = 0; i < ARRAY_SIZE(sec_ded_counter_registers); i++) {
+       for (i = 0; i < sec_ded_counter_reg_size; i++) {
                for (j = 0; j < sec_ded_counter_registers[i].se_num; j++) {
                        for (k = 0; k < sec_ded_counter_registers[i].instance; k++) {
                                gfx_v9_0_select_se_sh(adev, j, 0x0, k);
index e91bd79..1a2f18b 100644 (file)
@@ -75,40 +75,45 @@ static void gfxhub_v1_0_init_system_aperture_regs(struct amdgpu_device *adev)
        WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24);
        WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24);
 
-       /* Program the system aperture low logical page number. */
-       WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
-                    min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
-
-       if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)
-               /*
-                * Raven2 has a HW issue that it is unable to use the vram which
-                * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
-                * workaround that increase system aperture high address (add 1)
-                * to get rid of the VM fault and hardware hang.
-                */
-               WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
-                            max((adev->gmc.fb_end >> 18) + 0x1,
-                                adev->gmc.agp_end >> 18));
-       else
-               WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
-                            max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
-
-       /* Set default page address. */
-       value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
-               + adev->vm_manager.vram_base_offset;
-       WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB,
-                    (u32)(value >> 12));
-       WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB,
-                    (u32)(value >> 44));
-
-       /* Program "protection fault". */
-       WREG32_SOC15(GC, 0, mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32,
-                    (u32)(adev->dummy_page_addr >> 12));
-       WREG32_SOC15(GC, 0, mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32,
-                    (u32)((u64)adev->dummy_page_addr >> 44));
-
-       WREG32_FIELD15(GC, 0, VM_L2_PROTECTION_FAULT_CNTL2,
-                      ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
+       if (!amdgpu_sriov_vf(adev) || adev->asic_type <= CHIP_VEGA10) {
+               /* Program the system aperture low logical page number. */
+               WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
+                       min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
+
+               if (adev->asic_type == CHIP_RAVEN && adev->rev_id >= 0x8)
+                       /*
+                        * Raven2 has a HW issue that prevents it from using
+                        * the vram above MC_VM_SYSTEM_APERTURE_HIGH_ADDR.
+                        * As a workaround, increase the system aperture high
+                        * address (add 1) to get rid of the VM fault and
+                        * hardware hang.
+                        */
+                       WREG32_SOC15_RLC(GC, 0,
+                                        mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
+                                        max((adev->gmc.fb_end >> 18) + 0x1,
+                                            adev->gmc.agp_end >> 18));
+               else
+                       WREG32_SOC15_RLC(
+                               GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
+                               max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
+
+               /* Set default page address. */
+               value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
+                       adev->vm_manager.vram_base_offset;
+               WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB,
+                            (u32)(value >> 12));
+               WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB,
+                            (u32)(value >> 44));
+
+               /* Program "protection fault". */
+               WREG32_SOC15(GC, 0, mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32,
+                            (u32)(adev->dummy_page_addr >> 12));
+               WREG32_SOC15(GC, 0, mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32,
+                            (u32)((u64)adev->dummy_page_addr >> 44));
+
+               WREG32_FIELD15(GC, 0, VM_L2_PROTECTION_FAULT_CNTL2,
+                              ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
+       }
 }
 
 static void gfxhub_v1_0_init_tlb_regs(struct amdgpu_device *adev)
@@ -264,7 +269,7 @@ static void gfxhub_v1_0_program_invalidation(struct amdgpu_device *adev)
 
 int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
 {
-       if (amdgpu_sriov_vf(adev)) {
+       if (amdgpu_sriov_vf(adev) && adev->asic_type != CHIP_ARCTURUS) {
                /*
                 * MC_VM_FB_LOCATION_BASE/TOP is NULL for VF, because they are
                 * VF copy registers so vbios post doesn't program them, for
@@ -280,10 +285,12 @@ int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
        gfxhub_v1_0_init_gart_aperture_regs(adev);
        gfxhub_v1_0_init_system_aperture_regs(adev);
        gfxhub_v1_0_init_tlb_regs(adev);
-       gfxhub_v1_0_init_cache_regs(adev);
+       if (!amdgpu_sriov_vf(adev))
+               gfxhub_v1_0_init_cache_regs(adev);
 
        gfxhub_v1_0_enable_system_domain(adev);
-       gfxhub_v1_0_disable_identity_aperture(adev);
+       if (!amdgpu_sriov_vf(adev))
+               gfxhub_v1_0_disable_identity_aperture(adev);
        gfxhub_v1_0_setup_vmid_config(adev);
        gfxhub_v1_0_program_invalidation(adev);
 
index f572533..da9765f 100644 (file)
@@ -564,22 +564,11 @@ static int gmc_v10_0_early_init(void *handle)
 static int gmc_v10_0_late_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-       unsigned vm_inv_eng[AMDGPU_MAX_VMHUBS] = { 4, 4 };
-       unsigned i;
-
-       for(i = 0; i < adev->num_rings; ++i) {
-               struct amdgpu_ring *ring = adev->rings[i];
-               unsigned vmhub = ring->funcs->vmhub;
-
-               ring->vm_inv_eng = vm_inv_eng[vmhub]++;
-               dev_info(adev->dev, "ring %u(%s) uses VM inv eng %u on hub %u\n",
-                        ring->idx, ring->name, ring->vm_inv_eng,
-                        ring->funcs->vmhub);
-       }
+       int r;
 
-       /* Engine 17 is used for GART flushes */
-       for(i = 0; i < AMDGPU_MAX_VMHUBS; ++i)
-               BUG_ON(vm_inv_eng[i] > 17);
+       r = amdgpu_gmc_allocate_vm_inv_eng(adev);
+       if (r)
+               return r;
 
        return amdgpu_irq_get(adev, &adev->gmc.vm_fault, 0);
 }
index fa025ce..26194ac 100644 (file)
@@ -207,6 +207,11 @@ static int gmc_v9_0_ecc_interrupt_state(struct amdgpu_device *adev,
 {
        u32 bits, i, tmp, reg;
 
+       /* Devices newer than VEGA10/12 shall have these programming
+        * sequences performed by the PSP BL */
+       if (adev->asic_type >= CHIP_VEGA20)
+               return 0;
+
        bits = 0x7f;
 
        switch (state) {
@@ -393,8 +398,10 @@ static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
        adev->gmc.vm_fault.num_types = 1;
        adev->gmc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
 
-       adev->gmc.ecc_irq.num_types = 1;
-       adev->gmc.ecc_irq.funcs = &gmc_v9_0_ecc_funcs;
+       if (!amdgpu_sriov_vf(adev)) {
+               adev->gmc.ecc_irq.num_types = 1;
+               adev->gmc.ecc_irq.funcs = &gmc_v9_0_ecc_funcs;
+       }
 }
 
 static uint32_t gmc_v9_0_get_invalidate_req(unsigned int vmid,
@@ -790,36 +797,6 @@ static bool gmc_v9_0_keep_stolen_memory(struct amdgpu_device *adev)
        }
 }
 
-static int gmc_v9_0_allocate_vm_inv_eng(struct amdgpu_device *adev)
-{
-       struct amdgpu_ring *ring;
-       unsigned vm_inv_engs[AMDGPU_MAX_VMHUBS] =
-               {GFXHUB_FREE_VM_INV_ENGS_BITMAP, MMHUB_FREE_VM_INV_ENGS_BITMAP,
-               GFXHUB_FREE_VM_INV_ENGS_BITMAP};
-       unsigned i;
-       unsigned vmhub, inv_eng;
-
-       for (i = 0; i < adev->num_rings; ++i) {
-               ring = adev->rings[i];
-               vmhub = ring->funcs->vmhub;
-
-               inv_eng = ffs(vm_inv_engs[vmhub]);
-               if (!inv_eng) {
-                       dev_err(adev->dev, "no VM inv eng for ring %s\n",
-                               ring->name);
-                       return -EINVAL;
-               }
-
-               ring->vm_inv_eng = inv_eng - 1;
-               vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
-
-               dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
-                        ring->name, ring->vm_inv_eng, ring->funcs->vmhub);
-       }
-
-       return 0;
-}
-
 static int gmc_v9_0_late_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -828,7 +805,7 @@ static int gmc_v9_0_late_init(void *handle)
        if (!gmc_v9_0_keep_stolen_memory(adev))
                amdgpu_bo_late_init(adev);
 
-       r = gmc_v9_0_allocate_vm_inv_eng(adev);
+       r = amdgpu_gmc_allocate_vm_inv_eng(adev);
        if (r)
                return r;
        /* Check if ecc is available */
@@ -1112,11 +1089,13 @@ static int gmc_v9_0_sw_init(void *handle)
        if (r)
                return r;
 
-       /* interrupt sent to DF. */
-       r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_DF, 0,
-                       &adev->gmc.ecc_irq);
-       if (r)
-               return r;
+       if (!amdgpu_sriov_vf(adev)) {
+               /* interrupt sent to DF. */
+               r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_DF, 0,
+                                     &adev->gmc.ecc_irq);
+               if (r)
+                       return r;
+       }
 
        /* Set the internal MC address mask
         * This is the max address of the GPU's
@@ -1302,12 +1281,13 @@ static int gmc_v9_0_hw_init(void *handle)
        else
                value = true;
 
-       gfxhub_v1_0_set_fault_enable_default(adev, value);
-       if (adev->asic_type == CHIP_ARCTURUS)
-               mmhub_v9_4_set_fault_enable_default(adev, value);
-       else
-               mmhub_v1_0_set_fault_enable_default(adev, value);
-
+       if (!amdgpu_sriov_vf(adev)) {
+               gfxhub_v1_0_set_fault_enable_default(adev, value);
+               if (adev->asic_type == CHIP_ARCTURUS)
+                       mmhub_v9_4_set_fault_enable_default(adev, value);
+               else
+                       mmhub_v1_0_set_fault_enable_default(adev, value);
+       }
        for (i = 0; i < adev->num_vmhubs; ++i)
                gmc_v9_0_flush_gpu_tlb(adev, 0, i, 0);
 
index 49e8be7..e0585e8 100644 (file)
 #ifndef __GMC_V9_0_H__
 #define __GMC_V9_0_H__
 
-       /*
-        * The latest engine allocation on gfx9 is:
-        * Engine 2, 3: firmware
-        * Engine 0, 1, 4~16: amdgpu ring,
-        *                    subject to change when ring number changes
-        * Engine 17: Gart flushes
-        */
-#define GFXHUB_FREE_VM_INV_ENGS_BITMAP         0x1FFF3
-#define MMHUB_FREE_VM_INV_ENGS_BITMAP          0x1FFF3
-
 extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
 extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
 #endif
index a141408..0debfd9 100644 (file)
@@ -25,6 +25,7 @@
 #include "amdgpu_jpeg.h"
 #include "soc15.h"
 #include "soc15d.h"
+#include "vcn_v1_0.h"
 
 #include "vcn/vcn_1_0_offset.h"
 #include "vcn/vcn_1_0_sh_mask.h"
@@ -561,7 +562,7 @@ static const struct amdgpu_ring_funcs jpeg_v1_0_decode_ring_vm_funcs = {
        .insert_start = jpeg_v1_0_decode_ring_insert_start,
        .insert_end = jpeg_v1_0_decode_ring_insert_end,
        .pad_ib = amdgpu_ring_generic_pad_ib,
-       .begin_use = amdgpu_vcn_ring_begin_use,
+       .begin_use = vcn_v1_0_ring_begin_use,
        .end_use = amdgpu_vcn_ring_end_use,
        .emit_wreg = jpeg_v1_0_decode_ring_emit_wreg,
        .emit_reg_wait = jpeg_v1_0_decode_ring_emit_reg_wait,
index d9301e8..5c42387 100644 (file)
@@ -128,45 +128,53 @@ static void mmhub_v9_4_init_system_aperture_regs(struct amdgpu_device *adev,
                            hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
                            adev->gmc.agp_start >> 24);
 
-       /* Program the system aperture low logical page number. */
-       WREG32_SOC15_OFFSET(MMHUB, 0,
-                           mmVMSHAREDVC0_MC_VM_SYSTEM_APERTURE_LOW_ADDR,
-                           hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
-                           min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
-       WREG32_SOC15_OFFSET(MMHUB, 0,
-                           mmVMSHAREDVC0_MC_VM_SYSTEM_APERTURE_HIGH_ADDR,
-                           hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
-                           max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
+       if (!amdgpu_sriov_vf(adev)) {
+               /* Program the system aperture low logical page number. */
+               WREG32_SOC15_OFFSET(
+                       MMHUB, 0, mmVMSHAREDVC0_MC_VM_SYSTEM_APERTURE_LOW_ADDR,
+                       hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
+                       min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
+               WREG32_SOC15_OFFSET(
+                       MMHUB, 0, mmVMSHAREDVC0_MC_VM_SYSTEM_APERTURE_HIGH_ADDR,
+                       hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
+                       max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
 
-       /* Set default page address. */
-       value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
-               adev->vm_manager.vram_base_offset;
-       WREG32_SOC15_OFFSET(MMHUB, 0,
+               /* Set default page address. */
+               value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
+                       adev->vm_manager.vram_base_offset;
+               WREG32_SOC15_OFFSET(
+                       MMHUB, 0,
                        mmVMSHAREDPF0_MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB,
                        hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
                        (u32)(value >> 12));
-       WREG32_SOC15_OFFSET(MMHUB, 0,
+               WREG32_SOC15_OFFSET(
+                       MMHUB, 0,
                        mmVMSHAREDPF0_MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB,
                        hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
                        (u32)(value >> 44));
 
-       /* Program "protection fault". */
-       WREG32_SOC15_OFFSET(MMHUB, 0,
-                           mmVML2PF0_VM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32,
-                           hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
-                           (u32)(adev->dummy_page_addr >> 12));
-       WREG32_SOC15_OFFSET(MMHUB, 0,
-                           mmVML2PF0_VM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32,
-                           hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
-                           (u32)((u64)adev->dummy_page_addr >> 44));
+               /* Program "protection fault". */
+               WREG32_SOC15_OFFSET(
+                       MMHUB, 0,
+                       mmVML2PF0_VM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32,
+                       hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
+                       (u32)(adev->dummy_page_addr >> 12));
+               WREG32_SOC15_OFFSET(
+                       MMHUB, 0,
+                       mmVML2PF0_VM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32,
+                       hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
+                       (u32)((u64)adev->dummy_page_addr >> 44));
 
-       tmp = RREG32_SOC15_OFFSET(MMHUB, 0,
-                                 mmVML2PF0_VM_L2_PROTECTION_FAULT_CNTL2,
-                                 hubid * MMHUB_INSTANCE_REGISTER_OFFSET);
-       tmp = REG_SET_FIELD(tmp, VML2PF0_VM_L2_PROTECTION_FAULT_CNTL2,
-                           ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
-       WREG32_SOC15_OFFSET(MMHUB, 0, mmVML2PF0_VM_L2_PROTECTION_FAULT_CNTL2,
-                           hubid * MMHUB_INSTANCE_REGISTER_OFFSET, tmp);
+               tmp = RREG32_SOC15_OFFSET(
+                       MMHUB, 0, mmVML2PF0_VM_L2_PROTECTION_FAULT_CNTL2,
+                       hubid * MMHUB_INSTANCE_REGISTER_OFFSET);
+               tmp = REG_SET_FIELD(tmp, VML2PF0_VM_L2_PROTECTION_FAULT_CNTL2,
+                                   ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
+               WREG32_SOC15_OFFSET(MMHUB, 0,
+                                   mmVML2PF0_VM_L2_PROTECTION_FAULT_CNTL2,
+                                   hubid * MMHUB_INSTANCE_REGISTER_OFFSET,
+                                   tmp);
+       }
 }
 
 static void mmhub_v9_4_init_tlb_regs(struct amdgpu_device *adev, int hubid)
@@ -368,30 +376,16 @@ int mmhub_v9_4_gart_enable(struct amdgpu_device *adev)
        int i;
 
        for (i = 0; i < MMHUB_NUM_INSTANCES; i++) {
-               if (amdgpu_sriov_vf(adev)) {
-                       /*
-                        * MC_VM_FB_LOCATION_BASE/TOP is NULL for VF, becuase
-                        * they are VF copy registers so vbios post doesn't
-                        * program them, for SRIOV driver need to program them
-                        */
-                       WREG32_SOC15_OFFSET(MMHUB, 0,
-                                    mmVMSHAREDVC0_MC_VM_FB_LOCATION_BASE,
-                                    i * MMHUB_INSTANCE_REGISTER_OFFSET,
-                                    adev->gmc.vram_start >> 24);
-                       WREG32_SOC15_OFFSET(MMHUB, 0,
-                                    mmVMSHAREDVC0_MC_VM_FB_LOCATION_TOP,
-                                    i * MMHUB_INSTANCE_REGISTER_OFFSET,
-                                    adev->gmc.vram_end >> 24);
-               }
-
                /* GART Enable. */
                mmhub_v9_4_init_gart_aperture_regs(adev, i);
                mmhub_v9_4_init_system_aperture_regs(adev, i);
                mmhub_v9_4_init_tlb_regs(adev, i);
-               mmhub_v9_4_init_cache_regs(adev, i);
+               if (!amdgpu_sriov_vf(adev))
+                       mmhub_v9_4_init_cache_regs(adev, i);
 
                mmhub_v9_4_enable_system_domain(adev, i);
-               mmhub_v9_4_disable_identity_aperture(adev, i);
+               if (!amdgpu_sriov_vf(adev))
+                       mmhub_v9_4_disable_identity_aperture(adev, i);
                mmhub_v9_4_setup_vmid_config(adev, i);
                mmhub_v9_4_program_invalidation(adev, i);
        }
index 8af0bdd..2095863 100644 (file)
@@ -47,6 +47,18 @@ struct mmsch_v1_0_init_header {
        uint32_t uvd_table_size;
 };
 
+struct mmsch_vf_eng_init_header {
+       uint32_t init_status;
+       uint32_t table_offset;
+       uint32_t table_size;
+};
+
+struct mmsch_v1_1_init_header {
+       uint32_t version;
+       uint32_t total_size;
+       struct mmsch_vf_eng_init_header eng[2];
+};
+
 struct mmsch_v1_0_cmd_direct_reg_header {
        uint32_t reg_offset   : 28;
        uint32_t command_type : 4;
index 43305af..5fd67e1 100644 (file)
@@ -250,7 +250,7 @@ static void xgpu_ai_mailbox_flr_work(struct work_struct *work)
         */
        locked = mutex_trylock(&adev->lock_reset);
        if (locked)
-               adev->in_gpu_reset = 1;
+               adev->in_gpu_reset = true;
 
        do {
                if (xgpu_ai_mailbox_peek_msg(adev) == IDH_FLR_NOTIFICATION_CMPL)
@@ -262,7 +262,7 @@ static void xgpu_ai_mailbox_flr_work(struct work_struct *work)
 
 flr_done:
        if (locked) {
-               adev->in_gpu_reset = 0;
+               adev->in_gpu_reset = false;
                mutex_unlock(&adev->lock_reset);
        }
 
index 0d8767e..237fa5e 100644 (file)
@@ -252,7 +252,7 @@ static void xgpu_nv_mailbox_flr_work(struct work_struct *work)
         */
        locked = mutex_trylock(&adev->lock_reset);
        if (locked)
-               adev->in_gpu_reset = 1;
+               adev->in_gpu_reset = true;
 
        do {
                if (xgpu_nv_mailbox_peek_msg(adev) == IDH_FLR_NOTIFICATION_CMPL)
@@ -264,12 +264,16 @@ static void xgpu_nv_mailbox_flr_work(struct work_struct *work)
 
 flr_done:
        if (locked) {
-               adev->in_gpu_reset = 0;
+               adev->in_gpu_reset = false;
                mutex_unlock(&adev->lock_reset);
        }
 
        /* Trigger recovery for world switch failure if no TDR */
-       if (amdgpu_device_should_recover_gpu(adev))
+       if (amdgpu_device_should_recover_gpu(adev)
+               && (adev->sdma_timeout == MAX_SCHEDULE_TIMEOUT ||
+               adev->gfx_timeout == MAX_SCHEDULE_TIMEOUT ||
+               adev->compute_timeout == MAX_SCHEDULE_TIMEOUT ||
+               adev->video_timeout == MAX_SCHEDULE_TIMEOUT))
                amdgpu_device_gpu_recover(adev, NULL);
 }
 
index 9af7356..f737ce4 100644 (file)
@@ -110,7 +110,6 @@ static uint32_t navi10_ih_rb_cntl(struct amdgpu_ih_ring *ih, uint32_t ih_rb_cntl
 static int navi10_ih_irq_init(struct amdgpu_device *adev)
 {
        struct amdgpu_ih_ring *ih = &adev->irq.ih;
-       int ret = 0;
        u32 ih_rb_cntl, ih_doorbell_rtpr, ih_chicken;
        u32 tmp;
 
@@ -179,7 +178,7 @@ static int navi10_ih_irq_init(struct amdgpu_device *adev)
        /* enable interrupts */
        navi10_ih_enable_interrupts(adev);
 
-       return ret;
+       return 0;
 }
 
 /**
index bb701db..65eb378 100644 (file)
@@ -339,7 +339,7 @@ static void nbio_v7_4_handle_ras_controller_intr_no_bifring(struct amdgpu_device
                /* ras_controller_int is dedicated for nbif ras error,
                 * not the global interrupt for sync flood
                 */
-               amdgpu_ras_reset_gpu(adev, true);
+               amdgpu_ras_reset_gpu(adev);
        }
 }
 
@@ -456,10 +456,8 @@ static int nbio_v7_4_init_ras_controller_interrupt (struct amdgpu_device *adev)
        r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_BIF,
                              NBIF_7_4__SRCID__RAS_CONTROLLER_INTERRUPT,
                              &adev->nbio.ras_controller_irq);
-       if (r)
-               return r;
 
-       return 0;
+       return r;
 }
 
 static int nbio_v7_4_init_ras_err_event_athub_interrupt (struct amdgpu_device *adev)
@@ -476,10 +474,8 @@ static int nbio_v7_4_init_ras_err_event_athub_interrupt (struct amdgpu_device *a
        r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_BIF,
                              NBIF_7_4__SRCID__ERREVENT_ATHUB_INTERRUPT,
                              &adev->nbio.ras_err_event_athub_irq);
-       if (r)
-               return r;
 
-       return 0;
+       return r;
 }
 
 #define smnPARITY_ERROR_STATUS_UNCORR_GRP2     0x13a20030
index 74a9fe8..36b6579 100644 (file)
@@ -242,6 +242,7 @@ enum psp_gfx_fw_type {
        GFX_FW_TYPE_SDMA5                           = 55,   /* SDMA5                    MI      */
        GFX_FW_TYPE_SDMA6                           = 56,   /* SDMA6                    MI      */
        GFX_FW_TYPE_SDMA7                           = 57,   /* SDMA7                    MI      */
+       GFX_FW_TYPE_VCN1                            = 58,   /* VCN1                     MI      */
        GFX_FW_TYPE_MAX
 };
 
index c66ca8c..a57f3d7 100644 (file)
@@ -233,6 +233,29 @@ out:
        return err;
 }
 
+int psp_v11_0_wait_for_bootloader(struct psp_context *psp)
+{
+       struct amdgpu_device *adev = psp->adev;
+
+       int ret;
+       int retry_loop;
+
+       for (retry_loop = 0; retry_loop < 10; retry_loop++) {
+               /* Wait for the bootloader to signal that it is
+                * ready, i.e. bit 31 of C2PMSG_35 set to 1 */
+               ret = psp_wait_for(psp,
+                                  SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
+                                  0x80000000,
+                                  0x80000000,
+                                  false);
+
+               if (ret == 0)
+                       return 0;
+       }
+
+       return ret;
+}
+
 static bool psp_v11_0_is_sos_alive(struct psp_context *psp)
 {
        struct amdgpu_device *adev = psp->adev;
@@ -258,9 +281,7 @@ static int psp_v11_0_bootloader_load_kdb(struct psp_context *psp)
                return 0;
        }
 
-       /* Wait for bootloader to signify that is ready having bit 31 of C2PMSG_35 set to 1 */
-       ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
-                          0x80000000, 0x80000000, false);
+       ret = psp_v11_0_wait_for_bootloader(psp);
        if (ret)
                return ret;
 
@@ -276,9 +297,7 @@ static int psp_v11_0_bootloader_load_kdb(struct psp_context *psp)
        WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_35,
               psp_gfxdrv_command_reg);
 
-       /* Wait for bootloader to signify that is ready having  bit 31 of C2PMSG_35 set to 1*/
-       ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
-                          0x80000000, 0x80000000, false);
+       ret = psp_v11_0_wait_for_bootloader(psp);
 
        return ret;
 }
@@ -298,9 +317,7 @@ static int psp_v11_0_bootloader_load_sysdrv(struct psp_context *psp)
                return 0;
        }
 
-       /* Wait for bootloader to signify that is ready having bit 31 of C2PMSG_35 set to 1 */
-       ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
-                          0x80000000, 0x80000000, false);
+       ret = psp_v11_0_wait_for_bootloader(psp);
        if (ret)
                return ret;
 
@@ -319,8 +336,7 @@ static int psp_v11_0_bootloader_load_sysdrv(struct psp_context *psp)
        /* there might be handshake issue with hardware which needs delay */
        mdelay(20);
 
-       ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
-                          0x80000000, 0x80000000, false);
+       ret = psp_v11_0_wait_for_bootloader(psp);
 
        return ret;
 }
@@ -337,9 +353,7 @@ static int psp_v11_0_bootloader_load_sos(struct psp_context *psp)
        if (psp_v11_0_is_sos_alive(psp))
                return 0;
 
-       /* Wait for bootloader to signify that is ready having bit 31 of C2PMSG_35 set to 1 */
-       ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
-                          0x80000000, 0x80000000, false);
+       ret = psp_v11_0_wait_for_bootloader(psp);
        if (ret)
                return ret;
 
index a101758..7d509a4 100644 (file)
@@ -255,7 +255,7 @@ static void sdma_v2_4_ring_emit_ib(struct amdgpu_ring *ring,
        unsigned vmid = AMDGPU_JOB_GET_VMID(job);
 
        /* IB packet must end on a 8 DW boundary */
-       sdma_v2_4_ring_insert_nop(ring, (10 - (lower_32_bits(ring->wptr) & 7)) % 8);
+       sdma_v2_4_ring_insert_nop(ring, (2 - lower_32_bits(ring->wptr)) & 7);
 
        amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_INDIRECT) |
                          SDMA_PKT_INDIRECT_HEADER_VMID(vmid & 0xf));
@@ -750,7 +750,7 @@ static void sdma_v2_4_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib
        u32 pad_count;
        int i;
 
-       pad_count = (8 - (ib->length_dw & 0x7)) % 8;
+       pad_count = (-ib->length_dw) & 7;
        for (i = 0; i < pad_count; i++)
                if (sdma && sdma->burst_nop && (i == 0))
                        ib->ptr[ib->length_dw++] =
@@ -1260,16 +1260,14 @@ static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
 
 static void sdma_v2_4_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-       struct drm_gpu_scheduler *sched;
        unsigned i;
 
        adev->vm_manager.vm_pte_funcs = &sdma_v2_4_vm_pte_funcs;
        for (i = 0; i < adev->sdma.num_instances; i++) {
-               sched = &adev->sdma.instance[i].ring.sched;
-               adev->vm_manager.vm_pte_rqs[i] =
-                       &sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+               adev->vm_manager.vm_pte_scheds[i] =
+                       &adev->sdma.instance[i].ring.sched;
        }
-       adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+       adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
 }
 
 const struct amdgpu_ip_block_version sdma_v2_4_ip_block =
index 5f4e2c6..b6109a9 100644 (file)
@@ -429,7 +429,7 @@ static void sdma_v3_0_ring_emit_ib(struct amdgpu_ring *ring,
        unsigned vmid = AMDGPU_JOB_GET_VMID(job);
 
        /* IB packet must end on a 8 DW boundary */
-       sdma_v3_0_ring_insert_nop(ring, (10 - (lower_32_bits(ring->wptr) & 7)) % 8);
+       sdma_v3_0_ring_insert_nop(ring, (2 - lower_32_bits(ring->wptr)) & 7);
 
        amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_INDIRECT) |
                          SDMA_PKT_INDIRECT_HEADER_VMID(vmid & 0xf));
@@ -1021,7 +1021,7 @@ static void sdma_v3_0_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib
        u32 pad_count;
        int i;
 
-       pad_count = (8 - (ib->length_dw & 0x7)) % 8;
+       pad_count = (-ib->length_dw) & 7;
        for (i = 0; i < pad_count; i++)
                if (sdma && sdma->burst_nop && (i == 0))
                        ib->ptr[ib->length_dw++] =
@@ -1698,16 +1698,14 @@ static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
 
 static void sdma_v3_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-       struct drm_gpu_scheduler *sched;
        unsigned i;
 
        adev->vm_manager.vm_pte_funcs = &sdma_v3_0_vm_pte_funcs;
        for (i = 0; i < adev->sdma.num_instances; i++) {
-               sched = &adev->sdma.instance[i].ring.sched;
-               adev->vm_manager.vm_pte_rqs[i] =
-                       &sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+               adev->vm_manager.vm_pte_scheds[i] =
+                        &adev->sdma.instance[i].ring.sched;
        }
-       adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+       adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
 }
 
 const struct amdgpu_ip_block_version sdma_v3_0_ip_block =
index 4ef4d31..ce0753a 100644 (file)
@@ -698,7 +698,7 @@ static void sdma_v4_0_ring_emit_ib(struct amdgpu_ring *ring,
        unsigned vmid = AMDGPU_JOB_GET_VMID(job);
 
        /* IB packet must end on a 8 DW boundary */
-       sdma_v4_0_ring_insert_nop(ring, (10 - (lower_32_bits(ring->wptr) & 7)) % 8);
+       sdma_v4_0_ring_insert_nop(ring, (2 - lower_32_bits(ring->wptr)) & 7);
 
        amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_INDIRECT) |
                          SDMA_PKT_INDIRECT_HEADER_VMID(vmid & 0xf));
@@ -1579,7 +1579,7 @@ static void sdma_v4_0_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib
        u32 pad_count;
        int i;
 
-       pad_count = (8 - (ib->length_dw & 0x7)) % 8;
+       pad_count = (-ib->length_dw) & 7;
        for (i = 0; i < pad_count; i++)
                if (sdma && sdma->burst_nop && (i == 0))
                        ib->ptr[ib->length_dw++] =
@@ -2409,10 +2409,9 @@ static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev)
                        sched = &adev->sdma.instance[i].page.sched;
                else
                        sched = &adev->sdma.instance[i].ring.sched;
-               adev->vm_manager.vm_pte_rqs[i] =
-                       &sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+               adev->vm_manager.vm_pte_scheds[i] = sched;
        }
-       adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+       adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
 }
 
 const struct amdgpu_ip_block_version sdma_v4_0_ip_block = {
index f4ad299..4c6bf1f 100644 (file)
@@ -382,8 +382,15 @@ static void sdma_v5_0_ring_emit_ib(struct amdgpu_ring *ring,
        unsigned vmid = AMDGPU_JOB_GET_VMID(job);
        uint64_t csa_mc_addr = amdgpu_sdma_get_csa_mc_addr(ring, vmid);
 
-       /* IB packet must end on a 8 DW boundary */
-       sdma_v5_0_ring_insert_nop(ring, (10 - (lower_32_bits(ring->wptr) & 7)) % 8);
+       /* An IB packet must end on an 8-DW boundary: the next dword
+        * must be on an 8-dword boundary. Our IB packet below is 6
+        * dwords long, so add x NOPs such that, in modular
+        * arithmetic,
+        * wptr + 6 + x = 8k, k >= 0, which in C is
+        * (wptr + 6 + x) % 8 == 0.
+        * The expression below is a solution for x.
+        */
+       sdma_v5_0_ring_insert_nop(ring, (2 - lower_32_bits(ring->wptr)) & 7);
 
        amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_INDIRECT) |
                          SDMA_PKT_INDIRECT_HEADER_VMID(vmid & 0xf));
@@ -907,16 +914,9 @@ static int sdma_v5_0_ring_test_ring(struct amdgpu_ring *ring)
                        udelay(1);
        }
 
-       if (i < adev->usec_timeout) {
-               if (amdgpu_emu_mode == 1)
-                       DRM_INFO("ring test on %d succeeded in %d msecs\n", ring->idx, i);
-               else
-                       DRM_INFO("ring test on %d succeeded in %d usecs\n", ring->idx, i);
-       } else {
-               DRM_ERROR("amdgpu: ring %d test failed (0x%08X)\n",
-                         ring->idx, tmp);
-               r = -EINVAL;
-       }
+       if (i >= adev->usec_timeout)
+               r = -ETIMEDOUT;
+
        amdgpu_device_wb_free(adev, index);
 
        return r;
@@ -981,13 +981,10 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
                goto err1;
        }
        tmp = le32_to_cpu(adev->wb.wb[index]);
-       if (tmp == 0xDEADBEEF) {
-               DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+       if (tmp == 0xDEADBEEF)
                r = 0;
-       } else {
-               DRM_ERROR("amdgpu: ib test failed (0x%08X)\n", tmp);
+       else
                r = -EINVAL;
-       }
 
 err1:
        amdgpu_ib_free(adev, &ib, NULL);
@@ -1086,10 +1083,10 @@ static void sdma_v5_0_vm_set_pte_pde(struct amdgpu_ib *ib,
 }
 
 /**
- * sdma_v5_0_ring_pad_ib - pad the IB to the required number of dw
- *
+ * sdma_v5_0_ring_pad_ib - pad the IB
  * @ib: indirect buffer to fill with padding
  *
+ * Pad the IB with NOPs to a boundary multiple of 8.
  */
 static void sdma_v5_0_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib)
 {
@@ -1097,7 +1094,7 @@ static void sdma_v5_0_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib
        u32 pad_count;
        int i;
 
-       pad_count = (8 - (ib->length_dw & 0x7)) % 8;
+       pad_count = (-ib->length_dw) & 0x7;
        for (i = 0; i < pad_count; i++)
                if (sdma && sdma->burst_nop && (i == 0))
                        ib->ptr[ib->length_dw++] =
@@ -1721,17 +1718,15 @@ static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
 
 static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-       struct drm_gpu_scheduler *sched;
        unsigned i;
 
        if (adev->vm_manager.vm_pte_funcs == NULL) {
                adev->vm_manager.vm_pte_funcs = &sdma_v5_0_vm_pte_funcs;
                for (i = 0; i < adev->sdma.num_instances; i++) {
-                       sched = &adev->sdma.instance[i].ring.sched;
-                       adev->vm_manager.vm_pte_rqs[i] =
-                               &sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+                       adev->vm_manager.vm_pte_scheds[i] =
+                               &adev->sdma.instance[i].ring.sched;
                }
-               adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+               adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
        }
 }
 
index bdda8b4..9aac9f9 100644 (file)
@@ -834,16 +834,14 @@ static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
 
 static void si_dma_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-       struct drm_gpu_scheduler *sched;
        unsigned i;
 
        adev->vm_manager.vm_pte_funcs = &si_dma_vm_pte_funcs;
        for (i = 0; i < adev->sdma.num_instances; i++) {
-               sched = &adev->sdma.instance[i].ring.sched;
-               adev->vm_manager.vm_pte_rqs[i] =
-                       &sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+               adev->vm_manager.vm_pte_scheds[i] =
+                       &adev->sdma.instance[i].ring.sched;
        }
-       adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+       adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
 }
 
 const struct amdgpu_ip_block_version si_dma_ip_block =
index 5bd6ae7..714cf4d 100644 (file)
@@ -613,6 +613,7 @@ static bool soc15_supports_baco(struct amdgpu_device *adev)
        switch (adev->asic_type) {
        case CHIP_VEGA10:
        case CHIP_VEGA12:
+       case CHIP_ARCTURUS:
                soc15_asic_get_baco_capability(adev, &baco_support);
                break;
        case CHIP_VEGA20:
@@ -827,11 +828,15 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
                        amdgpu_device_ip_block_add(adev, &dce_virtual_ip_block);
                amdgpu_device_ip_block_add(adev, &gfx_v9_0_ip_block);
                amdgpu_device_ip_block_add(adev, &sdma_v4_0_ip_block);
-               if (!amdgpu_sriov_vf(adev))
-                       amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block);
+               amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block);
 
-               if (unlikely(adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT))
-                       amdgpu_device_ip_block_add(adev, &vcn_v2_5_ip_block);
+               if (amdgpu_sriov_vf(adev)) {
+                       if (likely(adev->firmware.load_type == AMDGPU_FW_LOAD_PSP))
+                               amdgpu_device_ip_block_add(adev, &vcn_v2_5_ip_block);
+               } else {
+                       if (unlikely(adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT))
+                               amdgpu_device_ip_block_add(adev, &vcn_v2_5_ip_block);
+               }
                if (!amdgpu_sriov_vf(adev))
                        amdgpu_device_ip_block_add(adev, &jpeg_v2_5_ip_block);
                break;
index 515eb50..11e924d 100644 (file)
 #include "rsmu/rsmu_0_0_2_sh_mask.h"
 #include "umc/umc_6_1_1_offset.h"
 #include "umc/umc_6_1_1_sh_mask.h"
+#include "umc/umc_6_1_2_offset.h"
 
 #define smnMCA_UMC0_MCUMC_ADDRT0       0x50f10
 
-/* UMC 6_1_2 register offsets */
-#define mmUMCCH0_0_EccErrCntSel_ARCT                 0x0360
-#define mmUMCCH0_0_EccErrCntSel_ARCT_BASE_IDX        1
-#define mmUMCCH0_0_EccErrCnt_ARCT                    0x0361
-#define mmUMCCH0_0_EccErrCnt_ARCT_BASE_IDX           1
-#define mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT           0x03c2
-#define mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT_BASE_IDX  1
+#define UMC_6_INST_DIST                        0x40000
 
 /*
  * (addr / 256) * 8192, the higher 26 bits in ErrorAddr
  * is the index of 8KB block
  */
-#define ADDR_OF_8KB_BLOCK(addr)                (((addr) & ~0xffULL) << 5)
+#define ADDR_OF_8KB_BLOCK(addr)                        (((addr) & ~0xffULL) << 5)
 /* channel index is the index of 256B block */
 #define ADDR_OF_256B_BLOCK(channel_index)      ((channel_index) << 8)
 /* offset in 256B block */
 #define OFFSET_IN_256B_BLOCK(addr)             ((addr) & 0xffULL)
 
+#define LOOP_UMC_INST(umc_inst) for ((umc_inst) = 0; (umc_inst) < adev->umc.umc_inst_num; (umc_inst)++)
+#define LOOP_UMC_CH_INST(ch_inst) for ((ch_inst) = 0; (ch_inst) < adev->umc.channel_inst_num; (ch_inst)++)
+#define LOOP_UMC_INST_AND_CH(umc_inst, ch_inst) LOOP_UMC_INST((umc_inst)) LOOP_UMC_CH_INST((ch_inst))
+
 const uint32_t
        umc_v6_1_channel_idx_tbl[UMC_V6_1_UMC_INSTANCE_NUM][UMC_V6_1_CHANNEL_INSTANCE_NUM] = {
                {2, 18, 11, 27},        {4, 20, 13, 29},
@@ -57,41 +56,17 @@ const uint32_t
                {9, 25, 0, 16},         {15, 31, 6, 22}
 };
 
-static void umc_v6_1_enable_umc_index_mode(struct amdgpu_device *adev,
-                                          uint32_t umc_instance)
-{
-       uint32_t rsmu_umc_index;
-
-       rsmu_umc_index = RREG32_SOC15(RSMU, 0,
-                       mmRSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU);
-       rsmu_umc_index = REG_SET_FIELD(rsmu_umc_index,
-                       RSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU,
-                       RSMU_UMC_INDEX_MODE_EN, 1);
-       rsmu_umc_index = REG_SET_FIELD(rsmu_umc_index,
-                       RSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU,
-                       RSMU_UMC_INDEX_INSTANCE, umc_instance);
-       rsmu_umc_index = REG_SET_FIELD(rsmu_umc_index,
-                       RSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU,
-                       RSMU_UMC_INDEX_WREN, 1 << umc_instance);
-       WREG32_SOC15(RSMU, 0, mmRSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU,
-                               rsmu_umc_index);
-}
-
 static void umc_v6_1_disable_umc_index_mode(struct amdgpu_device *adev)
 {
        WREG32_FIELD15(RSMU, 0, RSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU,
                        RSMU_UMC_INDEX_MODE_EN, 0);
 }
 
-static uint32_t umc_v6_1_get_umc_inst(struct amdgpu_device *adev)
+static inline uint32_t get_umc_6_reg_offset(struct amdgpu_device *adev,
+                                           uint32_t umc_inst,
+                                           uint32_t ch_inst)
 {
-       uint32_t rsmu_umc_index;
-
-       rsmu_umc_index = RREG32_SOC15(RSMU, 0,
-                               mmRSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU);
-       return REG_GET_FIELD(rsmu_umc_index,
-                               RSMU_UMC_INDEX_REGISTER_NBIF_VG20_GPU,
-                               RSMU_UMC_INDEX_INSTANCE);
+       return adev->umc.channel_offs*ch_inst + UMC_6_INST_DIST*umc_inst;
 }
 
 static void umc_v6_1_query_correctable_error_count(struct amdgpu_device *adev,
@@ -105,7 +80,6 @@ static void umc_v6_1_query_correctable_error_count(struct amdgpu_device *adev,
 
        if (adev->asic_type == CHIP_ARCTURUS) {
                /* UMC 6_1_2 registers */
-
                ecc_err_cnt_sel_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmUMCCH0_0_EccErrCntSel_ARCT);
                ecc_err_cnt_addr =
@@ -114,7 +88,6 @@ static void umc_v6_1_query_correctable_error_count(struct amdgpu_device *adev,
                        SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT);
        } else {
                /* UMC 6_1_1 registers */
-
                ecc_err_cnt_sel_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmUMCCH0_0_EccErrCntSel);
                ecc_err_cnt_addr =
@@ -124,31 +97,31 @@ static void umc_v6_1_query_correctable_error_count(struct amdgpu_device *adev,
        }
 
        /* select the lower chip and check the error count */
-       ecc_err_cnt_sel = RREG32(ecc_err_cnt_sel_addr + umc_reg_offset);
+       ecc_err_cnt_sel = RREG32_PCIE((ecc_err_cnt_sel_addr + umc_reg_offset) * 4);
        ecc_err_cnt_sel = REG_SET_FIELD(ecc_err_cnt_sel, UMCCH0_0_EccErrCntSel,
                                        EccErrCntCsSel, 0);
-       WREG32(ecc_err_cnt_sel_addr + umc_reg_offset, ecc_err_cnt_sel);
-       ecc_err_cnt = RREG32(ecc_err_cnt_addr + umc_reg_offset);
+       WREG32_PCIE((ecc_err_cnt_sel_addr + umc_reg_offset) * 4, ecc_err_cnt_sel);
+       ecc_err_cnt = RREG32_PCIE((ecc_err_cnt_addr + umc_reg_offset) * 4);
        *error_count +=
                (REG_GET_FIELD(ecc_err_cnt, UMCCH0_0_EccErrCnt, EccErrCnt) -
                 UMC_V6_1_CE_CNT_INIT);
        /* clear the lower chip err count */
-       WREG32(ecc_err_cnt_addr + umc_reg_offset, UMC_V6_1_CE_CNT_INIT);
+       WREG32_PCIE((ecc_err_cnt_addr + umc_reg_offset) * 4, UMC_V6_1_CE_CNT_INIT);
 
        /* select the higher chip and check the err counter */
        ecc_err_cnt_sel = REG_SET_FIELD(ecc_err_cnt_sel, UMCCH0_0_EccErrCntSel,
                                        EccErrCntCsSel, 1);
-       WREG32(ecc_err_cnt_sel_addr + umc_reg_offset, ecc_err_cnt_sel);
-       ecc_err_cnt = RREG32(ecc_err_cnt_addr + umc_reg_offset);
+       WREG32_PCIE((ecc_err_cnt_sel_addr + umc_reg_offset) * 4, ecc_err_cnt_sel);
+       ecc_err_cnt = RREG32_PCIE((ecc_err_cnt_addr + umc_reg_offset) * 4);
        *error_count +=
                (REG_GET_FIELD(ecc_err_cnt, UMCCH0_0_EccErrCnt, EccErrCnt) -
                 UMC_V6_1_CE_CNT_INIT);
        /* clear the higher chip err count */
-       WREG32(ecc_err_cnt_addr + umc_reg_offset, UMC_V6_1_CE_CNT_INIT);
+       WREG32_PCIE((ecc_err_cnt_addr + umc_reg_offset) * 4, UMC_V6_1_CE_CNT_INIT);
 
        /* check for SRAM correctable error
          MCUMC_STATUS is a 64 bit register */
-       mc_umc_status = RREG64_UMC(mc_umc_status_addr + umc_reg_offset);
+       mc_umc_status = RREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4);
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, ErrorCodeExt) == 6 &&
            REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
            REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)
@@ -164,18 +137,16 @@ static void umc_v6_1_querry_uncorrectable_error_count(struct amdgpu_device *adev
 
        if (adev->asic_type == CHIP_ARCTURUS) {
                /* UMC 6_1_2 registers */
-
                mc_umc_status_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT);
        } else {
                /* UMC 6_1_1 registers */
-
                mc_umc_status_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_STATUST0);
        }
 
        /* check the MCUMC_STATUS */
-       mc_umc_status = RREG64_UMC(mc_umc_status_addr + umc_reg_offset);
+       mc_umc_status = RREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4);
        if ((REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1) &&
            (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Deferred) == 1 ||
            REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
@@ -185,38 +156,46 @@ static void umc_v6_1_querry_uncorrectable_error_count(struct amdgpu_device *adev
                *error_count += 1;
 }
 
-static void umc_v6_1_query_error_count(struct amdgpu_device *adev,
-                                          struct ras_err_data *err_data, uint32_t umc_reg_offset,
-                                          uint32_t channel_index)
-{
-       umc_v6_1_query_correctable_error_count(adev, umc_reg_offset,
-                                                  &(err_data->ce_count));
-       umc_v6_1_querry_uncorrectable_error_count(adev, umc_reg_offset,
-                                                 &(err_data->ue_count));
-}
-
 static void umc_v6_1_query_ras_error_count(struct amdgpu_device *adev,
                                           void *ras_error_status)
 {
-       amdgpu_umc_for_each_channel(umc_v6_1_query_error_count);
+       struct ras_err_data* err_data = (struct ras_err_data*)ras_error_status;
+
+       uint32_t umc_inst        = 0;
+       uint32_t ch_inst         = 0;
+       uint32_t umc_reg_offset  = 0;
+
+       LOOP_UMC_INST_AND_CH(umc_inst, ch_inst) {
+               umc_reg_offset = get_umc_6_reg_offset(adev,
+                                                     umc_inst,
+                                                     ch_inst);
+
+               umc_v6_1_query_correctable_error_count(adev,
+                                                      umc_reg_offset,
+                                                      &(err_data->ce_count));
+               umc_v6_1_querry_uncorrectable_error_count(adev,
+                                                         umc_reg_offset,
+                                                         &(err_data->ue_count));
+       }
 }
 
 static void umc_v6_1_query_error_address(struct amdgpu_device *adev,
                                         struct ras_err_data *err_data,
-                                        uint32_t umc_reg_offset, uint32_t channel_index)
+                                        uint32_t umc_reg_offset,
+                                        uint32_t ch_inst,
+                                        uint32_t umc_inst)
 {
        uint32_t lsb, mc_umc_status_addr;
        uint64_t mc_umc_status, err_addr, retired_page;
        struct eeprom_table_record *err_rec;
+       uint32_t channel_index = adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
 
        if (adev->asic_type == CHIP_ARCTURUS) {
                /* UMC 6_1_2 registers */
-
                mc_umc_status_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT);
        } else {
                /* UMC 6_1_1 registers */
-
                mc_umc_status_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_STATUST0);
        }
@@ -224,12 +203,12 @@ static void umc_v6_1_query_error_address(struct amdgpu_device *adev,
        /* skip error address process if -ENOMEM */
        if (!err_data->err_addr) {
                /* clear umc status */
-               WREG64_UMC(mc_umc_status_addr + umc_reg_offset, 0x0ULL);
+               WREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4, 0x0ULL);
                return;
        }
 
        err_rec = &err_data->err_addr[err_data->err_addr_cnt];
-       mc_umc_status = RREG64_UMC(mc_umc_status_addr + umc_reg_offset);
+       mc_umc_status = RREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4);
 
        /* calculate error address if ue/ce error is detected */
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
@@ -257,39 +236,53 @@ static void umc_v6_1_query_error_address(struct amdgpu_device *adev,
                        err_rec->err_type = AMDGPU_RAS_EEPROM_ERR_NON_RECOVERABLE;
                        err_rec->cu = 0;
                        err_rec->mem_channel = channel_index;
-                       err_rec->mcumc_id = umc_v6_1_get_umc_inst(adev);
+                       err_rec->mcumc_id = umc_inst;
 
                        err_data->err_addr_cnt++;
                }
        }
 
        /* clear umc status */
-       WREG64_UMC(mc_umc_status_addr + umc_reg_offset, 0x0ULL);
+       WREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4, 0x0ULL);
 }
 
 static void umc_v6_1_query_ras_error_address(struct amdgpu_device *adev,
                                             void *ras_error_status)
 {
-       amdgpu_umc_for_each_channel(umc_v6_1_query_error_address);
+       struct ras_err_data* err_data = (struct ras_err_data*)ras_error_status;
+
+       uint32_t umc_inst        = 0;
+       uint32_t ch_inst         = 0;
+       uint32_t umc_reg_offset  = 0;
+
+       LOOP_UMC_INST_AND_CH(umc_inst, ch_inst) {
+               umc_reg_offset = get_umc_6_reg_offset(adev,
+                                                     umc_inst,
+                                                     ch_inst);
+
+               umc_v6_1_query_error_address(adev,
+                                            err_data,
+                                            umc_reg_offset,
+                                            ch_inst,
+                                            umc_inst);
+       }
+
 }
 
 static void umc_v6_1_err_cnt_init_per_channel(struct amdgpu_device *adev,
-                                        struct ras_err_data *err_data,
-                                        uint32_t umc_reg_offset, uint32_t channel_index)
+                                             uint32_t umc_reg_offset)
 {
        uint32_t ecc_err_cnt_sel, ecc_err_cnt_sel_addr;
        uint32_t ecc_err_cnt_addr;
 
        if (adev->asic_type == CHIP_ARCTURUS) {
                /* UMC 6_1_2 registers */
-
                ecc_err_cnt_sel_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmUMCCH0_0_EccErrCntSel_ARCT);
                ecc_err_cnt_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmUMCCH0_0_EccErrCnt_ARCT);
        } else {
                /* UMC 6_1_1 registers */
-
                ecc_err_cnt_sel_addr =
                        SOC15_REG_OFFSET(UMC, 0, mmUMCCH0_0_EccErrCntSel);
                ecc_err_cnt_addr =
@@ -297,28 +290,38 @@ static void umc_v6_1_err_cnt_init_per_channel(struct amdgpu_device *adev,
        }
 
        /* select the lower chip and check the error count */
-       ecc_err_cnt_sel = RREG32(ecc_err_cnt_sel_addr + umc_reg_offset);
+       ecc_err_cnt_sel = RREG32_PCIE((ecc_err_cnt_sel_addr + umc_reg_offset) * 4);
        ecc_err_cnt_sel = REG_SET_FIELD(ecc_err_cnt_sel, UMCCH0_0_EccErrCntSel,
                                        EccErrCntCsSel, 0);
        /* set ce error interrupt type to APIC based interrupt */
        ecc_err_cnt_sel = REG_SET_FIELD(ecc_err_cnt_sel, UMCCH0_0_EccErrCntSel,
                                        EccErrInt, 0x1);
-       WREG32(ecc_err_cnt_sel_addr + umc_reg_offset, ecc_err_cnt_sel);
+       WREG32_PCIE((ecc_err_cnt_sel_addr + umc_reg_offset) * 4, ecc_err_cnt_sel);
        /* set error count to initial value */
-       WREG32(ecc_err_cnt_addr + umc_reg_offset, UMC_V6_1_CE_CNT_INIT);
+       WREG32_PCIE((ecc_err_cnt_addr + umc_reg_offset) * 4, UMC_V6_1_CE_CNT_INIT);
 
        /* select the higher chip and check the err counter */
        ecc_err_cnt_sel = REG_SET_FIELD(ecc_err_cnt_sel, UMCCH0_0_EccErrCntSel,
                                        EccErrCntCsSel, 1);
-       WREG32(ecc_err_cnt_sel_addr + umc_reg_offset, ecc_err_cnt_sel);
-       WREG32(ecc_err_cnt_addr + umc_reg_offset, UMC_V6_1_CE_CNT_INIT);
+       WREG32_PCIE((ecc_err_cnt_sel_addr + umc_reg_offset) * 4, ecc_err_cnt_sel);
+       WREG32_PCIE((ecc_err_cnt_addr + umc_reg_offset) * 4, UMC_V6_1_CE_CNT_INIT);
 }
 
 static void umc_v6_1_err_cnt_init(struct amdgpu_device *adev)
 {
-       void *ras_error_status = NULL;
+       uint32_t umc_inst        = 0;
+       uint32_t ch_inst         = 0;
+       uint32_t umc_reg_offset  = 0;
+
+       umc_v6_1_disable_umc_index_mode(adev);
 
-       amdgpu_umc_for_each_channel(umc_v6_1_err_cnt_init_per_channel);
+       LOOP_UMC_INST_AND_CH(umc_inst, ch_inst) {
+               umc_reg_offset = get_umc_6_reg_offset(adev,
+                                                     umc_inst,
+                                                     ch_inst);
+
+               umc_v6_1_err_cnt_init_per_channel(adev, umc_reg_offset);
+       }
 }
 
 const struct amdgpu_umc_funcs umc_v6_1_funcs = {
@@ -326,6 +329,4 @@ const struct amdgpu_umc_funcs umc_v6_1_funcs = {
        .ras_late_init = amdgpu_umc_ras_late_init,
        .query_ras_error_count = umc_v6_1_query_ras_error_count,
        .query_ras_error_address = umc_v6_1_query_ras_error_address,
-       .enable_umc_index_mode = umc_v6_1_enable_umc_index_mode,
-       .disable_umc_index_mode = umc_v6_1_disable_umc_index_mode,
 };
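The hunks above replace the `amdgpu_umc_for_each_channel()` callback macro with an explicit walk over every UMC instance/channel pair, computing a flat register offset per pair. Below is a minimal user-space sketch of that iteration pattern; the instance/channel counts and the offset stride are illustrative assumptions, not the driver's real values:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical topology, standing in for the UMC 6.1 instance/channel
 * counts; the real driver derives these from asic-specific constants. */
#define UMC_INST_NUM 8
#define UMC_CH_NUM   2

/* Flat per-channel register offset, analogous to get_umc_6_reg_offset();
 * the 0x800 stride here is an assumption for illustration only. */
static uint32_t get_reg_offset(uint32_t umc_inst, uint32_t ch_inst)
{
	return (umc_inst * UMC_CH_NUM + ch_inst) * 0x800;
}

/* Visit every instance/channel pair, as LOOP_UMC_INST_AND_CH does. */
static uint32_t count_channels(void)
{
	uint32_t umc_inst, ch_inst, visited = 0;

	for (umc_inst = 0; umc_inst < UMC_INST_NUM; umc_inst++)
		for (ch_inst = 0; ch_inst < UMC_CH_NUM; ch_inst++) {
			(void)get_reg_offset(umc_inst, ch_inst);
			visited++;
		}
	return visited;
}
```

The explicit loop keeps the per-channel helpers free of callback indirection and lets each call site pass exactly the state it needs, which is why the `err_cnt_init` and `query_error_address` paths above drop their callback parameters.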
index 652cecc..3b025a3 100644
@@ -25,6 +25,7 @@
 
 #include "amdgpu.h"
 #include "amdgpu_vcn.h"
+#include "amdgpu_pm.h"
 #include "soc15.h"
 #include "soc15d.h"
 #include "soc15_common.h"
@@ -51,6 +52,8 @@ static int vcn_v1_0_set_powergating_state(void *handle, enum amd_powergating_sta
 static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device *adev,
                                struct dpg_pause_state *new_state);
 
+static void vcn_v1_0_idle_work_handler(struct work_struct *work);
+
 /**
  * vcn_v1_0_early_init - set function pointers
  *
@@ -105,6 +108,9 @@ static int vcn_v1_0_sw_init(void *handle)
        if (r)
                return r;
 
+       /* Override the work func */
+       adev->vcn.idle_work.work.func = vcn_v1_0_idle_work_handler;
+
        if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
                const struct common_firmware_header *hdr;
                hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
@@ -1758,6 +1764,86 @@ static int vcn_v1_0_set_powergating_state(void *handle,
        return ret;
 }
 
+static void vcn_v1_0_idle_work_handler(struct work_struct *work)
+{
+       struct amdgpu_device *adev =
+               container_of(work, struct amdgpu_device, vcn.idle_work.work);
+       unsigned int fences = 0, i;
+
+       for (i = 0; i < adev->vcn.num_enc_rings; ++i)
+               fences += amdgpu_fence_count_emitted(&adev->vcn.inst->ring_enc[i]);
+
+       if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+               struct dpg_pause_state new_state;
+
+               if (fences)
+                       new_state.fw_based = VCN_DPG_STATE__PAUSE;
+               else
+                       new_state.fw_based = VCN_DPG_STATE__UNPAUSE;
+
+               if (amdgpu_fence_count_emitted(&adev->jpeg.inst->ring_dec))
+                       new_state.jpeg = VCN_DPG_STATE__PAUSE;
+               else
+                       new_state.jpeg = VCN_DPG_STATE__UNPAUSE;
+
+               adev->vcn.pause_dpg_mode(adev, &new_state);
+       }
+
+       fences += amdgpu_fence_count_emitted(&adev->jpeg.inst->ring_dec);
+       fences += amdgpu_fence_count_emitted(&adev->vcn.inst->ring_dec);
+
+       if (fences == 0) {
+               amdgpu_gfx_off_ctrl(adev, true);
+               if (adev->pm.dpm_enabled)
+                       amdgpu_dpm_enable_uvd(adev, false);
+               else
+                       amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
+                              AMD_PG_STATE_GATE);
+       } else {
+               schedule_delayed_work(&adev->vcn.idle_work, VCN_IDLE_TIMEOUT);
+       }
+}
+
+void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring)
+{
+       struct amdgpu_device *adev = ring->adev;
+       bool set_clocks = !cancel_delayed_work_sync(&adev->vcn.idle_work);
+
+       if (set_clocks) {
+               amdgpu_gfx_off_ctrl(adev, false);
+               if (adev->pm.dpm_enabled)
+                       amdgpu_dpm_enable_uvd(adev, true);
+               else
+                       amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
+                              AMD_PG_STATE_UNGATE);
+       }
+
+       if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+               struct dpg_pause_state new_state;
+               unsigned int fences = 0, i;
+
+               for (i = 0; i < adev->vcn.num_enc_rings; ++i)
+                       fences += amdgpu_fence_count_emitted(&adev->vcn.inst->ring_enc[i]);
+
+               if (fences)
+                       new_state.fw_based = VCN_DPG_STATE__PAUSE;
+               else
+                       new_state.fw_based = VCN_DPG_STATE__UNPAUSE;
+
+               if (amdgpu_fence_count_emitted(&adev->jpeg.inst->ring_dec))
+                       new_state.jpeg = VCN_DPG_STATE__PAUSE;
+               else
+                       new_state.jpeg = VCN_DPG_STATE__UNPAUSE;
+
+               if (ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC)
+                       new_state.fw_based = VCN_DPG_STATE__PAUSE;
+               else if (ring->funcs->type == AMDGPU_RING_TYPE_VCN_JPEG)
+                       new_state.jpeg = VCN_DPG_STATE__PAUSE;
+
+               adev->vcn.pause_dpg_mode(adev, &new_state);
+       }
+}
+
 static const struct amd_ip_funcs vcn_v1_0_ip_funcs = {
        .name = "vcn_v1_0",
        .early_init = vcn_v1_0_early_init,
@@ -1804,7 +1890,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
        .insert_start = vcn_v1_0_dec_ring_insert_start,
        .insert_end = vcn_v1_0_dec_ring_insert_end,
        .pad_ib = amdgpu_ring_generic_pad_ib,
-       .begin_use = amdgpu_vcn_ring_begin_use,
+       .begin_use = vcn_v1_0_ring_begin_use,
        .end_use = amdgpu_vcn_ring_end_use,
        .emit_wreg = vcn_v1_0_dec_ring_emit_wreg,
        .emit_reg_wait = vcn_v1_0_dec_ring_emit_reg_wait,
@@ -1836,7 +1922,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_enc_ring_vm_funcs = {
        .insert_nop = amdgpu_ring_insert_nop,
        .insert_end = vcn_v1_0_enc_ring_insert_end,
        .pad_ib = amdgpu_ring_generic_pad_ib,
-       .begin_use = amdgpu_vcn_ring_begin_use,
+       .begin_use = vcn_v1_0_ring_begin_use,
        .end_use = amdgpu_vcn_ring_end_use,
        .emit_wreg = vcn_v1_0_enc_ring_emit_wreg,
        .emit_reg_wait = vcn_v1_0_enc_ring_emit_reg_wait,
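The new `vcn_v1_0_idle_work_handler()` above sums the fences still outstanding on every VCN ring and only power-gates the block when the total reaches zero, rescheduling itself otherwise. A toy model of that decision, with illustrative ring counts and fence snapshots (not the driver's data structures):

```c
#include <assert.h>

#define NUM_ENC_RINGS 2

enum idle_action { POWER_GATE, RESCHEDULE };

/* Sample fence snapshots used by the assertions below. */
static const int no_fences[NUM_ENC_RINGS] = { 0, 0 };
static const int enc_busy[NUM_ENC_RINGS] = { 1, 0 };

/* Mirror of the handler's core logic: total the emitted-but-unsignaled
 * fences across the enc, dec and jpeg rings; gate only when fully idle. */
static enum idle_action idle_decision(const int enc_fences[NUM_ENC_RINGS],
				      int dec_fences, int jpeg_fences)
{
	int i, fences = 0;

	for (i = 0; i < NUM_ENC_RINGS; i++)
		fences += enc_fences[i];
	fences += dec_fences;
	fences += jpeg_fences;

	return fences == 0 ? POWER_GATE : RESCHEDULE;
}
```

In the driver the gate path additionally re-enables GFXOFF and drops the UVD DPM reference, while the busy path re-arms the delayed work with `VCN_IDLE_TIMEOUT`.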
index 2a497a7..f67d739 100644
@@ -24,6 +24,8 @@
 #ifndef __VCN_V1_0_H__
 #define __VCN_V1_0_H__
 
+void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring);
+
 extern const struct amdgpu_ip_block_version vcn_v1_0_ip_block;
 
 #endif
index f67fca3..4ea8e20 100644
@@ -29,6 +29,7 @@
 #include "soc15.h"
 #include "soc15d.h"
 #include "vcn_v2_0.h"
+#include "mmsch_v1_0.h"
 
 #include "vcn/vcn_2_5_offset.h"
 #include "vcn/vcn_2_5_sh_mask.h"
@@ -54,6 +55,7 @@ static void vcn_v2_5_set_enc_ring_funcs(struct amdgpu_device *adev);
 static void vcn_v2_5_set_irq_funcs(struct amdgpu_device *adev);
 static int vcn_v2_5_set_powergating_state(void *handle,
                                enum amd_powergating_state state);
+static int vcn_v2_5_sriov_start(struct amdgpu_device *adev);
 
 static int amdgpu_ih_clientid_vcns[] = {
        SOC15_IH_CLIENTID_VCN,
@@ -88,7 +90,13 @@ static int vcn_v2_5_early_init(void *handle)
        } else
                adev->vcn.num_vcn_inst = 1;
 
-       adev->vcn.num_enc_rings = 2;
+       if (amdgpu_sriov_vf(adev)) {
+               adev->vcn.num_vcn_inst = 2;
+               adev->vcn.harvest_config = 0;
+               adev->vcn.num_enc_rings = 1;
+       } else {
+               adev->vcn.num_enc_rings = 2;
+       }
 
        vcn_v2_5_set_dec_ring_funcs(adev);
        vcn_v2_5_set_enc_ring_funcs(adev);
@@ -176,7 +184,9 @@ static int vcn_v2_5_sw_init(void *handle)
 
                ring = &adev->vcn.inst[j].ring_dec;
                ring->use_doorbell = true;
-               ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 8*j;
+
+               ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) +
+                               (amdgpu_sriov_vf(adev) ? 2*j : 8*j);
                sprintf(ring->name, "vcn_dec_%d", j);
                r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[j].irq, 0);
                if (r)
@@ -185,7 +195,10 @@ static int vcn_v2_5_sw_init(void *handle)
                for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
                        ring = &adev->vcn.inst[j].ring_enc[i];
                        ring->use_doorbell = true;
-                       ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 2 + i + 8*j;
+
+                       ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) +
+                                       (amdgpu_sriov_vf(adev) ? (1 + i + 2*j) : (2 + i + 8*j));
+
                        sprintf(ring->name, "vcn_enc_%d.%d", j, i);
                        r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[j].irq, 0);
                        if (r)
@@ -193,6 +206,12 @@ static int vcn_v2_5_sw_init(void *handle)
                }
        }
 
+       if (amdgpu_sriov_vf(adev)) {
+               r = amdgpu_virt_alloc_mm_table(adev);
+               if (r)
+                       return r;
+       }
+
        return 0;
 }
 
@@ -208,6 +227,9 @@ static int vcn_v2_5_sw_fini(void *handle)
        int r;
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+       if (amdgpu_sriov_vf(adev))
+               amdgpu_virt_free_mm_table(adev);
+
        r = amdgpu_vcn_suspend(adev);
        if (r)
                return r;
@@ -228,25 +250,37 @@ static int vcn_v2_5_hw_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
        struct amdgpu_ring *ring;
-       int i, j, r;
+       int i, j, r = 0;
+
+       if (amdgpu_sriov_vf(adev))
+               r = vcn_v2_5_sriov_start(adev);
 
        for (j = 0; j < adev->vcn.num_vcn_inst; ++j) {
                if (adev->vcn.harvest_config & (1 << j))
                        continue;
-               ring = &adev->vcn.inst[j].ring_dec;
 
-               adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell,
-                                                    ring->doorbell_index, j);
+               if (amdgpu_sriov_vf(adev)) {
+                       adev->vcn.inst[j].ring_enc[0].sched.ready = true;
+                       adev->vcn.inst[j].ring_enc[1].sched.ready = false;
+                       adev->vcn.inst[j].ring_enc[2].sched.ready = false;
+                       adev->vcn.inst[j].ring_dec.sched.ready = true;
+               } else {
-               r = amdgpu_ring_test_helper(ring);
-               if (r)
-                       goto done;
+                       ring = &adev->vcn.inst[j].ring_dec;
+
+                       adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell,
+                                                    ring->doorbell_index, j);
 
-               for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
-                       ring = &adev->vcn.inst[j].ring_enc[i];
                        r = amdgpu_ring_test_helper(ring);
                        if (r)
                                goto done;
+
+                       for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+                               ring = &adev->vcn.inst[j].ring_enc[i];
+                               r = amdgpu_ring_test_helper(ring);
+                               if (r)
+                                       goto done;
+                       }
                }
        }
 
@@ -741,6 +775,204 @@ static int vcn_v2_5_start(struct amdgpu_device *adev)
        return 0;
 }
 
+static int vcn_v2_5_mmsch_start(struct amdgpu_device *adev,
+                               struct amdgpu_mm_table *table)
+{
+       uint32_t data = 0, loop = 0, size = 0;
+       uint64_t addr = table->gpu_addr;
+       struct mmsch_v1_1_init_header *header = NULL;
+
+       header = (struct mmsch_v1_1_init_header *)table->cpu_addr;
+       size = header->total_size;
+
+       /*
+        * 1, write to vce_mmsch_vf_ctx_addr_lo/hi register with GPU mc addr of
+        *  memory descriptor location
+        */
+       WREG32_SOC15(UVD, 0, mmMMSCH_VF_CTX_ADDR_LO, lower_32_bits(addr));
+       WREG32_SOC15(UVD, 0, mmMMSCH_VF_CTX_ADDR_HI, upper_32_bits(addr));
+
+       /* 2, update vmid of descriptor */
+       data = RREG32_SOC15(UVD, 0, mmMMSCH_VF_VMID);
+       data &= ~MMSCH_VF_VMID__VF_CTX_VMID_MASK;
+       /* use domain0 for MM scheduler */
+       data |= (0 << MMSCH_VF_VMID__VF_CTX_VMID__SHIFT);
+       WREG32_SOC15(UVD, 0, mmMMSCH_VF_VMID, data);
+
+       /* 3, notify mmsch about the size of this descriptor */
+       WREG32_SOC15(UVD, 0, mmMMSCH_VF_CTX_SIZE, size);
+
+       /* 4, set resp to zero */
+       WREG32_SOC15(UVD, 0, mmMMSCH_VF_MAILBOX_RESP, 0);
+
+       /*
+        * 5, kick off the initialization and wait until
+        * VCE_MMSCH_VF_MAILBOX_RESP becomes non-zero
+        */
+       WREG32_SOC15(UVD, 0, mmMMSCH_VF_MAILBOX_HOST, 0x10000001);
+
+       data = RREG32_SOC15(UVD, 0, mmMMSCH_VF_MAILBOX_RESP);
+       loop = 10;
+       while ((data & 0x10000002) != 0x10000002) {
+               udelay(100);
+               data = RREG32_SOC15(UVD, 0, mmMMSCH_VF_MAILBOX_RESP);
+               loop--;
+               if (!loop)
+                       break;
+       }
+
+       if (!loop) {
+               dev_err(adev->dev,
+                       "failed to init MMSCH, mmMMSCH_VF_MAILBOX_RESP = %x\n",
+                       data);
+               return -EBUSY;
+       }
+
+       return 0;
+}
+
+static int vcn_v2_5_sriov_start(struct amdgpu_device *adev)
+{
+       struct amdgpu_ring *ring;
+       uint32_t offset, size, tmp, i, rb_bufsz;
+       uint32_t table_size = 0;
+       struct mmsch_v1_0_cmd_direct_write direct_wt = { { 0 } };
+       struct mmsch_v1_0_cmd_direct_read_modify_write direct_rd_mod_wt = { { 0 } };
+       struct mmsch_v1_0_cmd_direct_polling direct_poll = { { 0 } };
+       struct mmsch_v1_0_cmd_end end = { { 0 } };
+       uint32_t *init_table = adev->virt.mm_table.cpu_addr;
+       struct mmsch_v1_1_init_header *header = (struct mmsch_v1_1_init_header *)init_table;
+
+       direct_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_WRITE;
+       direct_rd_mod_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_READ_MODIFY_WRITE;
+       direct_poll.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_POLLING;
+       end.cmd_header.command_type = MMSCH_COMMAND__END;
+
+       header->version = MMSCH_VERSION;
+       header->total_size = sizeof(struct mmsch_v1_1_init_header) >> 2;
+       init_table += header->total_size;
+
+       for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+               header->eng[i].table_offset = header->total_size;
+               header->eng[i].init_status = 0;
+               header->eng[i].table_size = 0;
+
+               table_size = 0;
+
+               MMSCH_V1_0_INSERT_DIRECT_RD_MOD_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_STATUS),
+                       ~UVD_STATUS__UVD_BUSY, UVD_STATUS__UVD_BUSY);
+
+               size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.fw->size + 4);
+               /* mc resume */
+               if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+                       MMSCH_V1_0_INSERT_DIRECT_WT(
+                               SOC15_REG_OFFSET(UVD, i,
+                                       mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
+                               adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + i].tmr_mc_addr_lo);
+                       MMSCH_V1_0_INSERT_DIRECT_WT(
+                               SOC15_REG_OFFSET(UVD, i,
+                                       mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
+                               adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + i].tmr_mc_addr_hi);
+                       offset = 0;
+                       MMSCH_V1_0_INSERT_DIRECT_WT(
+                               SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET0), 0);
+               } else {
+                       MMSCH_V1_0_INSERT_DIRECT_WT(
+                               SOC15_REG_OFFSET(UVD, i,
+                                       mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
+                               lower_32_bits(adev->vcn.inst[i].gpu_addr));
+                       MMSCH_V1_0_INSERT_DIRECT_WT(
+                               SOC15_REG_OFFSET(UVD, i,
+                                       mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
+                               upper_32_bits(adev->vcn.inst[i].gpu_addr));
+                       offset = size;
+                       MMSCH_V1_0_INSERT_DIRECT_WT(
+                               SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET0),
+                               AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
+               }
+
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_SIZE0),
+                       size);
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i,
+                               mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW),
+                       lower_32_bits(adev->vcn.inst[i].gpu_addr + offset));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i,
+                               mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH),
+                       upper_32_bits(adev->vcn.inst[i].gpu_addr + offset));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET1),
+                       0);
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_SIZE1),
+                       AMDGPU_VCN_STACK_SIZE);
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i,
+                               mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW),
+                       lower_32_bits(adev->vcn.inst[i].gpu_addr + offset +
+                               AMDGPU_VCN_STACK_SIZE));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i,
+                               mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH),
+                       upper_32_bits(adev->vcn.inst[i].gpu_addr + offset +
+                               AMDGPU_VCN_STACK_SIZE));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET2),
+                       0);
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_SIZE2),
+                       AMDGPU_VCN_CONTEXT_SIZE);
+
+               ring = &adev->vcn.inst[i].ring_enc[0];
+               ring->wptr = 0;
+
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_RB_BASE_LO),
+                       lower_32_bits(ring->gpu_addr));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_RB_BASE_HI),
+                       upper_32_bits(ring->gpu_addr));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_RB_SIZE),
+                       ring->ring_size / 4);
+
+               ring = &adev->vcn.inst[i].ring_dec;
+               ring->wptr = 0;
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i,
+                               mmUVD_LMI_RBC_RB_64BIT_BAR_LOW),
+                       lower_32_bits(ring->gpu_addr));
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i,
+                               mmUVD_LMI_RBC_RB_64BIT_BAR_HIGH),
+                       upper_32_bits(ring->gpu_addr));
+
+               /* force RBC into idle state */
+               rb_bufsz = order_base_2(ring->ring_size);
+               tmp = REG_SET_FIELD(0, UVD_RBC_RB_CNTL, RB_BUFSZ, rb_bufsz);
+               tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_BLKSZ, 1);
+               tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_FETCH, 1);
+               tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_UPDATE, 1);
+               tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_RPTR_WR_EN, 1);
+               MMSCH_V1_0_INSERT_DIRECT_WT(
+                       SOC15_REG_OFFSET(UVD, i, mmUVD_RBC_RB_CNTL), tmp);
+
+               /* add end packet */
+               memcpy((void *)init_table, &end, sizeof(struct mmsch_v1_0_cmd_end));
+               table_size += sizeof(struct mmsch_v1_0_cmd_end) / 4;
+               init_table += sizeof(struct mmsch_v1_0_cmd_end) / 4;
+
+               /* refine header */
+               header->eng[i].table_size = table_size;
+               header->total_size += table_size;
+       }
+
+       return vcn_v2_5_mmsch_start(adev, &adev->virt.mm_table);
+}
+
 static int vcn_v2_5_stop(struct amdgpu_device *adev)
 {
        uint32_t tmp;
@@ -1048,6 +1280,9 @@ static int vcn_v2_5_set_clockgating_state(void *handle,
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
        bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
 
+       if (amdgpu_sriov_vf(adev))
+               return 0;
+
        if (enable) {
                if (vcn_v2_5_is_idle(handle))
                        return -EBUSY;
@@ -1065,6 +1300,9 @@ static int vcn_v2_5_set_powergating_state(void *handle,
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
        int ret;
 
+       if (amdgpu_sriov_vf(adev))
+               return 0;
+
        if(state == adev->vcn.cur_state)
                return 0;
 
index 5cb7e23..d9e3310 100644
@@ -234,16 +234,9 @@ static int vega10_ih_irq_init(struct amdgpu_device *adev)
        WREG32_SOC15(OSSSYS, 0, mmIH_RB_BASE_HI, (ih->gpu_addr >> 40) & 0xff);
 
        ih_rb_cntl = RREG32_SOC15(OSSSYS, 0, mmIH_RB_CNTL);
-       ih_chicken = RREG32_SOC15(OSSSYS, 0, mmIH_CHICKEN);
        ih_rb_cntl = vega10_ih_rb_cntl(ih, ih_rb_cntl);
-       if (adev->irq.ih.use_bus_addr) {
-               ih_chicken = REG_SET_FIELD(ih_chicken, IH_CHICKEN, MC_SPACE_GPA_ENABLE, 1);
-       } else {
-               ih_chicken = REG_SET_FIELD(ih_chicken, IH_CHICKEN, MC_SPACE_FBPA_ENABLE, 1);
-       }
        ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, RPTR_REARM,
                                   !!adev->irq.msi_enabled);
-
        if (amdgpu_sriov_vf(adev)) {
                if (psp_reg_program(&adev->psp, PSP_REG_IH_RB_CNTL, ih_rb_cntl)) {
                        DRM_ERROR("PSP program IH_RB_CNTL failed!\n");
@@ -253,10 +246,19 @@ static int vega10_ih_irq_init(struct amdgpu_device *adev)
                WREG32_SOC15(OSSSYS, 0, mmIH_RB_CNTL, ih_rb_cntl);
        }
 
-       if ((adev->asic_type == CHIP_ARCTURUS
-               && adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT)
-               || adev->asic_type == CHIP_RENOIR)
+       if ((adev->asic_type == CHIP_ARCTURUS &&
+            adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) ||
+           adev->asic_type == CHIP_RENOIR) {
+               ih_chicken = RREG32_SOC15(OSSSYS, 0, mmIH_CHICKEN);
+               if (adev->irq.ih.use_bus_addr) {
+                       ih_chicken = REG_SET_FIELD(ih_chicken, IH_CHICKEN,
+                                                  MC_SPACE_GPA_ENABLE, 1);
+               } else {
+                       ih_chicken = REG_SET_FIELD(ih_chicken, IH_CHICKEN,
+                                                  MC_SPACE_FBPA_ENABLE, 1);
+               }
                WREG32_SOC15(OSSSYS, 0, mmIH_CHICKEN, ih_chicken);
+       }
 
        /* set the writeback address whether it's enabled or not */
        WREG32_SOC15(OSSSYS, 0, mmIH_RB_WPTR_ADDR_LO,
index b6ba069..3f0300e 100644
@@ -42,6 +42,7 @@
 
 static long kfd_ioctl(struct file *, unsigned int, unsigned long);
 static int kfd_open(struct inode *, struct file *);
+static int kfd_release(struct inode *, struct file *);
 static int kfd_mmap(struct file *, struct vm_area_struct *);
 
 static const char kfd_dev_name[] = "kfd";
@@ -51,6 +52,7 @@ static const struct file_operations kfd_fops = {
        .unlocked_ioctl = kfd_ioctl,
        .compat_ioctl = compat_ptr_ioctl,
        .open = kfd_open,
+       .release = kfd_release,
        .mmap = kfd_mmap,
 };
 
@@ -124,8 +126,13 @@ static int kfd_open(struct inode *inode, struct file *filep)
        if (IS_ERR(process))
                return PTR_ERR(process);
 
-       if (kfd_is_locked())
+       if (kfd_is_locked()) {
+               kfd_unref_process(process);
                return -EAGAIN;
+       }
+
+       /* filep now owns the reference returned by kfd_create_process */
+       filep->private_data = process;
 
        dev_dbg(kfd_device, "process %d opened, compat mode (32 bit) - %d\n",
                process->pasid, process->is_32bit_user_mode);
@@ -133,6 +140,16 @@ static int kfd_open(struct inode *inode, struct file *filep)
        return 0;
 }
 
+static int kfd_release(struct inode *inode, struct file *filep)
+{
+       struct kfd_process *process = filep->private_data;
+
+       if (process)
+               kfd_unref_process(process);
+
+       return 0;
+}
+
 static int kfd_ioctl_get_version(struct file *filep, struct kfd_process *p,
                                        void *data)
 {
@@ -1801,9 +1818,14 @@ static long kfd_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 
        dev_dbg(kfd_device, "ioctl cmd 0x%x (#0x%x), arg 0x%lx\n", cmd, nr, arg);
 
-       process = kfd_get_process(current);
-       if (IS_ERR(process)) {
-               dev_dbg(kfd_device, "no process\n");
+       /* Get the process struct from the filep. Only the process
+        * that opened /dev/kfd can use the file descriptor. Child
+        * processes need to create their own KFD device context.
+        */
+       process = filep->private_data;
+       if (process->lead_thread != current->group_leader) {
+               dev_dbg(kfd_device, "Using KFD FD in wrong process\n");
+               retcode = -EBADF;
                goto err_i1;
        }
 
index 15c5230..511712c 100644
@@ -93,7 +93,7 @@ void kfd_debugfs_init(void)
                            kfd_debugfs_hqds_by_device, &kfd_debugfs_fops);
        debugfs_create_file("rls", S_IFREG | 0444, debugfs_root,
                            kfd_debugfs_rls_by_device, &kfd_debugfs_fops);
-       debugfs_create_file("hang_hws", S_IFREG | 0644, debugfs_root,
+       debugfs_create_file("hang_hws", S_IFREG | 0200, debugfs_root,
                            NULL, &kfd_debugfs_hang_hws_fops);
 }
 
index 209bfc8..2a9e401 100644
@@ -728,6 +728,9 @@ int kgd2kfd_pre_reset(struct kfd_dev *kfd)
 {
        if (!kfd->init_complete)
                return 0;
+
+       kfd->dqm->ops.pre_reset(kfd->dqm);
+
        kgd2kfd_suspend(kfd);
 
        kfd_signal_reset_event(kfd);
@@ -822,6 +825,21 @@ dqm_start_error:
        return err;
 }
 
+static inline void kfd_queue_work(struct workqueue_struct *wq,
+                                 struct work_struct *work)
+{
+       int cpu, new_cpu;
+
+       cpu = new_cpu = smp_processor_id();
+       do {
+               new_cpu = cpumask_next(new_cpu, cpu_online_mask) % nr_cpu_ids;
+               if (cpu_to_node(new_cpu) == numa_node_id())
+                       break;
+       } while (cpu != new_cpu);
+
+       queue_work_on(new_cpu, wq, work);
+}
+
 /* This is called directly from KGD at ISR. */
 void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
 {
@@ -844,7 +862,7 @@ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
                                   patched_ihre, &is_patched)
            && enqueue_ih_ring_entry(kfd,
                                     is_patched ? patched_ihre : ih_ring_entry))
-               queue_work(kfd->ih_wq, &kfd->interrupt_work);
+               kfd_queue_work(kfd->ih_wq, &kfd->interrupt_work);
 
        spin_unlock_irqrestore(&kfd->interrupt_lock, flags);
 }
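The new `kfd_queue_work()` above spreads interrupt work across CPUs: starting from the current CPU, it round-robins through the online mask until it finds another CPU on the same NUMA node, falling back to the current CPU if none exists. A user-space sketch of that selection with a hard-coded toy topology (the CPU-to-node map is illustrative; the kernel uses `cpumask_next()` and `cpu_to_node()`):

```c
#include <assert.h>

/* Toy topology: CPU -> NUMA node map, two nodes of four CPUs each. */
#define NR_CPUS 8
static const int cpu_node[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

/* Pick the next CPU on the same node as `cpu`, wrapping around the
 * mask, as kfd_queue_work() does. If no other CPU shares the node,
 * the loop wraps back to `cpu` itself. */
static int pick_same_node_cpu(int cpu)
{
	int new_cpu = cpu;

	do {
		new_cpu = (new_cpu + 1) % NR_CPUS;
		if (cpu_node[new_cpu] == cpu_node[cpu])
			break;
	} while (new_cpu != cpu);

	return new_cpu;
}
```

Queueing the work on a same-node neighbour keeps the handler's data accesses NUMA-local while moving it off the CPU that is busy servicing the interrupt.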
index f7f6df4..d7eb6ac 100644
@@ -930,7 +930,6 @@ static void uninitialize(struct device_queue_manager *dqm)
        for (i = 0 ; i < KFD_MQD_TYPE_MAX ; i++)
                kfree(dqm->mqd_mgrs[i]);
        mutex_destroy(&dqm->lock_hidden);
-       kfd_gtt_sa_free(dqm->dev, dqm->pipeline_mem);
 }
 
 static int start_nocpsch(struct device_queue_manager *dqm)
@@ -947,12 +946,19 @@ static int start_nocpsch(struct device_queue_manager *dqm)
 static int stop_nocpsch(struct device_queue_manager *dqm)
 {
        if (dqm->dev->device_info->asic_family == CHIP_HAWAII)
-               pm_uninit(&dqm->packets);
+               pm_uninit(&dqm->packets, false);
        dqm->sched_running = false;
 
        return 0;
 }
 
+static void pre_reset(struct device_queue_manager *dqm)
+{
+       dqm_lock(dqm);
+       dqm->is_resetting = true;
+       dqm_unlock(dqm);
+}
+
 static int allocate_sdma_queue(struct device_queue_manager *dqm,
                                struct queue *q)
 {
@@ -1100,6 +1106,7 @@ static int start_cpsch(struct device_queue_manager *dqm)
        dqm_lock(dqm);
        /* clear hang status when driver try to start the hw scheduler */
        dqm->is_hws_hang = false;
+       dqm->is_resetting = false;
        dqm->sched_running = true;
        execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
        dqm_unlock(dqm);
@@ -1107,20 +1114,24 @@ static int start_cpsch(struct device_queue_manager *dqm)
        return 0;
 fail_allocate_vidmem:
 fail_set_sched_resources:
-       pm_uninit(&dqm->packets);
+       pm_uninit(&dqm->packets, false);
 fail_packet_manager_init:
        return retval;
 }
 
 static int stop_cpsch(struct device_queue_manager *dqm)
 {
+       bool hanging;
+
        dqm_lock(dqm);
-       unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
+       if (!dqm->is_hws_hang)
+               unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
+       hanging = dqm->is_hws_hang || dqm->is_resetting;
        dqm->sched_running = false;
        dqm_unlock(dqm);
 
        kfd_gtt_sa_free(dqm->dev, dqm->fence_mem);
-       pm_uninit(&dqm->packets);
+       pm_uninit(&dqm->packets, hanging);
 
        return 0;
 }
@@ -1352,8 +1363,17 @@ static int unmap_queues_cpsch(struct device_queue_manager *dqm,
        /* should be timed out */
        retval = amdkfd_fence_wait_timeout(dqm->fence_addr, KFD_FENCE_COMPLETED,
                                queue_preemption_timeout_ms);
-       if (retval)
+       if (retval) {
+               pr_err("The cp might be in an unrecoverable state due to an unsuccessful queues preemption\n");
+               dqm->is_hws_hang = true;
+               /* It's possible we're detecting a HWS hang in the
+                * middle of a GPU reset. No need to schedule another
+                * reset in this case.
+                */
+               if (!dqm->is_resetting)
+                       schedule_work(&dqm->hw_exception_work);
                return retval;
+       }
 
        pm_release_ib(&dqm->packets);
        dqm->active_runlist = false;
@@ -1371,12 +1391,8 @@ static int execute_queues_cpsch(struct device_queue_manager *dqm,
        if (dqm->is_hws_hang)
                return -EIO;
        retval = unmap_queues_cpsch(dqm, filter, filter_param);
-       if (retval) {
-               pr_err("The cp might be in an unrecoverable state due to an unsuccessful queues preemption\n");
-               dqm->is_hws_hang = true;
-               schedule_work(&dqm->hw_exception_work);
+       if (retval)
                return retval;
-       }
 
        return map_queues_cpsch(dqm);
 }
@@ -1770,6 +1786,7 @@ struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev)
                dqm->ops.initialize = initialize_cpsch;
                dqm->ops.start = start_cpsch;
                dqm->ops.stop = stop_cpsch;
+               dqm->ops.pre_reset = pre_reset;
                dqm->ops.destroy_queue = destroy_queue_cpsch;
                dqm->ops.update_queue = update_queue;
                dqm->ops.register_process = register_process;
@@ -1788,6 +1805,7 @@ struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev)
                /* initialize dqm for no cp scheduling */
                dqm->ops.start = start_nocpsch;
                dqm->ops.stop = stop_nocpsch;
+               dqm->ops.pre_reset = pre_reset;
                dqm->ops.create_queue = create_queue_nocpsch;
                dqm->ops.destroy_queue = destroy_queue_nocpsch;
                dqm->ops.update_queue = update_queue;
index a8c37e6..871d3b6 100644
@@ -104,6 +104,7 @@ struct device_queue_manager_ops {
        int     (*initialize)(struct device_queue_manager *dqm);
        int     (*start)(struct device_queue_manager *dqm);
        int     (*stop)(struct device_queue_manager *dqm);
+       void    (*pre_reset)(struct device_queue_manager *dqm);
        void    (*uninitialize)(struct device_queue_manager *dqm);
        int     (*create_kernel_queue)(struct device_queue_manager *dqm,
                                        struct kernel_queue *kq,
@@ -190,7 +191,6 @@ struct device_queue_manager {
        /* the pasid mapping for each kfd vmid */
        uint16_t                vmid_pasid[VMID_NUM];
        uint64_t                pipelines_addr;
-       struct kfd_mem_obj      *pipeline_mem;
        uint64_t                fence_gpu_addr;
        unsigned int            *fence_addr;
        struct kfd_mem_obj      *fence_mem;
@@ -199,6 +199,7 @@ struct device_queue_manager {
 
        /* hw exception  */
        bool                    is_hws_hang;
+       bool                    is_resetting;
        struct work_struct      hw_exception_work;
        struct kfd_mem_obj      hiq_sdma_mqd;
        bool                    sched_running;
index 2d56dc5..bae7064 100644
@@ -195,9 +195,9 @@ err_get_kernel_doorbell:
 }
 
 /* Uninitialize a kernel queue and free all its memory usages. */
-static void kq_uninitialize(struct kernel_queue *kq)
+static void kq_uninitialize(struct kernel_queue *kq, bool hanging)
 {
-       if (kq->queue->properties.type == KFD_QUEUE_TYPE_HIQ)
+       if (kq->queue->properties.type == KFD_QUEUE_TYPE_HIQ && !hanging)
                kq->mqd_mgr->destroy_mqd(kq->mqd_mgr,
                                        kq->queue->mqd,
                                        KFD_PREEMPT_TYPE_WAVEFRONT_RESET,
@@ -337,9 +337,9 @@ struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
        return NULL;
 }
 
-void kernel_queue_uninit(struct kernel_queue *kq)
+void kernel_queue_uninit(struct kernel_queue *kq, bool hanging)
 {
-       kq_uninitialize(kq);
+       kq_uninitialize(kq, hanging);
        kfree(kq);
 }
 
index 6cabed0..dc406e6 100644
@@ -264,10 +264,10 @@ int pm_init(struct packet_manager *pm, struct device_queue_manager *dqm)
        return 0;
 }
 
-void pm_uninit(struct packet_manager *pm)
+void pm_uninit(struct packet_manager *pm, bool hanging)
 {
        mutex_destroy(&pm->lock);
-       kernel_queue_uninit(pm->priv_queue);
+       kernel_queue_uninit(pm->priv_queue, hanging);
 }
 
 int pm_send_set_resources(struct packet_manager *pm,
index fc61b5e..6af1b58 100644
@@ -883,7 +883,7 @@ struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev);
 void device_queue_manager_uninit(struct device_queue_manager *dqm);
 struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
                                        enum kfd_queue_type type);
-void kernel_queue_uninit(struct kernel_queue *kq);
+void kernel_queue_uninit(struct kernel_queue *kq, bool hanging);
 int kfd_process_vm_fault(struct device_queue_manager *dqm, unsigned int pasid);
 
 /* Process Queue Manager */
@@ -972,7 +972,7 @@ extern const struct packet_manager_funcs kfd_vi_pm_funcs;
 extern const struct packet_manager_funcs kfd_v9_pm_funcs;
 
 int pm_init(struct packet_manager *pm, struct device_queue_manager *dqm);
-void pm_uninit(struct packet_manager *pm);
+void pm_uninit(struct packet_manager *pm, bool hanging);
 int pm_send_set_resources(struct packet_manager *pm,
                                struct scheduling_resources *res);
 int pm_send_runlist(struct packet_manager *pm, struct list_head *dqm_queues);
index 8276601..536a153 100644
@@ -324,6 +324,8 @@ struct kfd_process *kfd_create_process(struct file *filep)
                                        (int)process->lead_thread->pid);
        }
 out:
+       if (!IS_ERR(process))
+               kref_get(&process->ref);
        mutex_unlock(&kfd_processes_mutex);
 
        return process;
index 1152490..31fcd1b 100644
@@ -374,7 +374,7 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
                /* destroy kernel queue (DIQ) */
                dqm = pqn->kq->dev->dqm;
                dqm->ops.destroy_kernel_queue(dqm, pqn->kq, &pdd->qpd);
-               kernel_queue_uninit(pqn->kq);
+               kernel_queue_uninit(pqn->kq, false);
        }
 
        if (pqn->q) {
index 69bd062..203c823 100644
@@ -486,6 +486,10 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
                        dev->node_props.num_sdma_engines);
        sysfs_show_32bit_prop(buffer, "num_sdma_xgmi_engines",
                        dev->node_props.num_sdma_xgmi_engines);
+       sysfs_show_32bit_prop(buffer, "num_sdma_queues_per_engine",
+                       dev->node_props.num_sdma_queues_per_engine);
+       sysfs_show_32bit_prop(buffer, "num_cp_queues",
+                       dev->node_props.num_cp_queues);
 
        if (dev->gpu) {
                log_max_watch_addr =
@@ -1309,9 +1313,12 @@ int kfd_topology_add_device(struct kfd_dev *gpu)
        dev->node_props.num_sdma_engines = gpu->device_info->num_sdma_engines;
        dev->node_props.num_sdma_xgmi_engines =
                                gpu->device_info->num_xgmi_sdma_engines;
+       dev->node_props.num_sdma_queues_per_engine =
+                               gpu->device_info->num_sdma_queues_per_engine;
        dev->node_props.num_gws = (hws_gws_support &&
                dev->gpu->dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS) ?
                amdgpu_amdkfd_get_num_gws(dev->gpu->kgd) : 0;
+       dev->node_props.num_cp_queues = get_queues_num(dev->gpu->dqm);
 
        kfd_fill_mem_clk_max_info(dev);
        kfd_fill_iolink_non_crat_info(dev);
index 15843e0..74e9b16 100644
@@ -81,6 +81,8 @@ struct kfd_node_properties {
        int32_t  drm_render_minor;
        uint32_t num_sdma_engines;
        uint32_t num_sdma_xgmi_engines;
+       uint32_t num_sdma_queues_per_engine;
+       uint32_t num_cp_queues;
        char name[KFD_TOPOLOGY_PUBLIC_NAME_SIZE];
 };
 
index 096db86..87858bc 100644
@@ -6,7 +6,7 @@ config DRM_AMD_DC
        bool "AMD DC - Enable new display engine"
        default y
        select SND_HDA_COMPONENT if SND_HDA_CORE
-       select DRM_AMD_DC_DCN if X86 && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
+       select DRM_AMD_DC_DCN if (X86 || PPC64) && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
        help
          Choose this option if you want to use the new display engine
          support for AMDGPU. This adds required support for Vega and
index f2db400..76673c7 100644
@@ -98,6 +98,12 @@ MODULE_FIRMWARE(FIRMWARE_RENOIR_DMUB);
 #define FIRMWARE_RAVEN_DMCU            "amdgpu/raven_dmcu.bin"
 MODULE_FIRMWARE(FIRMWARE_RAVEN_DMCU);
 
+/* Number of bytes in PSP header for firmware. */
+#define PSP_HEADER_BYTES 0x100
+
+/* Number of bytes in PSP footer for firmware. */
+#define PSP_FOOTER_BYTES 0x100
+
 /**
  * DOC: overview
  *
@@ -741,28 +747,27 @@ void amdgpu_dm_audio_eld_notify(struct amdgpu_device *adev, int pin)
 
 static int dm_dmub_hw_init(struct amdgpu_device *adev)
 {
-       const unsigned int psp_header_bytes = 0x100;
-       const unsigned int psp_footer_bytes = 0x100;
        const struct dmcub_firmware_header_v1_0 *hdr;
        struct dmub_srv *dmub_srv = adev->dm.dmub_srv;
+       struct dmub_srv_fb_info *fb_info = adev->dm.dmub_fb_info;
        const struct firmware *dmub_fw = adev->dm.dmub_fw;
        struct dmcu *dmcu = adev->dm.dc->res_pool->dmcu;
        struct abm *abm = adev->dm.dc->res_pool->abm;
-       struct dmub_srv_region_params region_params;
-       struct dmub_srv_region_info region_info;
-       struct dmub_srv_fb_params fb_params;
-       struct dmub_srv_fb_info fb_info;
        struct dmub_srv_hw_params hw_params;
        enum dmub_status status;
        const unsigned char *fw_inst_const, *fw_bss_data;
-       uint32_t i;
-       int r;
+       uint32_t i, fw_inst_const_size, fw_bss_data_size;
        bool has_hw_support;
 
        if (!dmub_srv)
                /* DMUB isn't supported on the ASIC. */
                return 0;
 
+       if (!fb_info) {
+               DRM_ERROR("No framebuffer info for DMUB service.\n");
+               return -EINVAL;
+       }
+
        if (!dmub_fw) {
                /* Firmware required for DMUB support. */
                DRM_ERROR("No firmware provided for DMUB.\n");
@@ -782,60 +787,36 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
 
        hdr = (const struct dmcub_firmware_header_v1_0 *)dmub_fw->data;
 
-       /* Calculate the size of all the regions for the DMUB service. */
-       memset(&region_params, 0, sizeof(region_params));
-
-       region_params.inst_const_size = le32_to_cpu(hdr->inst_const_bytes) -
-                                       psp_header_bytes - psp_footer_bytes;
-       region_params.bss_data_size = le32_to_cpu(hdr->bss_data_bytes);
-       region_params.vbios_size = adev->dm.dc->ctx->dc_bios->bios_size;
-
-       status = dmub_srv_calc_region_info(dmub_srv, &region_params,
-                                          &region_info);
-
-       if (status != DMUB_STATUS_OK) {
-               DRM_ERROR("Error calculating DMUB region info: %d\n", status);
-               return -EINVAL;
-       }
-
-       /*
-        * Allocate a framebuffer based on the total size of all the regions.
-        * TODO: Move this into GART.
-        */
-       r = amdgpu_bo_create_kernel(adev, region_info.fb_size, PAGE_SIZE,
-                                   AMDGPU_GEM_DOMAIN_VRAM, &adev->dm.dmub_bo,
-                                   &adev->dm.dmub_bo_gpu_addr,
-                                   &adev->dm.dmub_bo_cpu_addr);
-       if (r)
-               return r;
-
-       /* Rebase the regions on the framebuffer address. */
-       memset(&fb_params, 0, sizeof(fb_params));
-       fb_params.cpu_addr = adev->dm.dmub_bo_cpu_addr;
-       fb_params.gpu_addr = adev->dm.dmub_bo_gpu_addr;
-       fb_params.region_info = &region_info;
-
-       status = dmub_srv_calc_fb_info(dmub_srv, &fb_params, &fb_info);
-       if (status != DMUB_STATUS_OK) {
-               DRM_ERROR("Error calculating DMUB FB info: %d\n", status);
-               return -EINVAL;
-       }
-
        fw_inst_const = dmub_fw->data +
                        le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
-                       psp_header_bytes;
+                       PSP_HEADER_BYTES;
 
        fw_bss_data = dmub_fw->data +
                      le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
                      le32_to_cpu(hdr->inst_const_bytes);
 
        /* Copy firmware and bios info into FB memory. */
-       memcpy(fb_info.fb[DMUB_WINDOW_0_INST_CONST].cpu_addr, fw_inst_const,
-              region_params.inst_const_size);
-       memcpy(fb_info.fb[DMUB_WINDOW_2_BSS_DATA].cpu_addr, fw_bss_data,
-              region_params.bss_data_size);
-       memcpy(fb_info.fb[DMUB_WINDOW_3_VBIOS].cpu_addr,
-              adev->dm.dc->ctx->dc_bios->bios, region_params.vbios_size);
+       fw_inst_const_size = le32_to_cpu(hdr->inst_const_bytes) -
+                            PSP_HEADER_BYTES - PSP_FOOTER_BYTES;
+
+       fw_bss_data_size = le32_to_cpu(hdr->bss_data_bytes);
+
+       memcpy(fb_info->fb[DMUB_WINDOW_0_INST_CONST].cpu_addr, fw_inst_const,
+              fw_inst_const_size);
+       memcpy(fb_info->fb[DMUB_WINDOW_2_BSS_DATA].cpu_addr, fw_bss_data,
+              fw_bss_data_size);
+       memcpy(fb_info->fb[DMUB_WINDOW_3_VBIOS].cpu_addr, adev->bios,
+              adev->bios_size);
+
+       /* Reset regions that need to be reset. */
+       memset(fb_info->fb[DMUB_WINDOW_4_MAILBOX].cpu_addr, 0,
+              fb_info->fb[DMUB_WINDOW_4_MAILBOX].size);
+
+       memset(fb_info->fb[DMUB_WINDOW_5_TRACEBUFF].cpu_addr, 0,
+              fb_info->fb[DMUB_WINDOW_5_TRACEBUFF].size);
+
+       memset(fb_info->fb[DMUB_WINDOW_6_FW_STATE].cpu_addr, 0,
+              fb_info->fb[DMUB_WINDOW_6_FW_STATE].size);
 
        /* Initialize hardware. */
        memset(&hw_params, 0, sizeof(hw_params));
@@ -845,8 +826,8 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
        if (dmcu)
                hw_params.psp_version = dmcu->psp_version;
 
-       for (i = 0; i < fb_info.num_fb; ++i)
-               hw_params.fb[i] = &fb_info.fb[i];
+       for (i = 0; i < fb_info->num_fb; ++i)
+               hw_params.fb[i] = &fb_info->fb[i];
 
        status = dmub_srv_hw_init(dmub_srv, &hw_params);
        if (status != DMUB_STATUS_OK) {
@@ -1174,6 +1155,11 @@ static void amdgpu_dm_dmub_reg_write(void *ctx, uint32_t address,
 static int dm_dmub_sw_init(struct amdgpu_device *adev)
 {
        struct dmub_srv_create_params create_params;
+       struct dmub_srv_region_params region_params;
+       struct dmub_srv_region_info region_info;
+       struct dmub_srv_fb_params fb_params;
+       struct dmub_srv_fb_info *fb_info;
+       struct dmub_srv *dmub_srv;
        const struct dmcub_firmware_header_v1_0 *hdr;
        const char *fw_name_dmub;
        enum dmub_asic dmub_asic;
@@ -1191,24 +1177,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
                return 0;
        }
 
-       adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
-       if (!adev->dm.dmub_srv) {
-               DRM_ERROR("Failed to allocate DMUB service!\n");
-               return -ENOMEM;
-       }
-
-       memset(&create_params, 0, sizeof(create_params));
-       create_params.user_ctx = adev;
-       create_params.funcs.reg_read = amdgpu_dm_dmub_reg_read;
-       create_params.funcs.reg_write = amdgpu_dm_dmub_reg_write;
-       create_params.asic = dmub_asic;
-
-       status = dmub_srv_create(adev->dm.dmub_srv, &create_params);
-       if (status != DMUB_STATUS_OK) {
-               DRM_ERROR("Error creating DMUB service: %d\n", status);
-               return -EINVAL;
-       }
-
        r = request_firmware_direct(&adev->dm.dmub_fw, fw_name_dmub, adev->dev);
        if (r) {
                DRM_ERROR("DMUB firmware loading failed: %d\n", r);
@@ -1238,6 +1206,80 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
        DRM_INFO("Loading DMUB firmware via PSP: version=0x%08X\n",
                 adev->dm.dmcub_fw_version);
 
+       adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
+       dmub_srv = adev->dm.dmub_srv;
+
+       if (!dmub_srv) {
+               DRM_ERROR("Failed to allocate DMUB service!\n");
+               return -ENOMEM;
+       }
+
+       memset(&create_params, 0, sizeof(create_params));
+       create_params.user_ctx = adev;
+       create_params.funcs.reg_read = amdgpu_dm_dmub_reg_read;
+       create_params.funcs.reg_write = amdgpu_dm_dmub_reg_write;
+       create_params.asic = dmub_asic;
+
+       /* Create the DMUB service. */
+       status = dmub_srv_create(dmub_srv, &create_params);
+       if (status != DMUB_STATUS_OK) {
+               DRM_ERROR("Error creating DMUB service: %d\n", status);
+               return -EINVAL;
+       }
+
+       /* Calculate the size of all the regions for the DMUB service. */
+       memset(&region_params, 0, sizeof(region_params));
+
+       region_params.inst_const_size = le32_to_cpu(hdr->inst_const_bytes) -
+                                       PSP_HEADER_BYTES - PSP_FOOTER_BYTES;
+       region_params.bss_data_size = le32_to_cpu(hdr->bss_data_bytes);
+       region_params.vbios_size = adev->bios_size;
+       region_params.fw_bss_data =
+               adev->dm.dmub_fw->data +
+               le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
+               le32_to_cpu(hdr->inst_const_bytes);
+
+       status = dmub_srv_calc_region_info(dmub_srv, &region_params,
+                                          &region_info);
+
+       if (status != DMUB_STATUS_OK) {
+               DRM_ERROR("Error calculating DMUB region info: %d\n", status);
+               return -EINVAL;
+       }
+
+       /*
+        * Allocate a framebuffer based on the total size of all the regions.
+        * TODO: Move this into GART.
+        */
+       r = amdgpu_bo_create_kernel(adev, region_info.fb_size, PAGE_SIZE,
+                                   AMDGPU_GEM_DOMAIN_VRAM, &adev->dm.dmub_bo,
+                                   &adev->dm.dmub_bo_gpu_addr,
+                                   &adev->dm.dmub_bo_cpu_addr);
+       if (r)
+               return r;
+
+       /* Rebase the regions on the framebuffer address. */
+       memset(&fb_params, 0, sizeof(fb_params));
+       fb_params.cpu_addr = adev->dm.dmub_bo_cpu_addr;
+       fb_params.gpu_addr = adev->dm.dmub_bo_gpu_addr;
+       fb_params.region_info = &region_info;
+
+       adev->dm.dmub_fb_info =
+               kzalloc(sizeof(*adev->dm.dmub_fb_info), GFP_KERNEL);
+       fb_info = adev->dm.dmub_fb_info;
+
+       if (!fb_info) {
+               DRM_ERROR(
+                       "Failed to allocate framebuffer info for DMUB service!\n");
+               return -ENOMEM;
+       }
+
+       status = dmub_srv_calc_fb_info(dmub_srv, &fb_params, fb_info);
+       if (status != DMUB_STATUS_OK) {
+               DRM_ERROR("Error calculating DMUB FB info: %d\n", status);
+               return -EINVAL;
+       }
+
        return 0;
 }
 
@@ -1257,6 +1299,9 @@ static int dm_sw_fini(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+       kfree(adev->dm.dmub_fb_info);
+       adev->dm.dmub_fb_info = NULL;
+
        if (adev->dm.dmub_srv) {
                dmub_srv_destroy(adev->dm.dmub_srv);
                adev->dm.dmub_srv = NULL;
@@ -1559,7 +1604,7 @@ static int dm_resume(void *handle)
        struct dm_plane_state *dm_new_plane_state;
        struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state);
        enum dc_connection_type new_connection_type = dc_connection_none;
-       int i;
+       int i, r;
 
        /* Recreate dc_state - DC invalidates it when setting power state to S3. */
        dc_release_state(dm_state->context);
@@ -1567,6 +1612,11 @@ static int dm_resume(void *handle)
        /* TODO: Remove dc_state->dccg, use dc->dccg directly. */
        dc_resource_state_construct(dm->dc, dm_state->context);
 
+       /* Before powering on DC we need to re-initialize DMUB. */
+       r = dm_dmub_hw_init(adev);
+       if (r)
+               DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r);
+
        /* power on hardware */
        dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
 
@@ -3654,27 +3704,21 @@ get_output_color_space(const struct dc_crtc_timing *dc_crtc_timing)
        return color_space;
 }
 
-static void reduce_mode_colour_depth(struct dc_crtc_timing *timing_out)
-{
-       if (timing_out->display_color_depth <= COLOR_DEPTH_888)
-               return;
-
-       timing_out->display_color_depth--;
-}
-
-static void adjust_colour_depth_from_display_info(struct dc_crtc_timing *timing_out,
-                                               const struct drm_display_info *info)
+static bool adjust_colour_depth_from_display_info(
+       struct dc_crtc_timing *timing_out,
+       const struct drm_display_info *info)
 {
+       enum dc_color_depth depth = timing_out->display_color_depth;
        int normalized_clk;
-       if (timing_out->display_color_depth <= COLOR_DEPTH_888)
-               return;
        do {
                normalized_clk = timing_out->pix_clk_100hz / 10;
                /* YCbCr 4:2:0 requires additional adjustment of 1/2 */
                if (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420)
                        normalized_clk /= 2;
                /* Adjusting pix clock following on HDMI spec based on colour depth */
-               switch (timing_out->display_color_depth) {
+               switch (depth) {
+               case COLOR_DEPTH_888:
+                       break;
                case COLOR_DEPTH_101010:
                        normalized_clk = (normalized_clk * 30) / 24;
                        break;
@@ -3685,14 +3729,15 @@ static void adjust_colour_depth_from_display_info(struct dc_crtc_timing *timing_
                        normalized_clk = (normalized_clk * 48) / 24;
                        break;
                default:
-                       return;
+                       /* The above depths are the only ones valid for HDMI. */
+                       return false;
                }
-               if (normalized_clk <= info->max_tmds_clock)
-                       return;
-               reduce_mode_colour_depth(timing_out);
-
-       } while (timing_out->display_color_depth > COLOR_DEPTH_888);
-
+               if (normalized_clk <= info->max_tmds_clock) {
+                       timing_out->display_color_depth = depth;
+                       return true;
+               }
+       } while (--depth > COLOR_DEPTH_666);
+       return false;
 }
 
 static void fill_stream_properties_from_drm_display_mode(
@@ -3773,8 +3818,14 @@ static void fill_stream_properties_from_drm_display_mode(
 
        stream->out_transfer_func->type = TF_TYPE_PREDEFINED;
        stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB;
-       if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
-               adjust_colour_depth_from_display_info(timing_out, info);
+       if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
+               if (!adjust_colour_depth_from_display_info(timing_out, info) &&
+                   drm_mode_is_420_also(info, mode_in) &&
+                   timing_out->pixel_encoding != PIXEL_ENCODING_YCBCR420) {
+                       timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
+                       adjust_colour_depth_from_display_info(timing_out, info);
+               }
+       }
 }
 
 static void fill_audio_info(struct audio_info *audio_info,
@@ -4025,7 +4076,8 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
 
        if (aconnector->dc_link && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT) {
 #if defined(CONFIG_DRM_AMD_DC_DCN)
-               dc_dsc_parse_dsc_dpcd(aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.raw,
+               dc_dsc_parse_dsc_dpcd(aconnector->dc_link->ctx->dc,
+                                     aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.raw,
                                      aconnector->dc_link->dpcd_caps.dsc_caps.dsc_ext_caps.raw,
                                      &dsc_caps);
 #endif
@@ -4881,12 +4933,13 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
                                                                    is_y420);
                bpp = convert_dc_color_depth_into_bpc(color_depth) * 3;
                clock = adjusted_mode->clock;
-               dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp);
+               dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
        }
        dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_vcpi_slots(state,
                                                                           mst_mgr,
                                                                           mst_port,
-                                                                          dm_new_connector_state->pbn);
+                                                                          dm_new_connector_state->pbn,
+                                                                          0);
        if (dm_new_connector_state->vcpi_slots < 0) {
                DRM_DEBUG_ATOMIC("failed finding vcpi slots: %d\n", (int)dm_new_connector_state->vcpi_slots);
                return dm_new_connector_state->vcpi_slots;
@@ -4899,6 +4952,71 @@ const struct drm_encoder_helper_funcs amdgpu_dm_encoder_helper_funcs = {
        .atomic_check = dm_encoder_helper_atomic_check
 };
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
+                                           struct dc_state *dc_state)
+{
+       struct dc_stream_state *stream = NULL;
+       struct drm_connector *connector;
+       struct drm_connector_state *new_con_state, *old_con_state;
+       struct amdgpu_dm_connector *aconnector;
+       struct dm_connector_state *dm_conn_state;
+       int i, j, clock, bpp;
+       int vcpi, pbn_div, pbn = 0;
+
+       for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) {
+
+               aconnector = to_amdgpu_dm_connector(connector);
+
+               if (!aconnector->port)
+                       continue;
+
+               if (!new_con_state || !new_con_state->crtc)
+                       continue;
+
+               dm_conn_state = to_dm_connector_state(new_con_state);
+
+               for (j = 0; j < dc_state->stream_count; j++) {
+                       stream = dc_state->streams[j];
+                       if (!stream)
+                               continue;
+
+                       if ((struct amdgpu_dm_connector *)stream->dm_stream_context == aconnector)
+                               break;
+
+                       stream = NULL;
+               }
+
+               if (!stream)
+                       continue;
+
+               if (stream->timing.flags.DSC != 1) {
+                       drm_dp_mst_atomic_enable_dsc(state,
+                                                    aconnector->port,
+                                                    dm_conn_state->pbn,
+                                                    0,
+                                                    false);
+                       continue;
+               }
+
+               pbn_div = dm_mst_get_pbn_divider(stream->link);
+               bpp = stream->timing.dsc_cfg.bits_per_pixel;
+               clock = stream->timing.pix_clk_100hz / 10;
+               pbn = drm_dp_calc_pbn_mode(clock, bpp, true);
+               vcpi = drm_dp_mst_atomic_enable_dsc(state,
+                                                   aconnector->port,
+                                                   pbn, pbn_div,
+                                                   true);
+               if (vcpi < 0)
+                       return vcpi;
+
+               dm_conn_state->pbn = pbn;
+               dm_conn_state->vcpi_slots = vcpi;
+       }
+       return 0;
+}
+#endif
+
 static void dm_drm_plane_reset(struct drm_plane *plane)
 {
        struct dm_plane_state *amdgpu_state = NULL;
@@ -5561,9 +5679,9 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm,
 
        drm_connector_attach_max_bpc_property(&aconnector->base, 8, 16);
 
-       /* This defaults to the max in the range, but we want 8bpc. */
-       aconnector->base.state->max_bpc = 8;
-       aconnector->base.state->max_requested_bpc = 8;
+       /* This defaults to the max in the range, but we want 8bpc for non-edp. */
+       aconnector->base.state->max_bpc = (connector_type == DRM_MODE_CONNECTOR_eDP) ? 16 : 8;
+       aconnector->base.state->max_requested_bpc = aconnector->base.state->max_bpc;
 
        if (connector_type == DRM_MODE_CONNECTOR_eDP &&
            dc_is_dmcu_initialized(adev->dm.dc)) {
@@ -7777,6 +7895,29 @@ cleanup:
        return ret;
 }
 
+static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm_crtc *crtc)
+{
+       struct drm_connector *connector;
+       struct drm_connector_state *conn_state;
+       struct amdgpu_dm_connector *aconnector = NULL;
+       int i;
+       for_each_new_connector_in_state(state, connector, conn_state, i) {
+               if (conn_state->crtc != crtc)
+                       continue;
+
+               aconnector = to_amdgpu_dm_connector(connector);
+               if (!aconnector->port || !aconnector->mst_port)
+                       aconnector = NULL;
+               else
+                       break;
+       }
+
+       if (!aconnector)
+               return 0;
+
+       return drm_dp_mst_add_affected_dsc_crtcs(state, &aconnector->mst_port->mst_mgr);
+}
+
 /**
  * amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
  * @dev: The DRM device
@@ -7829,6 +7970,16 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
        if (ret)
                goto fail;
 
+       if (adev->asic_type >= CHIP_NAVI10) {
+               for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+                       if (drm_atomic_crtc_needs_modeset(new_crtc_state)) {
+                               ret = add_affected_mst_dsc_crtcs(state, crtc);
+                               if (ret)
+                                       goto fail;
+                       }
+               }
+       }
+
        for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
                if (!drm_atomic_crtc_needs_modeset(new_crtc_state) &&
                    !new_crtc_state->color_mgmt_changed &&
@@ -7932,11 +8083,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
        if (ret)
                goto fail;
 
-       /* Perform validation of MST topology in the state*/
-       ret = drm_dp_mst_atomic_check(state);
-       if (ret)
-               goto fail;
-
        if (state->legacy_cursor_update) {
                /*
                 * This is a fast cursor update coming from the plane update
@@ -8005,6 +8151,15 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
                if (ret)
                        goto fail;
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+               if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
+                       goto fail;
+
+               ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context);
+               if (ret)
+                       goto fail;
+#endif
+
                if (dc_validate_global_state(dc, dm_state->context, false) != DC_OK) {
                        ret = -EINVAL;
                        goto fail;
@@ -8033,6 +8188,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
                                dc_retain_state(old_dm_state->context);
                }
        }
+       /* Perform validation of MST topology in the state */
+       ret = drm_dp_mst_atomic_check(state);
+       if (ret)
+               goto fail;
 
        /* Store the overall update type for use later in atomic check. */
        for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
index a8fc90a..7ea9acb 100644
@@ -133,6 +133,13 @@ struct amdgpu_display_manager {
        struct dmub_srv *dmub_srv;
 
        /**
+        * @dmub_fb_info:
+        *
+        * Framebuffer regions for the DMUB.
+        */
+       struct dmub_srv_fb_info *dmub_fb_info;
+
+       /**
         * @dmub_fw:
         *
         * DMUB firmware, required on hardware that has DMUB support.
@@ -323,6 +330,7 @@ struct amdgpu_dm_connector {
        struct drm_dp_mst_port *port;
        struct amdgpu_dm_connector *mst_port;
        struct amdgpu_encoder *mst_encoder;
+       struct drm_dp_aux *dsc_aux;
 
        /* TODO see if we can merge with ddc_bus or make a dm_connector */
        struct amdgpu_i2c_adapter *i2c;
index 66f266a..069b7a6 100644
@@ -37,6 +37,7 @@
 #include "dc.h"
 #include "amdgpu_dm.h"
 #include "amdgpu_dm_irq.h"
+#include "amdgpu_dm_mst_types.h"
 
 #include "dm_helpers.h"
 
@@ -516,8 +517,24 @@ bool dm_helpers_dp_write_dsc_enable(
 )
 {
        uint8_t enable_dsc = enable ? 1 : 0;
+       struct amdgpu_dm_connector *aconnector;
+
+       if (!stream)
+               return false;
+
+       if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
+               aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+
+               if (!aconnector->dsc_aux)
+                       return false;
+
+               return (drm_dp_dpcd_write(aconnector->dsc_aux, DP_DSC_ENABLE, &enable_dsc, 1) >= 0);
+       }
+
+       if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT)
+               return dm_helpers_dp_write_dpcd(ctx, stream->link, DP_DSC_ENABLE, &enable_dsc, 1);
 
-       return dm_helpers_dp_write_dpcd(ctx, stream->sink->link, DP_DSC_ENABLE, &enable_dsc, 1);
+       return false;
 }
 
 bool dm_helpers_is_dp_sink_present(struct dc_link *link)
index 64445c4..cbcf504 100644
@@ -111,17 +111,12 @@ static void init_handler_common_data(struct amdgpu_dm_irq_handler_data *hcd,
  */
 static void dm_irq_work_func(struct work_struct *work)
 {
-       struct list_head *entry;
        struct irq_list_head *irq_list_head =
                container_of(work, struct irq_list_head, work);
        struct list_head *handler_list = &irq_list_head->head;
        struct amdgpu_dm_irq_handler_data *handler_data;
 
-       list_for_each(entry, handler_list) {
-               handler_data = list_entry(entry,
-                                         struct amdgpu_dm_irq_handler_data,
-                                         list);
-
+       list_for_each_entry(handler_data, handler_list, list) {
                DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
                                handler_data->irq_source);
 
@@ -528,19 +523,13 @@ static void amdgpu_dm_irq_immediate_work(struct amdgpu_device *adev,
                                         enum dc_irq_source irq_source)
 {
        struct amdgpu_dm_irq_handler_data *handler_data;
-       struct list_head *entry;
        unsigned long irq_table_flags;
 
        DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
 
-       list_for_each(
-               entry,
-               &adev->dm.irq_handler_list_high_tab[irq_source]) {
-
-               handler_data = list_entry(entry,
-                                         struct amdgpu_dm_irq_handler_data,
-                                         list);
-
+       list_for_each_entry(handler_data,
+                           &adev->dm.irq_handler_list_high_tab[irq_source],
+                           list) {
                /* Call a subcomponent which registered for immediate
                 * interrupt notification */
                handler_data->handler(handler_data->handler_arg);
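The two hunks above replace open-coded `list_for_each()` + `list_entry()` pairs with `list_for_each_entry()`, which resolves each node back to its containing struct inside the loop header. A minimal userspace sketch of that idiom (the `list_head`, `container_of`, and iteration macro are re-created here for illustration, not taken from kernel headers):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace re-creation of the kernel idiom adopted above: iterate a
 * circular doubly linked list directly as containing structs. */
struct list_head {
	struct list_head *next, *prev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry(pos, head, member)                             \
	for (pos = container_of((head)->next, __typeof__(*pos), member);  \
	     &pos->member != (head);                                       \
	     pos = container_of(pos->member.next, __typeof__(*pos), member))

/* Stand-in for amdgpu_dm_irq_handler_data: payload plus the list hook. */
struct handler_data {
	int irq_source;
	struct list_head list;
};

static void list_add_tail(struct list_head *n, struct list_head *head)
{
	n->next = head;
	n->prev = head->prev;
	head->prev->next = n;
	head->prev = n;
}

/* Build a three-node list and sum the payload fields via the macro. */
int demo_sum(void)
{
	struct list_head head = { &head, &head };
	struct handler_data a = { .irq_source = 1 };
	struct handler_data b = { .irq_source = 2 };
	struct handler_data c = { .irq_source = 3 };
	struct handler_data *hd;
	int sum = 0;

	list_add_tail(&a.list, &head);
	list_add_tail(&b.list, &head);
	list_add_tail(&c.list, &head);

	list_for_each_entry(hd, &head, list)
		sum += hd->irq_source;
	return sum;
}
```

`demo_sum()` returns 6 for the three nodes above; the point of the conversion is that the temporary `struct list_head *entry` and the explicit `list_entry()` call disappear from each loop body.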
index 81367c8..52fb207 100644
@@ -25,6 +25,7 @@
 
 #include <linux/version.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_dp_mst_helper.h>
 #include "dm_services.h"
 #include "amdgpu.h"
 #include "amdgpu_dm.h"
 #if defined(CONFIG_DEBUG_FS)
 #include "amdgpu_dm_debugfs.h"
 #endif
+
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+#include "dc/dcn20/dcn20_resource.h"
+#endif
+
 /* #define TRACE_DPCD */
 
 #ifdef TRACE_DPCD
@@ -180,6 +187,30 @@ static const struct drm_connector_funcs dm_dp_mst_connector_funcs = {
        .early_unregister = amdgpu_dm_mst_connector_early_unregister,
 };
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
+{
+       struct dc_sink *dc_sink = aconnector->dc_sink;
+       struct drm_dp_mst_port *port = aconnector->port;
+       u8 dsc_caps[16] = { 0 };
+
+       aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port);
+
+       if (!aconnector->dsc_aux)
+               return false;
+
+       if (drm_dp_dpcd_read(aconnector->dsc_aux, DP_DSC_SUPPORT, dsc_caps, 16) < 0)
+               return false;
+
+       if (!dc_dsc_parse_dsc_dpcd(aconnector->dc_link->ctx->dc,
+                                  dsc_caps, NULL,
+                                  &dc_sink->sink_dsc_caps.dsc_dec_caps))
+               return false;
+
+       return true;
+}
+#endif
+
 static int dm_dp_mst_get_modes(struct drm_connector *connector)
 {
        struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
@@ -222,10 +253,16 @@ static int dm_dp_mst_get_modes(struct drm_connector *connector)
                /* dc_link_add_remote_sink returns a new reference */
                aconnector->dc_sink = dc_sink;
 
-               if (aconnector->dc_sink)
+               if (aconnector->dc_sink) {
                        amdgpu_dm_update_freesync_caps(
                                        connector, aconnector->edid);
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+                       if (!validate_dsc_caps_on_connector(aconnector))
+                               memset(&aconnector->dc_sink->sink_dsc_caps,
+                                      0, sizeof(aconnector->dc_sink->sink_dsc_caps));
+#endif
+               }
        }
 
        drm_connector_update_edid_property(
@@ -466,3 +503,384 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
                aconnector->connector_id);
 }
 
+int dm_mst_get_pbn_divider(struct dc_link *link)
+{
+       if (!link)
+               return 0;
+
+       return dc_link_bandwidth_kbps(link,
+                       dc_link_get_link_cap(link)) / (8 * 1000 * 54);
+}
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+
+struct dsc_mst_fairness_params {
+       struct dc_crtc_timing *timing;
+       struct dc_sink *sink;
+       struct dc_dsc_bw_range bw_range;
+       bool compression_possible;
+       struct drm_dp_mst_port *port;
+};
+
+struct dsc_mst_fairness_vars {
+       int pbn;
+       bool dsc_enabled;
+       int bpp_x16;
+};
+
+static int kbps_to_peak_pbn(int kbps)
+{
+       u64 peak_kbps = kbps;
+
+       peak_kbps *= 1006;
+       peak_kbps = div_u64(peak_kbps, 1000);
+       return (int) DIV_ROUND_UP(peak_kbps * 64, (54 * 8 * 1000));
+}
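The conversion in `kbps_to_peak_pbn()` can be checked standalone: one PBN unit corresponds to 54/64 MBps of payload bandwidth, and the 1006/1000 scaling matches the roughly 0.6% margin the DP MST code applies for link clock downspread. A userspace mirror of the arithmetic, with the kernel helpers (`div_u64`, `DIV_ROUND_UP`) replaced by plain C:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of kbps_to_peak_pbn() above: scale the stream rate by
 * 1006/1000 (~0.6% downspread margin), then convert kbps to PBN,
 * where one PBN unit is 54/64 MBps of payload bandwidth. */
int kbps_to_peak_pbn(int kbps)
{
	uint64_t peak_kbps = (uint64_t)kbps * 1006 / 1000;

	/* DIV_ROUND_UP(peak_kbps * 64, 54 * 8 * 1000) */
	return (int)((peak_kbps * 64 + (54 * 8 * 1000) - 1) /
		     (54 * 8 * 1000));
}
```

For example, a 540,000 kbps stream needs ceil(543,240 * 64 / 432,000) = 81 PBN; the rounding up guarantees the VCPI allocation never undershoots the peak rate.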
+
+static void set_dsc_configs_from_fairness_vars(struct dsc_mst_fairness_params *params,
+               struct dsc_mst_fairness_vars *vars,
+               int count)
+{
+       int i;
+
+       for (i = 0; i < count; i++) {
+               memset(&params[i].timing->dsc_cfg, 0, sizeof(params[i].timing->dsc_cfg));
+               if (vars[i].dsc_enabled && dc_dsc_compute_config(
+                                       params[i].sink->ctx->dc->res_pool->dscs[0],
+                                       &params[i].sink->sink_dsc_caps.dsc_dec_caps,
+                                       params[i].sink->ctx->dc->debug.dsc_min_slice_height_override,
+                                       0,
+                                       params[i].timing,
+                                       &params[i].timing->dsc_cfg)) {
+                       params[i].timing->flags.DSC = 1;
+                       params[i].timing->dsc_cfg.bits_per_pixel = vars[i].bpp_x16;
+               } else {
+                       params[i].timing->flags.DSC = 0;
+               }
+       }
+}
+
+static int bpp_x16_from_pbn(struct dsc_mst_fairness_params param, int pbn)
+{
+       struct dc_dsc_config dsc_config;
+       u64 kbps;
+
+       kbps = div_u64((u64)pbn * 994 * 8 * 54, 64);
+       dc_dsc_compute_config(
+                       param.sink->ctx->dc->res_pool->dscs[0],
+                       &param.sink->sink_dsc_caps.dsc_dec_caps,
+                       param.sink->ctx->dc->debug.dsc_min_slice_height_override,
+                       (int) kbps, param.timing, &dsc_config);
+
+       return dsc_config.bits_per_pixel;
+}
+
+static void increase_dsc_bpp(struct drm_atomic_state *state,
+                            struct dc_link *dc_link,
+                            struct dsc_mst_fairness_params *params,
+                            struct dsc_mst_fairness_vars *vars,
+                            int count)
+{
+       int i;
+       bool bpp_increased[MAX_PIPES];
+       int initial_slack[MAX_PIPES];
+       int min_initial_slack;
+       int next_index;
+       int remaining_to_increase = 0;
+       int pbn_per_timeslot;
+       int link_timeslots_used;
+       int fair_pbn_alloc;
+
+       for (i = 0; i < count; i++) {
+               if (vars[i].dsc_enabled) {
+                       initial_slack[i] = kbps_to_peak_pbn(params[i].bw_range.max_kbps) - vars[i].pbn;
+                       bpp_increased[i] = false;
+                       remaining_to_increase += 1;
+               } else {
+                       initial_slack[i] = 0;
+                       bpp_increased[i] = true;
+               }
+       }
+
+       pbn_per_timeslot = dc_link_bandwidth_kbps(dc_link,
+                       dc_link_get_link_cap(dc_link)) / (8 * 1000 * 54);
+
+       while (remaining_to_increase) {
+               next_index = -1;
+               min_initial_slack = -1;
+               for (i = 0; i < count; i++) {
+                       if (!bpp_increased[i]) {
+                               if (min_initial_slack == -1 || min_initial_slack > initial_slack[i]) {
+                                       min_initial_slack = initial_slack[i];
+                                       next_index = i;
+                               }
+                       }
+               }
+
+               if (next_index == -1)
+                       break;
+
+               link_timeslots_used = 0;
+
+               for (i = 0; i < count; i++)
+                       link_timeslots_used += DIV_ROUND_UP(vars[i].pbn, pbn_per_timeslot);
+
+               fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot;
+
+               if (initial_slack[next_index] > fair_pbn_alloc) {
+                       vars[next_index].pbn += fair_pbn_alloc;
+                       if (drm_dp_atomic_find_vcpi_slots(state,
+                                                         params[next_index].port->mgr,
+                                                         params[next_index].port,
+                                                         vars[next_index].pbn,
+                                                         dm_mst_get_pbn_divider(dc_link)) < 0)
+                               return;
+                       if (!drm_dp_mst_atomic_check(state)) {
+                               vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn);
+                       } else {
+                               vars[next_index].pbn -= fair_pbn_alloc;
+                               if (drm_dp_atomic_find_vcpi_slots(state,
+                                                                 params[next_index].port->mgr,
+                                                                 params[next_index].port,
+                                                                 vars[next_index].pbn,
+                                                                 dm_mst_get_pbn_divider(dc_link)) < 0)
+                                       return;
+                       }
+               } else {
+                       vars[next_index].pbn += initial_slack[next_index];
+                       if (drm_dp_atomic_find_vcpi_slots(state,
+                                                         params[next_index].port->mgr,
+                                                         params[next_index].port,
+                                                         vars[next_index].pbn,
+                                                         dm_mst_get_pbn_divider(dc_link)) < 0)
+                               return;
+                       if (!drm_dp_mst_atomic_check(state)) {
+                               vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16;
+                       } else {
+                               vars[next_index].pbn -= initial_slack[next_index];
+                               if (drm_dp_atomic_find_vcpi_slots(state,
+                                                                 params[next_index].port->mgr,
+                                                                 params[next_index].port,
+                                                                 vars[next_index].pbn,
+                                                                 dm_mst_get_pbn_divider(dc_link)) < 0)
+                                       return;
+                       }
+               }
+
+               bpp_increased[next_index] = true;
+               remaining_to_increase--;
+       }
+}
+
+static void try_disable_dsc(struct drm_atomic_state *state,
+                           struct dc_link *dc_link,
+                           struct dsc_mst_fairness_params *params,
+                           struct dsc_mst_fairness_vars *vars,
+                           int count)
+{
+       int i;
+       bool tried[MAX_PIPES];
+       int kbps_increase[MAX_PIPES];
+       int max_kbps_increase;
+       int next_index;
+       int remaining_to_try = 0;
+
+       for (i = 0; i < count; i++) {
+               if (vars[i].dsc_enabled && vars[i].bpp_x16 == params[i].bw_range.max_target_bpp_x16) {
+                       kbps_increase[i] = params[i].bw_range.stream_kbps - params[i].bw_range.max_kbps;
+                       tried[i] = false;
+                       remaining_to_try += 1;
+               } else {
+                       kbps_increase[i] = 0;
+                       tried[i] = true;
+               }
+       }
+
+       while (remaining_to_try) {
+               next_index = -1;
+               max_kbps_increase = -1;
+               for (i = 0; i < count; i++) {
+                       if (!tried[i]) {
+                               if (max_kbps_increase == -1 || max_kbps_increase < kbps_increase[i]) {
+                                       max_kbps_increase = kbps_increase[i];
+                                       next_index = i;
+                               }
+                       }
+               }
+
+               if (next_index == -1)
+                       break;
+
+               vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps);
+               if (drm_dp_atomic_find_vcpi_slots(state,
+                                                 params[next_index].port->mgr,
+                                                 params[next_index].port,
+                                                 vars[next_index].pbn,
+                                                 0) < 0)
+                       return;
+
+               if (!drm_dp_mst_atomic_check(state)) {
+                       vars[next_index].dsc_enabled = false;
+                       vars[next_index].bpp_x16 = 0;
+               } else {
+                       vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps);
+                       if (drm_dp_atomic_find_vcpi_slots(state,
+                                                         params[next_index].port->mgr,
+                                                         params[next_index].port,
+                                                         vars[next_index].pbn,
+                                                         dm_mst_get_pbn_divider(dc_link)) < 0)
+                               return;
+               }
+
+               tried[next_index] = true;
+               remaining_to_try--;
+       }
+}
+
+static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+                                            struct dc_state *dc_state,
+                                            struct dc_link *dc_link)
+{
+       int i;
+       struct dc_stream_state *stream;
+       struct dsc_mst_fairness_params params[MAX_PIPES];
+       struct dsc_mst_fairness_vars vars[MAX_PIPES];
+       struct amdgpu_dm_connector *aconnector;
+       int count = 0;
+
+       memset(params, 0, sizeof(params));
+
+       /* Set up params */
+       for (i = 0; i < dc_state->stream_count; i++) {
+               struct dc_dsc_policy dsc_policy = {0};
+
+               stream = dc_state->streams[i];
+
+               if (stream->link != dc_link)
+                       continue;
+
+               stream->timing.flags.DSC = 0;
+
+               params[count].timing = &stream->timing;
+               params[count].sink = stream->sink;
+               aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+               params[count].port = aconnector->port;
+               params[count].compression_possible = stream->sink->sink_dsc_caps.dsc_dec_caps.is_dsc_supported;
+               dc_dsc_get_policy_for_timing(params[count].timing, &dsc_policy);
+               if (!dc_dsc_compute_bandwidth_range(
+                               stream->sink->ctx->dc->res_pool->dscs[0],
+                               stream->sink->ctx->dc->debug.dsc_min_slice_height_override,
+                               dsc_policy.min_target_bpp,
+                               dsc_policy.max_target_bpp,
+                               &stream->sink->sink_dsc_caps.dsc_dec_caps,
+                               &stream->timing, &params[count].bw_range))
+                       params[count].bw_range.stream_kbps = dc_bandwidth_in_kbps_from_timing(&stream->timing);
+
+               count++;
+       }
+       /* Try no compression */
+       for (i = 0; i < count; i++) {
+               vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
+               vars[i].dsc_enabled = false;
+               vars[i].bpp_x16 = 0;
+               if (drm_dp_atomic_find_vcpi_slots(state,
+                                                params[i].port->mgr,
+                                                params[i].port,
+                                                vars[i].pbn,
+                                                0) < 0)
+                       return false;
+       }
+       if (!drm_dp_mst_atomic_check(state)) {
+               set_dsc_configs_from_fairness_vars(params, vars, count);
+               return true;
+       }
+
+       /* Try max compression */
+       for (i = 0; i < count; i++) {
+               if (params[i].compression_possible) {
+                       vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
+                       vars[i].dsc_enabled = true;
+                       vars[i].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
+                       if (drm_dp_atomic_find_vcpi_slots(state,
+                                                         params[i].port->mgr,
+                                                         params[i].port,
+                                                         vars[i].pbn,
+                                                         dm_mst_get_pbn_divider(dc_link)) < 0)
+                               return false;
+               } else {
+                       vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
+                       vars[i].dsc_enabled = false;
+                       vars[i].bpp_x16 = 0;
+                       if (drm_dp_atomic_find_vcpi_slots(state,
+                                                         params[i].port->mgr,
+                                                         params[i].port,
+                                                         vars[i].pbn,
+                                                         0) < 0)
+                               return false;
+               }
+       }
+       if (drm_dp_mst_atomic_check(state))
+               return false;
+
+       /* Optimize degree of compression */
+       increase_dsc_bpp(state, dc_link, params, vars, count);
+
+       try_disable_dsc(state, dc_link, params, vars, count);
+
+       set_dsc_configs_from_fairness_vars(params, vars, count);
+
+       return true;
+}
+
+bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+                                      struct dc_state *dc_state)
+{
+       int i, j;
+       struct dc_stream_state *stream;
+       bool computed_streams[MAX_PIPES];
+       struct amdgpu_dm_connector *aconnector;
+
+       for (i = 0; i < dc_state->stream_count; i++)
+               computed_streams[i] = false;
+
+       for (i = 0; i < dc_state->stream_count; i++) {
+               stream = dc_state->streams[i];
+
+               if (stream->signal != SIGNAL_TYPE_DISPLAY_PORT_MST)
+                       continue;
+
+               aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+
+               if (!aconnector || !aconnector->dc_sink)
+                       continue;
+
+               if (!aconnector->dc_sink->sink_dsc_caps.dsc_dec_caps.is_dsc_supported)
+                       continue;
+
+               if (computed_streams[i])
+                       continue;
+
+               mutex_lock(&aconnector->mst_mgr.lock);
+               if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link)) {
+                       mutex_unlock(&aconnector->mst_mgr.lock);
+                       return false;
+               }
+               mutex_unlock(&aconnector->mst_mgr.lock);
+
+               for (j = 0; j < dc_state->stream_count; j++) {
+                       if (dc_state->streams[j]->link == stream->link)
+                               computed_streams[j] = true;
+               }
+       }
+
+       for (i = 0; i < dc_state->stream_count; i++) {
+               stream = dc_state->streams[i];
+
+               if (stream->timing.flags.DSC == 1)
+                       dcn20_add_dsc_to_stream_resource(stream->ctx->dc, dc_state, stream);
+       }
+
+       return true;
+}
+
+#endif
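The fair-share step inside `increase_dsc_bpp()` divides the link's remaining MTP timeslots (63 of the 64 are usable for payload) evenly among the streams still waiting for a bpp increase. A standalone sketch of just that arithmetic; the HBR2 x4 figure of 17,280,000 kbps (5.4 Gbps per lane, four lanes, after 8b/10b coding) is chosen here for illustration and gives the familiar 40 PBN per timeslot:

```c
#include <assert.h>

/* Same divider as dm_mst_get_pbn_divider() above: link bandwidth in
 * kbps divided by 8 * 1000 * 54 gives PBN per MTP timeslot. */
int pbn_per_timeslot(int link_bw_kbps)
{
	return link_bw_kbps / (8 * 1000 * 54);
}

/* Per-iteration fair share from increase_dsc_bpp(): the leftover
 * timeslots are split (integer division, as in the driver) among the
 * streams that have not had their bpp raised yet. */
int fair_pbn_alloc(int link_timeslots_used, int remaining_to_increase,
		   int link_bw_kbps)
{
	return (63 - link_timeslots_used) / remaining_to_increase *
	       pbn_per_timeslot(link_bw_kbps);
}
```

With 30 timeslots already in use and two streams left to grow on that HBR2 x4 link, each gets (63 - 30) / 2 = 16 timeslots, i.e. 640 PBN of extra allocation per iteration. This is a sketch of the allocation math only; the driver additionally re-runs `drm_dp_atomic_find_vcpi_slots()` and `drm_dp_mst_atomic_check()` after every tentative bump and rolls back on failure.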
index 2da851b..d6813ce 100644
 struct amdgpu_display_manager;
 struct amdgpu_dm_connector;
 
+int dm_mst_get_pbn_divider(struct dc_link *link);
+
 void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
                                       struct amdgpu_dm_connector *aconnector);
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+                                      struct dc_state *dc_state);
+#endif
+
 #endif
index d0714a3..4674aca 100644
@@ -1,5 +1,6 @@
 #
 # Copyright 2017 Advanced Micro Devices, Inc.
+# Copyright 2019 Raptor Engineering, LLC
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the "Software"),
 # It calculates Bandwidth and Watermarks values for HW programming
 #
 
+ifdef CONFIG_X86
 calcs_ccflags := -mhard-float -msse
+endif
+
+ifdef CONFIG_PPC64
+calcs_ccflags := -mhard-float -maltivec
+endif
 
 ifdef CONFIG_CC_IS_GCC
 ifeq ($(call cc-ifversion, -lt, 0701, y), y)
@@ -32,6 +39,7 @@ IS_OLD_GCC = 1
 endif
 endif
 
+ifdef CONFIG_X86
 ifdef IS_OLD_GCC
 # Stack alignment mismatch, proceed with caution.
 # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
@@ -40,6 +48,7 @@ calcs_ccflags += -mpreferred-stack-boundary=4
 else
 calcs_ccflags += -msse2
 endif
+endif
 
 CFLAGS_$(AMDDALPATH)/dc/calcs/dcn_calcs.o := $(calcs_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/calcs/dcn_calc_auto.o := $(calcs_ccflags)
index a1d4925..5d081c4 100644
@@ -154,14 +154,14 @@ static void calculate_bandwidth(
 
 
 
-       if (data->d0_underlay_mode == bw_def_none) { d0_underlay_enable = 0; }
-       else {
-               d0_underlay_enable = 1;
-       }
-       if (data->d1_underlay_mode == bw_def_none) { d1_underlay_enable = 0; }
-       else {
-               d1_underlay_enable = 1;
-       }
+       if (data->d0_underlay_mode == bw_def_none)
+               d0_underlay_enable = false;
+       else
+               d0_underlay_enable = true;
+       if (data->d1_underlay_mode == bw_def_none)
+               d1_underlay_enable = false;
+       else
+               d1_underlay_enable = true;
        data->number_of_underlay_surfaces = d0_underlay_enable + d1_underlay_enable;
        switch (data->underlay_surface_type) {
        case bw_def_420:
@@ -286,8 +286,8 @@ static void calculate_bandwidth(
        data->cursor_width_pixels[2] = bw_int_to_fixed(0);
        data->cursor_width_pixels[3] = bw_int_to_fixed(0);
        /* graphics surface parameters from spreadsheet*/
-       fbc_enabled = 0;
-       lpt_enabled = 0;
+       fbc_enabled = false;
+       lpt_enabled = false;
        for (i = 4; i <= maximum_number_of_surfaces - 3; i++) {
                if (i < data->number_of_displays + 4) {
                        if (i == 4 && data->d0_underlay_mode == bw_def_underlay_only) {
@@ -338,9 +338,9 @@ static void calculate_bandwidth(
                        data->access_one_channel_only[i] = 0;
                }
                if (data->fbc_en[i] == 1) {
-                       fbc_enabled = 1;
+                       fbc_enabled = true;
                        if (data->lpt_en[i] == 1) {
-                               lpt_enabled = 1;
+                               lpt_enabled = true;
                        }
                }
                data->cursor_width_pixels[i] = bw_int_to_fixed(vbios->cursor_width);
index a4ddd65..e6c2234 100644
@@ -1,5 +1,6 @@
 /*
  * Copyright 2017 Advanced Micro Devices, Inc.
+ * Copyright 2019 Raptor Engineering, LLC
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -622,7 +623,7 @@ static bool dcn_bw_apply_registry_override(struct dc *dc)
 {
        bool updated = false;
 
-       kernel_fpu_begin();
+       DC_FP_START();
        if ((int)(dc->dcn_soc->sr_exit_time * 1000) != dc->debug.sr_exit_time_ns
                        && dc->debug.sr_exit_time_ns) {
                updated = true;
@@ -658,7 +659,7 @@ static bool dcn_bw_apply_registry_override(struct dc *dc)
                dc->dcn_soc->dram_clock_change_latency =
                                dc->debug.dram_clock_change_latency_ns / 1000.0;
        }
-       kernel_fpu_end();
+       DC_FP_END();
 
        return updated;
 }
@@ -738,7 +739,7 @@ bool dcn_validate_bandwidth(
                dcn_bw_sync_calcs_and_dml(dc);
 
        memset(v, 0, sizeof(*v));
-       kernel_fpu_begin();
+       DC_FP_START();
 
        v->sr_exit_time = dc->dcn_soc->sr_exit_time;
        v->sr_enter_plus_exit_time = dc->dcn_soc->sr_enter_plus_exit_time;
@@ -1271,7 +1272,7 @@ bool dcn_validate_bandwidth(
        bw_limit = dc->dcn_soc->percent_disp_bw_limit * v->fabric_and_dram_bandwidth_vmax0p9;
        bw_limit_pass = (v->total_data_read_bandwidth / 1000.0) < bw_limit;
 
-       kernel_fpu_end();
+       DC_FP_END();
 
        PERFORMANCE_TRACE_END();
        BW_VAL_TRACE_FINISH();
@@ -1439,7 +1440,7 @@ void dcn_bw_update_from_pplib(struct dc *dc)
        res = dm_pp_get_clock_levels_by_type_with_voltage(
                        ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
 
-       kernel_fpu_begin();
+       DC_FP_START();
 
        if (res)
                res = verify_clock_values(&fclks);
@@ -1459,12 +1460,12 @@ void dcn_bw_update_from_pplib(struct dc *dc)
        } else
                BREAK_TO_DEBUGGER();
 
-       kernel_fpu_end();
+       DC_FP_END();
 
        res = dm_pp_get_clock_levels_by_type_with_voltage(
                        ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
 
-       kernel_fpu_begin();
+       DC_FP_START();
 
        if (res)
                res = verify_clock_values(&dcfclks);
@@ -1477,7 +1478,7 @@ void dcn_bw_update_from_pplib(struct dc *dc)
        } else
                BREAK_TO_DEBUGGER();
 
-       kernel_fpu_end();
+       DC_FP_END();
 }
 
 void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
@@ -1492,11 +1493,11 @@ void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
        if (!pp || !pp->set_wm_ranges)
                return;
 
-       kernel_fpu_begin();
+       DC_FP_START();
        min_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 * 1000000 / 32;
        min_dcfclk_khz = dc->dcn_soc->dcfclkv_min0p65 * 1000;
        socclk_khz = dc->dcn_soc->socclk * 1000;
-       kernel_fpu_end();
+       DC_FP_END();
 
        /* Now notify PPLib/SMU about which Watermarks sets they should select
         * depending on DPM state they are in. And update BW MGR GFX Engine and
@@ -1547,7 +1548,7 @@ void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
 
 void dcn_bw_sync_calcs_and_dml(struct dc *dc)
 {
-       kernel_fpu_begin();
+       DC_FP_START();
        DC_LOG_BANDWIDTH_CALCS("sr_exit_time: %f ns\n"
                        "sr_enter_plus_exit_time: %f ns\n"
                        "urgent_latency: %f ns\n"
@@ -1736,5 +1737,5 @@ void dcn_bw_sync_calcs_and_dml(struct dc *dc)
        dc->dml.ip.bug_forcing_LC_req_same_size_fixed =
                dc->dcn_ip->bug_forcing_luma_and_chroma_request_to_same_size_fixed == dcn_bw_yes;
        dc->dml.ip.dcfclk_cstate_latency = dc->dcn_ip->dcfclk_cstate_latency;
-       kernel_fpu_end();
+       DC_FP_END();
 }
index 25d7b7c..495f01e 100644
@@ -27,6 +27,7 @@
 #include "clk_mgr_internal.h"
 
 #include "dce100/dce_clk_mgr.h"
+#include "dcn20_clk_mgr.h"
 #include "reg_helper.h"
 #include "core_types.h"
 #include "dm_helpers.h"
@@ -100,13 +101,13 @@ uint32_t dentist_get_did_from_divider(int divider)
 }
 
 void dcn20_update_clocks_update_dpp_dto(struct clk_mgr_internal *clk_mgr,
-               struct dc_state *context)
+               struct dc_state *context, bool safe_to_lower)
 {
        int i;
 
        clk_mgr->dccg->ref_dppclk = clk_mgr->base.clks.dppclk_khz;
        for (i = 0; i < clk_mgr->base.ctx->dc->res_pool->pipe_count; i++) {
-               int dpp_inst, dppclk_khz;
+               int dpp_inst, dppclk_khz, prev_dppclk_khz;
 
                /* Loop index will match dpp->inst if resource exists,
                 * and we want to avoid dependency on dpp object
@@ -114,8 +115,12 @@ void dcn20_update_clocks_update_dpp_dto(struct clk_mgr_internal *clk_mgr,
                dpp_inst = i;
                dppclk_khz = context->res_ctx.pipe_ctx[i].plane_res.bw.dppclk_khz;
 
-               clk_mgr->dccg->funcs->update_dpp_dto(
-                               clk_mgr->dccg, dpp_inst, dppclk_khz);
+               prev_dppclk_khz = clk_mgr->base.ctx->dc->current_state->res_ctx.pipe_ctx[i].plane_res.bw.dppclk_khz;
+
+               if (safe_to_lower || prev_dppclk_khz < dppclk_khz) {
+                       clk_mgr->dccg->funcs->update_dpp_dto(
+                                                       clk_mgr->dccg, dpp_inst, dppclk_khz);
+               }
        }
 }
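The safe_to_lower plumbing added above follows a common clock-programming rule: a per-pipe DTO may always be raised, but may only be lowered once the caller confirms nothing still depends on the old rate. A reduced sketch of that gate, with the names and the int-array "hardware" purely illustrative (not the DC API):

```c
#include <assert.h>

/* Stand-in for the programmed per-pipe DTO registers. */
static int programmed_dto_khz[4];

/* Reprogram the DTO only when the clock rises, or when lowering
 * has been declared safe, mirroring the condition in the hunk above. */
static void update_dpp_dto_sketch(int inst, int prev_khz, int new_khz,
                                  int safe_to_lower)
{
        if (safe_to_lower || prev_khz < new_khz)
                programmed_dto_khz[inst] = new_khz;
        /* else: keep the higher rate until it is safe to drop it */
}
```

The effect is that a transient request for a lower dppclk cannot starve a pipe that is still scanning out at the higher rate.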
 
@@ -161,6 +166,9 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
                dc->debug.force_clock_mode & 0x1) {
                //this is from resume or boot up, if forced_clock cfg option used, we bypass program dispclk and DPPCLK, but need set them for S3.
                force_reset = true;
+
+               dcn2_read_clocks_from_hw_dentist(clk_mgr_base);
+
                //force_clock_mode 0x1:  force reset the clock even it is the same clock as long as it is in Passive level.
        }
        display_count = clk_mgr_helper_get_active_display_cnt(dc, context);
@@ -240,7 +248,7 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
        if (dc->config.forced_clocks == false || (force_reset && safe_to_lower)) {
                if (dpp_clock_lowered) {
                        // if clock is being lowered, increase DTO before lowering refclk
-                       dcn20_update_clocks_update_dpp_dto(clk_mgr, context);
+                       dcn20_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
                        dcn20_update_clocks_update_dentist(clk_mgr);
                } else {
                        // if clock is being raised, increase refclk before lowering DTO
@@ -248,7 +256,7 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
                                dcn20_update_clocks_update_dentist(clk_mgr);
                        // always update dtos unless clock is lowered and not safe to lower
                        if (new_clocks->dppclk_khz >= dc->current_state->bw_ctx.bw.dcn.clk.dppclk_khz)
-                               dcn20_update_clocks_update_dpp_dto(clk_mgr, context);
+                               dcn20_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
                }
        }
 
@@ -339,6 +347,32 @@ void dcn2_enable_pme_wa(struct clk_mgr *clk_mgr_base)
        }
 }
 
+
+void dcn2_read_clocks_from_hw_dentist(struct clk_mgr *clk_mgr_base)
+{
+       struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+       uint32_t dispclk_wdivider;
+       uint32_t dppclk_wdivider;
+       int disp_divider;
+       int dpp_divider;
+
+       REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, &dispclk_wdivider);
+       REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_WDIVIDER, &dppclk_wdivider);
+
+       disp_divider = dentist_get_divider_from_did(dispclk_wdivider);
+       dpp_divider = dentist_get_divider_from_did(dppclk_wdivider);
+
+       if (disp_divider && dpp_divider) {
+               /* Calculate the current DFS clock, in kHz.*/
+               clk_mgr_base->clks.dispclk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR
+                       * clk_mgr->base.dentist_vco_freq_khz) / disp_divider;
+
+               clk_mgr_base->clks.dppclk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR
+                               * clk_mgr->base.dentist_vco_freq_khz) / dpp_divider;
+       }
+
+}
+
 void dcn2_get_clock(struct clk_mgr *clk_mgr,
                struct dc_state *context,
                        enum dc_clock_type clock_type,
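The new dcn2_read_clocks_from_hw_dentist() derives each DFS clock from the VCO frequency and the divider decoded from the WDIVIDER field: clock_khz = SCALE_FACTOR * vco_khz / divider, skipping the update when a divider decodes to zero. A rough sketch of just that arithmetic; the scale-factor value below is assumed for illustration (the real constant lives in the DENTIST register definitions):

```c
#include <assert.h>

#define DENTIST_DIVIDER_RANGE_SCALE_FACTOR 4   /* assumed value */

/* Compute a DFS clock in kHz from the VCO frequency and a decoded
 * divider; the zero check mirrors the guard in the hunk above. */
static int dfs_clock_khz(int vco_khz, int divider)
{
        if (!divider)
                return 0;
        return DENTIST_DIVIDER_RANGE_SCALE_FACTOR * vco_khz / divider;
}
```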
index c9fd824..0b9c045 100644
@@ -34,7 +34,7 @@ void dcn2_update_clocks_fpga(struct clk_mgr *clk_mgr,
                        struct dc_state *context,
                        bool safe_to_lower);
 void dcn20_update_clocks_update_dpp_dto(struct clk_mgr_internal *clk_mgr,
-               struct dc_state *context);
+               struct dc_state *context, bool safe_to_lower);
 
 void dcn2_init_clocks(struct clk_mgr *clk_mgr);
 
@@ -51,4 +51,8 @@ void dcn2_get_clock(struct clk_mgr *clk_mgr,
                        struct dc_clock_config *clock_cfg);
 
 void dcn20_update_clocks_update_dentist(struct clk_mgr_internal *clk_mgr);
+
+void dcn2_read_clocks_from_hw_dentist(struct clk_mgr *clk_mgr_base);
+
+
 #endif //__DCN20_CLK_MGR_H__
index de51ef1..ffed720 100644
@@ -164,16 +164,16 @@ void rn_update_clocks(struct clk_mgr *clk_mgr_base,
        }
 
        if (dpp_clock_lowered) {
-               // if clock is being lowered, increase DTO before lowering refclk
-               dcn20_update_clocks_update_dpp_dto(clk_mgr, context);
+               // increase per DPP DTO before lowering global dppclk
+               dcn20_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
                rn_vbios_smu_set_dppclk(clk_mgr, clk_mgr_base->clks.dppclk_khz);
        } else {
-               // if clock is being raised, increase refclk before lowering DTO
+               // increase global DPPCLK before lowering per DPP DTO
                if (update_dppclk || update_dispclk)
                        rn_vbios_smu_set_dppclk(clk_mgr, clk_mgr_base->clks.dppclk_khz);
                // always update dtos unless clock is lowered and not safe to lower
                if (new_clocks->dppclk_khz >= dc->current_state->bw_ctx.bw.dcn.clk.dppclk_khz)
-                       dcn20_update_clocks_update_dpp_dto(clk_mgr, context);
+                       dcn20_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
        }
 
        if (update_dispclk &&
@@ -409,7 +409,7 @@ void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_smu_wm_ra
                        continue;
 
                ranges->reader_wm_sets[num_valid_sets].wm_inst = bw_params->wm_table.entries[i].wm_inst;
-               ranges->reader_wm_sets[num_valid_sets].wm_type = bw_params->wm_table.entries[i].wm_type;;
+               ranges->reader_wm_sets[num_valid_sets].wm_type = bw_params->wm_table.entries[i].wm_type;
                /* We will not select WM based on dcfclk, so leave it as unconstrained */
                ranges->reader_wm_sets[num_valid_sets].min_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
                ranges->reader_wm_sets[num_valid_sets].max_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
index 39fe38c..3d89904 100644
@@ -66,6 +66,9 @@
 
 #include "dce/dce_i2c.h"
 
+#define CTX \
+       dc->ctx
+
 #define DC_LOGGER \
        dc->ctx->logger
 
@@ -579,6 +582,40 @@ static void dc_destruct(struct dc *dc)
 
 }
 
+static bool dc_construct_ctx(struct dc *dc,
+               const struct dc_init_data *init_params)
+{
+       struct dc_context *dc_ctx;
+       enum dce_version dc_version = DCE_VERSION_UNKNOWN;
+
+       dc_ctx = kzalloc(sizeof(*dc_ctx), GFP_KERNEL);
+       if (!dc_ctx)
+               return false;
+
+       dc_ctx->cgs_device = init_params->cgs_device;
+       dc_ctx->driver_context = init_params->driver;
+       dc_ctx->dc = dc;
+       dc_ctx->asic_id = init_params->asic_id;
+       dc_ctx->dc_sink_id_count = 0;
+       dc_ctx->dc_stream_id_count = 0;
+       dc_ctx->dce_environment = init_params->dce_environment;
+
+       /* Create logger */
+
+       dc_version = resource_parse_asic_id(init_params->asic_id);
+       dc_ctx->dce_version = dc_version;
+
+       dc_ctx->perf_trace = dc_perf_trace_create();
+       if (!dc_ctx->perf_trace) {
+               ASSERT_CRITICAL(false);
+               return false;
+       }
+
+       dc->ctx = dc_ctx;
+
+       return true;
+}
+
 static bool dc_construct(struct dc *dc,
                const struct dc_init_data *init_params)
 {
@@ -590,7 +627,6 @@ static bool dc_construct(struct dc *dc,
        struct dcn_ip_params *dcn_ip;
 #endif
 
-       enum dce_version dc_version = DCE_VERSION_UNKNOWN;
        dc->config = init_params->flags;
 
        // Allocate memory for the vm_helper
@@ -636,26 +672,12 @@ static bool dc_construct(struct dc *dc,
        dc->soc_bounding_box = init_params->soc_bounding_box;
 #endif
 
-       dc_ctx = kzalloc(sizeof(*dc_ctx), GFP_KERNEL);
-       if (!dc_ctx) {
+       if (!dc_construct_ctx(dc, init_params)) {
                dm_error("%s: failed to create ctx\n", __func__);
                goto fail;
        }
 
-       dc_ctx->cgs_device = init_params->cgs_device;
-       dc_ctx->driver_context = init_params->driver;
-       dc_ctx->dc = dc;
-       dc_ctx->asic_id = init_params->asic_id;
-       dc_ctx->dc_sink_id_count = 0;
-       dc_ctx->dc_stream_id_count = 0;
-       dc->ctx = dc_ctx;
-
-       /* Create logger */
-
-       dc_ctx->dce_environment = init_params->dce_environment;
-
-       dc_version = resource_parse_asic_id(init_params->asic_id);
-       dc_ctx->dce_version = dc_version;
+       dc_ctx = dc->ctx;
 
        /* Resource should construct all asic specific resources.
         * This should be the only place where we need to parse the asic id
@@ -670,7 +692,7 @@ static bool dc_construct(struct dc *dc,
                bp_init_data.bios = init_params->asic_id.atombios_base_address;
 
                dc_ctx->dc_bios = dal_bios_parser_create(
-                               &bp_init_data, dc_version);
+                               &bp_init_data, dc_ctx->dce_version);
 
                if (!dc_ctx->dc_bios) {
                        ASSERT_CRITICAL(false);
@@ -678,17 +700,13 @@ static bool dc_construct(struct dc *dc,
                }
 
                dc_ctx->created_bios = true;
-               }
-
-       dc_ctx->perf_trace = dc_perf_trace_create();
-       if (!dc_ctx->perf_trace) {
-               ASSERT_CRITICAL(false);
-               goto fail;
        }
 
+
+
        /* Create GPIO service */
        dc_ctx->gpio_service = dal_gpio_service_create(
-                       dc_version,
+                       dc_ctx->dce_version,
                        dc_ctx->dce_environment,
                        dc_ctx);
 
@@ -697,7 +715,7 @@ static bool dc_construct(struct dc *dc,
                goto fail;
        }
 
-       dc->res_pool = dc_create_resource_pool(dc, init_params, dc_version);
+       dc->res_pool = dc_create_resource_pool(dc, init_params, dc_ctx->dce_version);
        if (!dc->res_pool)
                goto fail;
 
@@ -728,8 +746,6 @@ static bool dc_construct(struct dc *dc,
        return true;
 
 fail:
-
-       dc_destruct(dc);
        return false;
 }
 
@@ -783,6 +799,33 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
        dc_release_state(current_ctx);
 }
 
+static void wait_for_no_pipes_pending(struct dc *dc, struct dc_state *context)
+{
+       int i;
+       int count = 0;
+       struct pipe_ctx *pipe;
+       PERF_TRACE();
+       for (i = 0; i < MAX_PIPES; i++) {
+               pipe = &context->res_ctx.pipe_ctx[i];
+
+               if (!pipe->plane_state)
+                       continue;
+
+               /* Timeout 100 ms */
+               while (count < 100000) {
+                       /* Must set to false to start with, due to OR in update function */
+                       pipe->plane_state->status.is_flip_pending = false;
+                       dc->hwss.update_pending_status(pipe);
+                       if (!pipe->plane_state->status.is_flip_pending)
+                               break;
+                       udelay(1);
+                       count++;
+               }
+               ASSERT(!pipe->plane_state->status.is_flip_pending);
+       }
+       PERF_TRACE();
+}
+
 /*******************************************************************************
  * Public functions
  ******************************************************************************/
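wait_for_no_pipes_pending() above is an instance of a bounded-poll loop: clear the latched flag, re-query the hardware, and give up after a fixed number of 1 us spins (100000, roughly 100 ms). A generic sketch of the shape, with the predicate as a stand-in for update_pending_status() and the demo state existing only so the loop can be exercised:

```c
#include <assert.h>
#include <stdbool.h>

/* Poll a predicate until it clears or max_spins attempts elapse.
 * In kernel context each iteration would also udelay(1). */
static bool poll_until_clear(bool (*still_pending)(void *), void *arg,
                             int max_spins)
{
        int count;

        for (count = 0; count < max_spins; count++) {
                if (!still_pending(arg))
                        return true;    /* condition cleared in time */
        }
        return false;                   /* timed out */
}

/* Demo predicate: pending for `remaining` more polls. */
static int remaining;
static bool demo_pending(void *arg)
{
        (void)arg;
        return remaining-- > 0;
}
```

As in the hunk, the caller asserts on timeout rather than failing hard, since a stuck flip is a bug to surface, not a state to recover from silently.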
@@ -795,28 +838,38 @@ struct dc *dc_create(const struct dc_init_data *init_params)
        if (NULL == dc)
                goto alloc_fail;
 
-       if (false == dc_construct(dc, init_params))
-               goto construct_fail;
+       if (init_params->dce_environment == DCE_ENV_VIRTUAL_HW) {
+               if (false == dc_construct_ctx(dc, init_params)) {
+                       dc_destruct(dc);
+                       goto construct_fail;
+               }
+       } else {
+               if (false == dc_construct(dc, init_params)) {
+                       dc_destruct(dc);
+                       goto construct_fail;
+               }
+
+               full_pipe_count = dc->res_pool->pipe_count;
+               if (dc->res_pool->underlay_pipe_index != NO_UNDERLAY_PIPE)
+                       full_pipe_count--;
+               dc->caps.max_streams = min(
+                               full_pipe_count,
+                               dc->res_pool->stream_enc_count);
 
-       full_pipe_count = dc->res_pool->pipe_count;
-       if (dc->res_pool->underlay_pipe_index != NO_UNDERLAY_PIPE)
-               full_pipe_count--;
-       dc->caps.max_streams = min(
-                       full_pipe_count,
-                       dc->res_pool->stream_enc_count);
+               dc->optimize_seamless_boot_streams = 0;
+               dc->caps.max_links = dc->link_count;
+               dc->caps.max_audios = dc->res_pool->audio_count;
+               dc->caps.linear_pitch_alignment = 64;
 
-       dc->caps.max_links = dc->link_count;
-       dc->caps.max_audios = dc->res_pool->audio_count;
-       dc->caps.linear_pitch_alignment = 64;
+               dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
 
-       dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
+               if (dc->res_pool->dmcu != NULL)
+                       dc->versions.dmcu_version = dc->res_pool->dmcu->dmcu_version;
+       }
 
        /* Populate versioning information */
        dc->versions.dc_ver = DC_VER;
 
-       if (dc->res_pool->dmcu != NULL)
-               dc->versions.dmcu_version = dc->res_pool->dmcu->dmcu_version;
-
        dc->build_id = DC_BUILD_ID;
 
        DC_LOG_DC("Display Core initialized\n");
@@ -834,7 +887,8 @@ alloc_fail:
 
 void dc_hardware_init(struct dc *dc)
 {
-       dc->hwss.init_hw(dc);
+       if (dc->ctx->dce_environment != DCE_ENV_VIRTUAL_HW)
+               dc->hwss.init_hw(dc);
 }
 
 void dc_init_callbacks(struct dc *dc,
@@ -1148,10 +1202,10 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
 
        for (i = 0; i < context->stream_count; i++) {
                if (context->streams[i]->apply_seamless_boot_optimization)
-                       dc->optimize_seamless_boot = true;
+                       dc->optimize_seamless_boot_streams++;
        }
 
-       if (!dc->optimize_seamless_boot)
+       if (dc->optimize_seamless_boot_streams == 0)
                dc->hwss.prepare_bandwidth(dc, context);
 
        /* re-program planes for existing stream, in case we need to
@@ -1224,9 +1278,12 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
 
        dc_enable_stereo(dc, context, dc_streams, context->stream_count);
 
-       if (!dc->optimize_seamless_boot)
-                       /* pplib is notified if disp_num changed */
-                       dc->hwss.optimize_bandwidth(dc, context);
+       if (dc->optimize_seamless_boot_streams == 0) {
+               /* Must wait for no flips to be pending before doing optimize bw */
+               wait_for_no_pipes_pending(dc, context);
+               /* pplib is notified if disp_num changed */
+               dc->hwss.optimize_bandwidth(dc, context);
+       }
 
        for (i = 0; i < context->stream_count; i++)
                context->streams[i]->mode_changed = false;
@@ -1267,7 +1324,7 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
        int i;
        struct dc_state *context = dc->current_state;
 
-       if (!dc->optimized_required || dc->optimize_seamless_boot)
+       if (!dc->optimized_required || dc->optimize_seamless_boot_streams > 0)
                return true;
 
        post_surface_trace(dc);
@@ -1543,7 +1600,7 @@ static enum surface_update_type get_scaling_info_update_type(
 
                update_flags->bits.scaling_change = 1;
                if (u->scaling_info->src_rect.width > u->surface->src_rect.width
-                               && u->scaling_info->src_rect.height > u->surface->src_rect.height)
+                               || u->scaling_info->src_rect.height > u->surface->src_rect.height)
                        /* Making src rect bigger requires a bandwidth change */
                        update_flags->bits.clock_change = 1;
        }
@@ -1557,11 +1614,11 @@ static enum surface_update_type get_scaling_info_update_type(
                update_flags->bits.position_change = 1;
 
        if (update_flags->bits.clock_change
-                       || update_flags->bits.bandwidth_change)
+                       || update_flags->bits.bandwidth_change
+                       || update_flags->bits.scaling_change)
                return UPDATE_TYPE_FULL;
 
-       if (update_flags->bits.scaling_change
-                       || update_flags->bits.position_change)
+       if (update_flags->bits.position_change)
                return UPDATE_TYPE_MED;
 
        return UPDATE_TYPE_FAST;
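The hunk above promotes scaling changes from medium to full updates (and widens the src-rect growth test from && to ||), leaving only pure position changes as medium. The resulting classification order can be sketched as follows; the enum and flag names are simplified stand-ins for the DC update flags:

```c
#include <assert.h>

enum upd { FAST, MED, FULL };

struct flags { int clock, bandwidth, scaling, position; };

/* Most-expensive condition wins: clock, bandwidth or scaling changes
 * force a full update; a bare position change is medium; else fast. */
static enum upd classify(struct flags f)
{
        if (f.clock || f.bandwidth || f.scaling)
                return FULL;
        if (f.position)
                return MED;
        return FAST;
}
```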
@@ -2051,7 +2108,7 @@ static void commit_planes_do_stream_update(struct dc *dc,
 
                                        dc->hwss.optimize_bandwidth(dc, dc->current_state);
                                } else {
-                                       if (!dc->optimize_seamless_boot)
+                                       if (dc->optimize_seamless_boot_streams == 0)
                                                dc->hwss.prepare_bandwidth(dc, dc->current_state);
 
                                        core_link_enable_stream(dc->current_state, pipe_ctx);
@@ -2092,7 +2149,7 @@ static void commit_planes_for_stream(struct dc *dc,
        int i, j;
        struct pipe_ctx *top_pipe_to_program = NULL;
 
-       if (dc->optimize_seamless_boot && surface_count > 0) {
+       if (dc->optimize_seamless_boot_streams > 0 && surface_count > 0) {
                /* Optimize seamless boot flag keeps clocks and watermarks high until
                 * first flip. After first flip, optimization is required to lower
                 * bandwidth. Important to note that it is expected UEFI will
@@ -2101,12 +2158,14 @@ static void commit_planes_for_stream(struct dc *dc,
                 */
                if (stream->apply_seamless_boot_optimization) {
                        stream->apply_seamless_boot_optimization = false;
-                       dc->optimize_seamless_boot = false;
-                       dc->optimized_required = true;
+                       dc->optimize_seamless_boot_streams--;
+
+                       if (dc->optimize_seamless_boot_streams == 0)
+                               dc->optimized_required = true;
                }
        }
 
-       if (update_type == UPDATE_TYPE_FULL && !dc->optimize_seamless_boot) {
+       if (update_type == UPDATE_TYPE_FULL && dc->optimize_seamless_boot_streams == 0) {
                dc->hwss.prepare_bandwidth(dc, context);
                context_clock_trace(dc, context);
        }
index c2c136b..a49c10d 100644
@@ -590,7 +590,7 @@ bool dal_ddc_submit_aux_command(struct ddc_service *ddc,
                struct aux_payload *payload)
 {
        uint32_t retrieved = 0;
-       bool ret = 0;
+       bool ret = false;
 
        if (!ddc)
                return false;
index 42aa889..38b0f43 100644
@@ -2854,10 +2854,12 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
        /* For now we only handle 'Downstream port status' case.
         * If we got sink count changed it means
         * Downstream port status changed,
-        * then DM should call DC to do the detection. */
-       if (hpd_rx_irq_check_link_loss_status(
-               link,
-               &hpd_irq_dpcd_data)) {
+        * then DM should call DC to do the detection.
+        * NOTE: Do not handle link loss on eDP since it is internal link*/
+       if ((link->connector_signal != SIGNAL_TYPE_EDP) &&
+               hpd_rx_irq_check_link_loss_status(
+                       link,
+                       &hpd_irq_dpcd_data)) {
                /* Connectivity log: link loss */
                CONN_DATA_LINK_LOSS(link,
                                        hpd_irq_dpcd_data.raw,
index 548aac0..ddb8550 100644
@@ -173,15 +173,20 @@ bool edp_receiver_ready_T9(struct dc_link *link)
 }
 bool edp_receiver_ready_T7(struct dc_link *link)
 {
-       unsigned int tries = 0;
        unsigned char sinkstatus = 0;
        unsigned char edpRev = 0;
        enum dc_status result = DC_OK;
 
+       /* use absolute time stamp to constrain max T7*/
+       unsigned long long enter_timestamp = 0;
+       unsigned long long finish_timestamp = 0;
+       unsigned long long time_taken_in_ns = 0;
+
        result = core_link_read_dpcd(link, DP_EDP_DPCD_REV, &edpRev, sizeof(edpRev));
        if (result == DC_OK && edpRev < DP_EDP_12)
                return true;
        /* start from eDP version 1.2, SINK_STAUS indicate the sink is ready.*/
+       enter_timestamp = dm_get_timestamp(link->ctx);
        do {
                sinkstatus = 0;
                result = core_link_read_dpcd(link, DP_SINK_STATUS, &sinkstatus, sizeof(sinkstatus));
@@ -189,8 +194,10 @@ bool edp_receiver_ready_T7(struct dc_link *link)
                        break;
                if (result != DC_OK)
                        break;
-               udelay(25); //MAx T7 is 50ms
-       } while (++tries < 300);
+               udelay(25);
+               finish_timestamp = dm_get_timestamp(link->ctx);
+               time_taken_in_ns = dm_get_elapse_time_in_ns(link->ctx, finish_timestamp, enter_timestamp);
+       } while (time_taken_in_ns < 50 * 1000000); //MAx T7 is 50ms
 
        if (link->local_sink->edid_caps.panel_patch.extra_t7_ms > 0)
                udelay(link->local_sink->edid_caps.panel_patch.extra_t7_ms * 1000);
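The T7 change above swaps an iteration-count bound (300 tries of 25 us) for a wall-clock bound, so slow DPCD reads inside the loop cannot stretch the 50 ms budget. A sketch of that shape, with the nanosecond clock mocked so the loop can run outside the kernel (the mock advances ~25 us per query, purely for illustration):

```c
#include <assert.h>

/* Mock timestamp source standing in for dm_get_timestamp(). */
static unsigned long long fake_now_ns;
static unsigned long long now_ns(void)
{
        return fake_now_ns += 25000;    /* ~25 us per poll */
}

/* Poll the sink until ready, bounded by elapsed time, not tries. */
static int wait_sink_ready(int (*sink_ready)(void))
{
        unsigned long long start = now_ns();

        do {
                if (sink_ready())
                        return 1;
        } while (now_ns() - start < 50ULL * 1000000);   /* max T7 = 50 ms */
        return 0;
}

/* Demo sinks for exercising both exits. */
static int always_ready(void) { return 1; }
static int never_ready(void)  { return 0; }
```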
@@ -518,6 +525,9 @@ bool dp_set_dsc_pps_sdp(struct pipe_ctx *pipe_ctx, bool enable)
                struct dsc_config dsc_cfg;
                uint8_t dsc_packed_pps[128];
 
+               memset(&dsc_cfg, 0, sizeof(dsc_cfg));
+               memset(dsc_packed_pps, 0, 128);
+
                /* Enable DSC hw block */
                dsc_cfg.pic_width = stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right;
                dsc_cfg.pic_height = stream->timing.v_addressable + stream->timing.v_border_top + stream->timing.v_border_bottom;
index 0c19de6..64a0e08 100644
@@ -940,30 +940,43 @@ static void calculate_inits_and_adj_vp(struct pipe_ctx *pipe_ctx)
 
 }
 
-static void calculate_integer_scaling(struct pipe_ctx *pipe_ctx)
+/*
+ * When handling 270 rotation in mixed SLS mode, we have
+ * stream->timing.h_border_left that is non zero.  If we are doing
+ * pipe-splitting, this h_border_left value gets added to recout.x and when it
+ * calls calculate_inits_and_adj_vp() and
+ * adjust_vp_and_init_for_seamless_clip(), it can cause viewport.height for a
+ * pipe to be incorrect.
+ *
+ * To fix this, instead of using stream->timing.h_border_left, we can use
+ * stream->dst.x to represent the border instead.  So we will set h_border_left
+ * to 0 and shift the appropriate amount in stream->dst.x.  We will then
+ * perform all calculations in resource_build_scaling_params() based on this
+ * and then restore the h_border_left and stream->dst.x to their original
+ * values.
+ *
+ * shift_border_left_to_dst() will shift the amount of h_border_left to
+ * stream->dst.x and set h_border_left to 0.  restore_border_left_from_dst()
+ * will restore h_border_left and stream->dst.x back to their original values
+ * We also need to make sure pipe_ctx->plane_res.scl_data.h_active uses the
+ * original h_border_left value in its calculation.
+ */
+int shift_border_left_to_dst(struct pipe_ctx *pipe_ctx)
 {
-       unsigned int integer_multiple = 1;
-
-       if (pipe_ctx->plane_state->scaling_quality.integer_scaling) {
-               // calculate maximum # of replication of src onto addressable
-               integer_multiple = min(
-                               pipe_ctx->stream->timing.h_addressable / pipe_ctx->stream->src.width,
-                               pipe_ctx->stream->timing.v_addressable  / pipe_ctx->stream->src.height);
+       int store_h_border_left = pipe_ctx->stream->timing.h_border_left;
 
-               //scale dst
-               pipe_ctx->stream->dst.width  = integer_multiple * pipe_ctx->stream->src.width;
-               pipe_ctx->stream->dst.height = integer_multiple * pipe_ctx->stream->src.height;
-
-               //center dst onto addressable
-               pipe_ctx->stream->dst.x = (pipe_ctx->stream->timing.h_addressable - pipe_ctx->stream->dst.width)/2;
-               pipe_ctx->stream->dst.y = (pipe_ctx->stream->timing.v_addressable - pipe_ctx->stream->dst.height)/2;
-
-               //We are guaranteed that we are scaling in integer ratio
-               pipe_ctx->plane_state->scaling_quality.v_taps = 1;
-               pipe_ctx->plane_state->scaling_quality.h_taps = 1;
-               pipe_ctx->plane_state->scaling_quality.v_taps_c = 1;
-               pipe_ctx->plane_state->scaling_quality.h_taps_c = 1;
+       if (store_h_border_left) {
+               pipe_ctx->stream->timing.h_border_left = 0;
+               pipe_ctx->stream->dst.x += store_h_border_left;
        }
+       return store_h_border_left;
+}
+
+void restore_border_left_from_dst(struct pipe_ctx *pipe_ctx,
+                                  int store_h_border_left)
+{
+       pipe_ctx->stream->dst.x -= store_h_border_left;
+       pipe_ctx->stream->timing.h_border_left = store_h_border_left;
 }
 
 bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
@@ -971,6 +984,7 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
        const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
        struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
        bool res = false;
+       int store_h_border_left = shift_border_left_to_dst(pipe_ctx);
        DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
        /* Important: scaling ratio calculation requires pixel format,
         * lb depth calculation requires recout and taps require scaling ratios.
@@ -979,14 +993,18 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
        pipe_ctx->plane_res.scl_data.format = convert_pixel_format_to_dalsurface(
                        pipe_ctx->plane_state->format);
 
-       calculate_integer_scaling(pipe_ctx);
-
        calculate_scaling_ratios(pipe_ctx);
 
        calculate_viewport(pipe_ctx);
 
-       if (pipe_ctx->plane_res.scl_data.viewport.height < 16 || pipe_ctx->plane_res.scl_data.viewport.width < 16)
+       if (pipe_ctx->plane_res.scl_data.viewport.height < 16 ||
+               pipe_ctx->plane_res.scl_data.viewport.width < 16) {
+               if (store_h_border_left) {
+                       restore_border_left_from_dst(pipe_ctx,
+                               store_h_border_left);
+               }
                return false;
+       }
 
        calculate_recout(pipe_ctx);
 
@@ -999,8 +1017,10 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
        pipe_ctx->plane_res.scl_data.recout.x += timing->h_border_left;
        pipe_ctx->plane_res.scl_data.recout.y += timing->v_border_top;
 
-       pipe_ctx->plane_res.scl_data.h_active = timing->h_addressable + timing->h_border_left + timing->h_border_right;
-       pipe_ctx->plane_res.scl_data.v_active = timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
+       pipe_ctx->plane_res.scl_data.h_active = timing->h_addressable +
+               store_h_border_left + timing->h_border_right;
+       pipe_ctx->plane_res.scl_data.v_active = timing->v_addressable +
+               timing->v_border_top + timing->v_border_bottom;
 
        /* Taps calculations */
        if (pipe_ctx->plane_res.xfm != NULL)
@@ -1047,6 +1067,9 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
                                plane_state->dst_rect.x,
                                plane_state->dst_rect.y);
 
+       if (store_h_border_left)
+               restore_border_left_from_dst(pipe_ctx, store_h_border_left);
+
        return res;
 }
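The shift/restore pairing used throughout resource_build_scaling_params() above is a save-mutate-restore pattern: fold h_border_left into dst.x for the duration of the calculation, and undo it on every exit path, including the early viewport-too-small return. A reduced sketch with the struct cut down to the two fields that matter (the field names are simplified from the DC timing/stream structs):

```c
#include <assert.h>

struct timing_sketch { int h_border_left; int dst_x; };

/* Move the left border into dst_x and report what was saved. */
static int shift_border(struct timing_sketch *t)
{
        int saved = t->h_border_left;

        if (saved) {
                t->h_border_left = 0;
                t->dst_x += saved;
        }
        return saved;
}

/* Undo shift_border(); must be called on every exit path. */
static void restore_border(struct timing_sketch *t, int saved)
{
        t->dst_x -= saved;
        t->h_border_left = saved;
}
```

Because C has no scope-exit hooks, the caller is responsible for pairing the two calls, which is exactly why the hunk adds a restore before the early `return false`.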
 
@@ -1894,8 +1917,26 @@ static int acquire_resource_from_hw_enabled_state(
                pipe_ctx->plane_res.dpp = pool->dpps[tg_inst];
                pipe_ctx->stream_res.opp = pool->opps[tg_inst];
 
-               if (pool->dpps[tg_inst])
+               if (pool->dpps[tg_inst]) {
                        pipe_ctx->plane_res.mpcc_inst = pool->dpps[tg_inst]->inst;
+
+                       // Read DPP->MPCC->OPP Pipe from HW State
+                       if (pool->mpc->funcs->read_mpcc_state) {
+                               struct mpcc_state s = {0};
+
+                               pool->mpc->funcs->read_mpcc_state(pool->mpc, pipe_ctx->plane_res.mpcc_inst, &s);
+
+                               if (s.dpp_id < MAX_MPCC)
+                                       pool->mpc->mpcc_array[pipe_ctx->plane_res.mpcc_inst].dpp_id = s.dpp_id;
+
+                               if (s.bot_mpcc_id < MAX_MPCC)
+                                       pool->mpc->mpcc_array[pipe_ctx->plane_res.mpcc_inst].mpcc_bot =
+                                                       &pool->mpc->mpcc_array[s.bot_mpcc_id];
+
+                               if (s.opp_id < MAX_OPP)
+                                       pipe_ctx->stream_res.opp->mpc_tree_params.opp_id = s.opp_id;
+                       }
+               }
                pipe_ctx->pipe_idx = tg_inst;
 
                pipe_ctx->stream = stream;
@@ -2281,7 +2322,7 @@ static void set_avi_info_frame(
                if (color_space == COLOR_SPACE_SRGB ||
                        color_space == COLOR_SPACE_2020_RGB_FULLRANGE) {
                        hdmi_info.bits.Q0_Q1   = RGB_QUANTIZATION_FULL_RANGE;
-                       hdmi_info.bits.YQ0_YQ1 = YYC_QUANTIZATION_FULL_RANGE;
+                       hdmi_info.bits.YQ0_YQ1 = YYC_QUANTIZATION_LIMITED_RANGE;
                } else if (color_space == COLOR_SPACE_SRGB_LIMITED ||
                                        color_space == COLOR_SPACE_2020_RGB_LIMITEDRANGE) {
                        hdmi_info.bits.Q0_Q1   = RGB_QUANTIZATION_LIMITED_RANGE;
@@ -2811,3 +2852,51 @@ unsigned int resource_pixel_format_to_bpp(enum surface_pixel_format format)
                return -1;
        }
 }
+static unsigned int get_max_audio_sample_rate(struct audio_mode *modes)
+{
+       if (modes) {
+               if (modes->sample_rates.rate.RATE_192)
+                       return 192000;
+               if (modes->sample_rates.rate.RATE_176_4)
+                       return 176400;
+               if (modes->sample_rates.rate.RATE_96)
+                       return 96000;
+               if (modes->sample_rates.rate.RATE_88_2)
+                       return 88200;
+               if (modes->sample_rates.rate.RATE_48)
+                       return 48000;
+               if (modes->sample_rates.rate.RATE_44_1)
+                       return 44100;
+               if (modes->sample_rates.rate.RATE_32)
+                       return 32000;
+       }
+       /*original logic when no audio info*/
+       return 441000;
+}
+
+void get_audio_check(struct audio_info *aud_modes,
+       struct audio_check *audio_chk)
+{
+       unsigned int i;
+       unsigned int max_sample_rate = 0;
+
+       if (aud_modes) {
+               audio_chk->audio_packet_type = 0x2; /* audio sample packet, AP = 0.25 for layout0, 1 for layout1 */
+
+               audio_chk->max_audiosample_rate = 0;
+               for (i = 0; i < aud_modes->mode_count; i++) {
+                       max_sample_rate = get_max_audio_sample_rate(&aud_modes->modes[i]);
+                       if (audio_chk->max_audiosample_rate < max_sample_rate)
+                               audio_chk->max_audiosample_rate = max_sample_rate;
+                       /*dts takes the same as type 2: AP = 0.25*/
+               }
+               /*check which one take more bandwidth*/
+               if (audio_chk->max_audiosample_rate > 192000)
+                       audio_chk->audio_packet_type = 0x9;/*AP =1*/
+               audio_chk->acat = 0; /* not supported */
+       }
+}
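For reference, the priority order in get_max_audio_sample_rate() above can be modeled as a standalone C sketch. The bitfield names here are illustrative stand-ins, not the driver's actual `audio_mode`/`sample_rates` layout:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the driver's sample_rates bitfield. */
struct rate_flags {
	unsigned rate_192   : 1;
	unsigned rate_176_4 : 1;
	unsigned rate_96    : 1;
	unsigned rate_88_2  : 1;
	unsigned rate_48    : 1;
	unsigned rate_44_1  : 1;
	unsigned rate_32    : 1;
};

/* Highest supported rate wins; fall back to 44.1 kHz with no info. */
static unsigned int max_sample_rate(const struct rate_flags *r)
{
	if (r) {
		if (r->rate_192)   return 192000;
		if (r->rate_176_4) return 176400;
		if (r->rate_96)    return 96000;
		if (r->rate_88_2)  return 88200;
		if (r->rate_48)    return 48000;
		if (r->rate_44_1)  return 44100;
		if (r->rate_32)    return 32000;
	}
	return 44100;
}
```

get_audio_check() then takes the maximum of this value over all advertised modes and picks packet type 0x9 (AP = 1) only when that maximum exceeds 192 kHz.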
index b43a4b1..6ddbb00 100644 (file)
@@ -406,25 +406,30 @@ bool dc_stream_add_writeback(struct dc *dc,
                stream->writeback_info[stream->num_wb_info++] = *wb_info;
        }
 
-       if (!dc->hwss.update_bandwidth(dc, dc->current_state)) {
-               dm_error("DC: update_bandwidth failed!\n");
-               return false;
-       }
-
-       /* enable writeback */
        if (dc->hwss.enable_writeback) {
                struct dc_stream_status *stream_status = dc_stream_get_status(stream);
                struct dwbc *dwb = dc->res_pool->dwbc[wb_info->dwb_pipe_inst];
+               dwb->otg_inst = stream_status->primary_otg_inst;
+       }
+       if (IS_DIAG_DC(dc->ctx->dce_environment)) {
+               if (!dc->hwss.update_bandwidth(dc, dc->current_state)) {
+                       dm_error("DC: update_bandwidth failed!\n");
+                       return false;
+               }
 
-               if (dwb->funcs->is_enabled(dwb)) {
-                       /* writeback pipe already enabled, only need to update */
-                       dc->hwss.update_writeback(dc, stream_status, wb_info, dc->current_state);
-               } else {
-                       /* Enable writeback pipe from scratch*/
-                       dc->hwss.enable_writeback(dc, stream_status, wb_info, dc->current_state);
+               /* enable writeback */
+               if (dc->hwss.enable_writeback) {
+                       struct dwbc *dwb = dc->res_pool->dwbc[wb_info->dwb_pipe_inst];
+
+                       if (dwb->funcs->is_enabled(dwb)) {
+                               /* writeback pipe already enabled, only need to update */
+                               dc->hwss.update_writeback(dc, wb_info, dc->current_state);
+                       } else {
+                               /* Enable writeback pipe from scratch*/
+                               dc->hwss.enable_writeback(dc, wb_info, dc->current_state);
+                       }
                }
        }
-
        return true;
 }
 
@@ -463,19 +468,29 @@ bool dc_stream_remove_writeback(struct dc *dc,
        }
        stream->num_wb_info = j;
 
-       /* recalculate and apply DML parameters */
-       if (!dc->hwss.update_bandwidth(dc, dc->current_state)) {
-               dm_error("DC: update_bandwidth failed!\n");
-               return false;
-       }
-
-       /* disable writeback */
-       if (dc->hwss.disable_writeback)
-               dc->hwss.disable_writeback(dc, dwb_pipe_inst);
+       if (IS_DIAG_DC(dc->ctx->dce_environment)) {
+               /* recalculate and apply DML parameters */
+               if (!dc->hwss.update_bandwidth(dc, dc->current_state)) {
+                       dm_error("DC: update_bandwidth failed!\n");
+                       return false;
+               }
 
+               /* disable writeback */
+               if (dc->hwss.disable_writeback)
+                       dc->hwss.disable_writeback(dc, dwb_pipe_inst);
+       }
        return true;
 }
 
+bool dc_stream_warmup_writeback(struct dc *dc,
+               int num_dwb,
+               struct dc_writeback_info *wb_info)
+{
+       if (dc->hwss.mmhubbub_warmup)
+               return dc->hwss.mmhubbub_warmup(dc, num_dwb, wb_info);
+       else
+               return false;
+}
 uint32_t dc_stream_get_vblank_counter(const struct dc_stream_state *stream)
 {
        uint8_t i;
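The new dc_stream_warmup_writeback() follows a common DC pattern: forward to an optional hw-sequencer hook when the ASIC provides one, and fail gracefully otherwise. A minimal sketch of that pattern, with illustrative names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef bool (*mmhubbub_warmup_fn)(int num_dwb);

/* Forward to the hook if present; report false when unsupported. */
static bool warmup_writeback(mmhubbub_warmup_fn hook, int num_dwb)
{
	if (hook)
		return hook(num_dwb);
	return false;
}

/* Fake hook standing in for an ASIC that implements warmup. */
static bool fake_warmup(int num_dwb)
{
	return num_dwb > 0;
}
```

This keeps callers ASIC-agnostic: generations without MMHUBBUB warmup simply leave the function pointer NULL.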
index c246390..0390043 100644 (file)
@@ -39,7 +39,7 @@
 #include "inc/hw/dmcu.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.2.62"
+#define DC_VER "3.2.64"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
@@ -367,6 +367,7 @@ struct dc_debug_options {
        bool disable_hubp_power_gate;
        bool disable_dsc_power_gate;
        int dsc_min_slice_height_override;
+       int dsc_bpp_increment_div;
        bool native422_support;
        bool disable_pplib_wm_range;
        enum wm_report_mode pplib_wm_report_mode;
@@ -513,7 +514,7 @@ struct dc {
        bool optimized_required;
 
        /* Require to maintain clocks and bandwidth for UEFI enabled HW */
-       bool optimize_seamless_boot;
+       int optimize_seamless_boot_streams;
 
        /* FBC compressor */
        struct compressor *fbc_compressor;
index 8ec0981..3800340 100644 (file)
@@ -53,7 +53,8 @@ struct dc_dsc_policy {
        uint32_t min_target_bpp;
 };
 
-bool dc_dsc_parse_dsc_dpcd(const uint8_t *dpcd_dsc_basic_data,
+bool dc_dsc_parse_dsc_dpcd(const struct dc *dc,
+               const uint8_t *dpcd_dsc_basic_data,
                const uint8_t *dpcd_dsc_ext_data,
                struct dsc_dec_dpcd_caps *dsc_sink_caps);
 
@@ -77,4 +78,6 @@ bool dc_dsc_compute_config(
 void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing,
                struct dc_dsc_policy *policy);
 
+void dc_dsc_policy_set_max_target_bpp_limit(uint32_t limit);
+
 #endif
index 1ff79f7..f420aea 100644 (file)
@@ -133,6 +133,7 @@ struct dc_link {
        struct link_flags {
                bool dp_keep_receiver_powered;
                bool dp_skip_DID2;
+               bool dp_skip_reset_segment;
        } wa_flags;
        struct link_mst_stream_allocation_table mst_stream_alloc_table;
 
index 3ea5432..37c10db 100644 (file)
@@ -344,10 +344,17 @@ bool dc_add_all_planes_for_stream(
 bool dc_stream_add_writeback(struct dc *dc,
                struct dc_stream_state *stream,
                struct dc_writeback_info *wb_info);
+
 bool dc_stream_remove_writeback(struct dc *dc,
                struct dc_stream_state *stream,
                uint32_t dwb_pipe_inst);
+
+bool dc_stream_warmup_writeback(struct dc *dc,
+               int num_dwb,
+               struct dc_writeback_info *wb_info);
+
 bool dc_stream_dmdata_status_done(struct dc *dc, struct dc_stream_state *stream);
+
 bool dc_stream_set_dynamic_metadata(struct dc *dc,
                struct dc_stream_state *stream,
                struct dc_dmdata_attributes *dmdata_attr);
index 2b92bfa..b1a372c 100644 (file)
@@ -60,7 +60,12 @@ enum dce_environment {
        DCE_ENV_FPGA_MAXIMUS,
        /* Emulation on real HW or on FPGA. Used by Diagnostics, enforces
         * requirements of Diagnostics team. */
-       DCE_ENV_DIAG
+       DCE_ENV_DIAG,
+       /*
+        * Guest VM system, DC HW may exist but is not virtualized and
+        * should not be used.  SW support for VDI only.
+        */
+       DCE_ENV_VIRTUAL_HW
 };
 
 /* Note: use these macro definitions instead of direct comparison! */
@@ -598,7 +603,11 @@ struct audio_info {
        /* this field must be last in this struct */
        struct audio_mode modes[DC_MAX_AUDIO_DESC_COUNT];
 };
-
+struct audio_check {
+       unsigned int audio_packet_type;
+       unsigned int max_audiosample_rate;
+       unsigned int acat;
+};
 enum dc_infoframe_type {
        DC_HDMI_INFOFRAME_TYPE_VENDOR = 0x81,
        DC_HDMI_INFOFRAME_TYPE_AVI = 0x82,
index 4d1301e..31b6473 100644 (file)
@@ -810,8 +810,7 @@ static void hubp1_set_vm_context0_settings(struct hubp *hubp,
 void min_set_viewport(
        struct hubp *hubp,
        const struct rect *viewport,
-       const struct rect *viewport_c,
-       enum dc_rotation_angle rotation)
+       const struct rect *viewport_c)
 {
        struct dcn10_hubp *hubp1 = TO_DCN10_HUBP(hubp);
 
index e44eaae..780af5b 100644 (file)
@@ -749,9 +749,7 @@ void hubp1_set_blank(struct hubp *hubp, bool blank);
 
 void min_set_viewport(struct hubp *hubp,
                const struct rect *viewport,
-               const struct rect *viewport_c,
-               enum dc_rotation_angle rotation);
-/* rotation angle added for use by hubp21_set_viewport */
+               const struct rect *viewport_c);
 
 void hubp1_clk_cntl(struct hubp *hubp, bool enable);
 void hubp1_vtg_sel(struct hubp *hubp, uint32_t otg_inst);
index 3996fef..2baff3c 100644 (file)
@@ -479,10 +479,10 @@ void dcn10_enable_power_gating_plane(
        struct dce_hwseq *hws,
        bool enable)
 {
-       bool force_on = 1; /* disable power gating */
+       bool force_on = true; /* disable power gating */
 
        if (enable)
-               force_on = 0;
+               force_on = false;
 
        /* DCHUBP0/1/2/3 */
        REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN0_POWER_FORCEON, force_on);
@@ -860,6 +860,7 @@ static void dcn10_reset_back_end_for_pipe(
                struct dc_state *context)
 {
        int i;
+       struct dc_link *link;
        DC_LOGGER_INIT(dc->ctx->logger);
        if (pipe_ctx->stream_res.stream_enc == NULL) {
                pipe_ctx->stream = NULL;
@@ -867,8 +868,14 @@ static void dcn10_reset_back_end_for_pipe(
        }
 
        if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
-               /* DPMS may already disable */
-               if (!pipe_ctx->stream->dpms_off)
+               link = pipe_ctx->stream->link;
+               /* DPMS may already be disabled, or the dpms_off
+                * status may be incorrect due to the fastboot feature:
+                * when the system resumes from S4 with only the second
+                * screen active, dpms_off is true even though the
+                * VBIOS lit up eDP, so check the link status too.
+                */
+               if (!pipe_ctx->stream->dpms_off || link->link_status.link_active)
                        core_link_disable_stream(pipe_ctx);
                else if (pipe_ctx->stream_res.audio)
                        dc->hwss.disable_audio_stream(pipe_ctx);
@@ -1156,7 +1163,8 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
                }
        }
 
-       for (i = 0; i < dc->res_pool->pipe_count; i++) {
+       /* num_opp will be equal to number of mpcc */
+       for (i = 0; i < dc->res_pool->res_cap->num_opp; i++) {
                struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
 
                /* Cannot reset the MPC mux if seamless boot */
@@ -2291,8 +2299,7 @@ static void dcn10_update_dchubp_dpp(
                hubp->funcs->mem_program_viewport(
                        hubp,
                        &pipe_ctx->plane_res.scl_data.viewport,
-                       &pipe_ctx->plane_res.scl_data.viewport_c,
-                       plane_state->rotation);
+                       &pipe_ctx->plane_res.scl_data.viewport_c);
        }
 
        if (pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
@@ -2909,6 +2916,8 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
                .rotation = pipe_ctx->plane_state->rotation,
                .mirror = pipe_ctx->plane_state->horizontal_mirror
        };
+       bool pipe_split_on = (pipe_ctx->top_pipe != NULL) ||
+               (pipe_ctx->bottom_pipe != NULL);
 
        int x_plane = pipe_ctx->plane_state->dst_rect.x;
        int y_plane = pipe_ctx->plane_state->dst_rect.y;
@@ -2941,6 +2950,7 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
        // Swap axis and mirror horizontally
        if (param.rotation == ROTATION_ANGLE_90) {
                uint32_t temp_x = pos_cpy.x;
+
                pos_cpy.x = pipe_ctx->plane_res.scl_data.viewport.width -
                                (pos_cpy.y - pipe_ctx->plane_res.scl_data.viewport.x) + pipe_ctx->plane_res.scl_data.viewport.x;
                pos_cpy.y = temp_x;
@@ -2948,26 +2958,44 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
        // Swap axis and mirror vertically
        else if (param.rotation == ROTATION_ANGLE_270) {
                uint32_t temp_y = pos_cpy.y;
-               if (pos_cpy.x >  pipe_ctx->plane_res.scl_data.viewport.height) {
-                       pos_cpy.x = pos_cpy.x - pipe_ctx->plane_res.scl_data.viewport.height;
-                       pos_cpy.y = pipe_ctx->plane_res.scl_data.viewport.height - pos_cpy.x;
-               } else {
-                       pos_cpy.y = 2 * pipe_ctx->plane_res.scl_data.viewport.height - pos_cpy.x;
-               }
+               int viewport_height =
+                       pipe_ctx->plane_res.scl_data.viewport.height;
+
+               if (pipe_split_on) {
+                       if (pos_cpy.x > viewport_height) {
+                               pos_cpy.x = pos_cpy.x - viewport_height;
+                               pos_cpy.y = viewport_height - pos_cpy.x;
+                       } else {
+                               pos_cpy.y = 2 * viewport_height - pos_cpy.x;
+                       }
+               } else
+                       pos_cpy.y = viewport_height - pos_cpy.x;
                pos_cpy.x = temp_y;
        }
        // Mirror horizontally and vertically
        else if (param.rotation == ROTATION_ANGLE_180) {
-               if (pos_cpy.x >= pipe_ctx->plane_res.scl_data.viewport.width + pipe_ctx->plane_res.scl_data.viewport.x) {
-                       pos_cpy.x = 2 * pipe_ctx->plane_res.scl_data.viewport.width
-                                       - pos_cpy.x + 2 * pipe_ctx->plane_res.scl_data.viewport.x;
-               } else {
-                       uint32_t temp_x = pos_cpy.x;
-                       pos_cpy.x = 2 * pipe_ctx->plane_res.scl_data.viewport.x - pos_cpy.x;
-                       if (temp_x >= pipe_ctx->plane_res.scl_data.viewport.x + (int)hubp->curs_attr.width
-                                       || pos_cpy.x <= (int)hubp->curs_attr.width + pipe_ctx->plane_state->src_rect.x) {
-                               pos_cpy.x = temp_x + pipe_ctx->plane_res.scl_data.viewport.width;
+               int viewport_width =
+                       pipe_ctx->plane_res.scl_data.viewport.width;
+               int viewport_x =
+                       pipe_ctx->plane_res.scl_data.viewport.x;
+
+               if (pipe_split_on) {
+                       if (pos_cpy.x >= viewport_width + viewport_x) {
+                               pos_cpy.x = 2 * viewport_width
+                                               - pos_cpy.x + 2 * viewport_x;
+                       } else {
+                               uint32_t temp_x = pos_cpy.x;
+
+                               pos_cpy.x = 2 * viewport_x - pos_cpy.x;
+                               if (temp_x >= viewport_x +
+                                       (int)hubp->curs_attr.width || pos_cpy.x
+                                       <= (int)hubp->curs_attr.width +
+                                       pipe_ctx->plane_state->src_rect.x) {
+                                       pos_cpy.x = temp_x + viewport_width;
+                               }
                        }
+               } else {
+                       pos_cpy.x = viewport_width - pos_cpy.x + 2 * viewport_x;
                }
                pos_cpy.y = pipe_ctx->plane_res.scl_data.viewport.height - pos_cpy.y;
        }
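The non-pipe-split branch added above for ROTATION_ANGLE_270 swaps the cursor axes and mirrors the swapped coordinate against the viewport height. A self-contained sketch of that transform (the struct here is illustrative, not the driver's `dc_cursor_position`):

```c
#include <assert.h>

struct cursor_pos {
	int x;
	int y;
};

/* 270-degree remap without pipe split: swap axes, mirror vertically. */
static struct cursor_pos rotate_270(struct cursor_pos p, int viewport_height)
{
	struct cursor_pos out;

	out.y = viewport_height - p.x; /* swapped x, mirrored in the viewport */
	out.x = p.y;                   /* old y becomes the new x */
	return out;
}
```

The pipe-split branch keeps the pre-existing two-case math because each half-pipe only covers part of the rotated viewport.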
index 7493a63..eb13589 100644 (file)
@@ -124,6 +124,26 @@ struct dcn10_link_enc_registers {
        uint32_t RDPCSTX_PHY_CNTL13;
        uint32_t RDPCSTX_PHY_CNTL14;
        uint32_t RDPCSTX_PHY_CNTL15;
+       uint32_t RDPCSTX_CNTL;
+       uint32_t RDPCSTX_CLOCK_CNTL;
+       uint32_t RDPCSTX_PHY_CNTL0;
+       uint32_t RDPCSTX_PHY_CNTL2;
+       uint32_t RDPCSTX_PLL_UPDATE_DATA;
+       uint32_t RDPCS_TX_CR_ADDR;
+       uint32_t RDPCS_TX_CR_DATA;
+       uint32_t DPCSTX_TX_CLOCK_CNTL;
+       uint32_t DPCSTX_TX_CNTL;
+       uint32_t RDPCSTX_INTERRUPT_CONTROL;
+       uint32_t RDPCSTX_PHY_FUSE0;
+       uint32_t RDPCSTX_PHY_FUSE1;
+       uint32_t RDPCSTX_PHY_FUSE2;
+       uint32_t RDPCSTX_PHY_FUSE3;
+       uint32_t RDPCSTX_PHY_RX_LD_VAL;
+       uint32_t DPCSTX_DEBUG_CONFIG;
+       uint32_t RDPCSTX_DEBUG_CONFIG;
+       uint32_t RDPCSTX0_RDPCSTX_SCRATCH;
+       uint32_t RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG;
+       uint32_t DCIO_SOFT_RESET;
        /* indirect registers */
        uint32_t RAWLANE0_DIG_PCS_XF_RX_OVRD_IN_2;
        uint32_t RAWLANE0_DIG_PCS_XF_RX_OVRD_IN_3;
index fd52862..5fcaf78 100644 (file)
@@ -9,7 +9,13 @@ DCN20 = dcn20_resource.o dcn20_init.o dcn20_hwseq.o dcn20_dpp.o dcn20_dpp_cm.o d
 
 DCN20 += dcn20_dsc.o
 
+ifdef CONFIG_X86
 CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o := -mhard-float -msse
+endif
+
+ifdef CONFIG_PPC64
+CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o := -mhard-float -maltivec
+endif
 
 ifdef CONFIG_CC_IS_GCC
 ifeq ($(call cc-ifversion, -lt, 0701, y), y)
@@ -17,6 +23,7 @@ IS_OLD_GCC = 1
 endif
 endif
 
+ifdef CONFIG_X86
 ifdef IS_OLD_GCC
 # Stack alignment mismatch, proceed with caution.
 # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
@@ -25,6 +32,7 @@ CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o += -mpreferred-stack-boundary=4
 else
 CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o += -msse2
 endif
+endif
 
 AMD_DAL_DCN20 = $(addprefix $(AMDDALPATH)/dc/dcn20/,$(DCN20))
 
index 1e11513..50bffbf 100644 (file)
@@ -50,20 +50,20 @@ void dccg2_update_dpp_dto(struct dccg *dccg, int dpp_inst, int req_dppclk)
 
        if (dccg->ref_dppclk && req_dppclk) {
                int ref_dppclk = dccg->ref_dppclk;
+               int modulo, phase;
 
-               ASSERT(req_dppclk <= ref_dppclk);
-               /* need to clamp to 8 bits */
-               if (ref_dppclk > 0xff) {
-                       int divider = (ref_dppclk + 0xfe) / 0xff;
+               // phase / modulo = dpp pipe clk / dpp global clk
+               modulo = 0xff;   // fixed full-scale 8-bit modulo
+               phase = ((modulo * req_dppclk) + ref_dppclk - 1) / ref_dppclk;
 
-                       ref_dppclk /= divider;
-                       req_dppclk = (req_dppclk + divider - 1) / divider;
-                       if (req_dppclk > ref_dppclk)
-                               req_dppclk = ref_dppclk;
+               if (phase > 0xff) {
+                       ASSERT(false);
+                       phase = 0xff;
                }
+
                REG_SET_2(DPPCLK_DTO_PARAM[dpp_inst], 0,
-                               DPPCLK0_DTO_PHASE, req_dppclk,
-                               DPPCLK0_DTO_MODULO, ref_dppclk);
+                               DPPCLK0_DTO_PHASE, phase,
+                               DPPCLK0_DTO_MODULO, modulo);
                REG_UPDATE(DPPCLK_DTO_CTRL,
                                DPPCLK_DTO_ENABLE[dpp_inst], 1);
        } else {
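The reworked dccg2_update_dpp_dto() above keeps the DTO modulo at the full 8-bit value and derives the phase so that phase/modulo approximates req_dppclk/ref_dppclk, rounding up and clamping to the register width. The arithmetic in isolation:

```c
#include <assert.h>

/* phase / 0xff approximates req_dppclk / ref_dppclk, rounded up
 * and clamped to the 8-bit DPPCLK0_DTO_PHASE field. */
static int dto_phase(int req_dppclk, int ref_dppclk)
{
	const int modulo = 0xff;
	int phase = (modulo * req_dppclk + ref_dppclk - 1) / ref_dppclk;

	if (phase > 0xff)
		phase = 0xff;
	return phase;
}
```

Rounding up means the requested DPP clock is never undershot, which is why the old divider-based clamping path could be dropped.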
index 0111545..6bdfee2 100644 (file)
@@ -206,6 +206,9 @@ static bool dsc2_get_packed_pps(struct display_stream_compressor *dsc, const str
        struct dsc_reg_values dsc_reg_vals;
        struct dsc_optc_config dsc_optc_cfg;
 
+       memset(&dsc_reg_vals, 0, sizeof(dsc_reg_vals));
+       memset(&dsc_optc_cfg, 0, sizeof(dsc_optc_cfg));
+
        DC_LOG_DSC("Getting packed DSC PPS for DSC Config:");
        dsc_config_log(dsc, dsc_cfg);
        DC_LOG_DSC("DSC Picture Parameter Set (PPS):");
index 32878a6..5b9cbed 100644 (file)
@@ -183,10 +183,10 @@ void dcn20_enable_power_gating_plane(
        struct dce_hwseq *hws,
        bool enable)
 {
-       bool force_on = 1; /* disable power gating */
+       bool force_on = true; /* disable power gating */
 
        if (enable)
-               force_on = 0;
+               force_on = false;
 
        /* DCHUBP0/1/2/3/4/5 */
        REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN0_POWER_FORCEON, force_on);
@@ -1305,6 +1305,7 @@ static void dcn20_update_dchubp_dpp(
        struct hubp *hubp = pipe_ctx->plane_res.hubp;
        struct dpp *dpp = pipe_ctx->plane_res.dpp;
        struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+       bool viewport_changed = false;
 
        if (pipe_ctx->update_flags.bits.dppclk)
                dpp->funcs->dpp_dppclk_control(dpp, false, true);
@@ -1355,9 +1356,9 @@ static void dcn20_update_dchubp_dpp(
                        || plane_state->update_flags.bits.global_alpha_change
                        || plane_state->update_flags.bits.per_pixel_alpha_change) {
                // MPCC inst is equal to pipe index in practice
-               int mpcc_inst = pipe_ctx->pipe_idx;
+               int mpcc_inst = hubp->inst;
                int opp_inst;
-               int opp_count = dc->res_pool->res_cap->num_opp;
+               int opp_count = dc->res_pool->pipe_count;
 
                for (opp_inst = 0; opp_inst < opp_count; opp_inst++) {
                        if (dc->res_pool->opps[opp_inst]->mpcc_disconnect_pending[mpcc_inst]) {
@@ -1383,15 +1384,18 @@ static void dcn20_update_dchubp_dpp(
 
        if (pipe_ctx->update_flags.bits.viewport ||
                        (context == dc->current_state && plane_state->update_flags.bits.scaling_change) ||
-                       (context == dc->current_state && pipe_ctx->stream->update_flags.bits.scaling))
+                       (context == dc->current_state && pipe_ctx->stream->update_flags.bits.scaling)) {
+
                hubp->funcs->mem_program_viewport(
                        hubp,
                        &pipe_ctx->plane_res.scl_data.viewport,
-                       &pipe_ctx->plane_res.scl_data.viewport_c,
-                       plane_state->rotation);
+                       &pipe_ctx->plane_res.scl_data.viewport_c);
+               viewport_changed = true;
+       }
 
        /* Any updates are handled in dc interface, just need to apply existing for plane enable */
-       if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed)
+       if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed ||
+                       pipe_ctx->update_flags.bits.scaler || pipe_ctx->update_flags.bits.viewport)
                        && pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
                dc->hwss.set_cursor_position(pipe_ctx);
                dc->hwss.set_cursor_attribute(pipe_ctx);
@@ -1441,9 +1445,14 @@ static void dcn20_update_dchubp_dpp(
                hubp->power_gated = false;
        }
 
+       if (hubp->funcs->apply_PLAT_54186_wa && viewport_changed)
+               hubp->funcs->apply_PLAT_54186_wa(hubp, &plane_state->address);
+
        if (pipe_ctx->update_flags.bits.enable || plane_state->update_flags.bits.addr_update)
                hws->funcs.update_plane_addr(dc, pipe_ctx);
 
        if (pipe_ctx->update_flags.bits.enable)
                hubp->funcs->set_blank(hubp, false);
 }
@@ -1731,7 +1740,6 @@ bool dcn20_update_bandwidth(
 
 void dcn20_enable_writeback(
                struct dc *dc,
-               const struct dc_stream_status *stream_status,
                struct dc_writeback_info *wb_info,
                struct dc_state *context)
 {
@@ -1745,8 +1753,7 @@ void dcn20_enable_writeback(
        mcif_wb = dc->res_pool->mcif_wb[wb_info->dwb_pipe_inst];
 
        /* set the OPTC source mux */
-       ASSERT(stream_status->primary_otg_inst < MAX_PIPES);
-       optc = dc->res_pool->timing_generators[stream_status->primary_otg_inst];
+       optc = dc->res_pool->timing_generators[dwb->otg_inst];
        optc->funcs->set_dwb_source(optc, wb_info->dwb_pipe_inst);
        /* set MCIF_WB buffer and arbitration configuration */
        mcif_wb->funcs->config_mcif_buf(mcif_wb, &wb_info->mcif_buf_params, wb_info->dwb_params.dest_height);
@@ -1995,6 +2002,7 @@ static void dcn20_reset_back_end_for_pipe(
                struct dc_state *context)
 {
        int i;
+       struct dc_link *link;
        DC_LOGGER_INIT(dc->ctx->logger);
        if (pipe_ctx->stream_res.stream_enc == NULL) {
                pipe_ctx->stream = NULL;
@@ -2002,8 +2010,14 @@ static void dcn20_reset_back_end_for_pipe(
        }
 
        if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
-               /* DPMS may already disable */
-               if (!pipe_ctx->stream->dpms_off)
+               link = pipe_ctx->stream->link;
+               /* DPMS may already be disabled, or the dpms_off
+                * status may be incorrect due to the fastboot feature:
+                * when the system resumes from S4 with only the second
+                * screen active, dpms_off is true even though the
+                * VBIOS lit up eDP, so check the link status too.
+                */
+               if (!pipe_ctx->stream->dpms_off || link->link_status.link_active)
                        core_link_disable_stream(pipe_ctx);
                else if (pipe_ctx->stream_res.audio)
                        dc->hwss.disable_audio_stream(pipe_ctx);
index eecd7a2..02c9be5 100644 (file)
@@ -104,7 +104,6 @@ void dcn20_program_triple_buffer(
        bool enable_triple_buffer);
 void dcn20_enable_writeback(
                struct dc *dc,
-               const struct dc_stream_status *stream_status,
                struct dc_writeback_info *wb_info,
                struct dc_state *context);
 void dcn20_disable_writeback(
index 62dfd34..8cab810 100644 (file)
        SRI(AUX_DPHY_TX_CONTROL, DP_AUX, id)
 
 #define UNIPHY_MASK_SH_LIST(mask_sh)\
-       LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_LINK_ENABLE, mask_sh)
+       LE_SF(SYMCLKA_CLOCK_ENABLE, SYMCLKA_CLOCK_ENABLE, mask_sh),\
+       LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_LINK_ENABLE, mask_sh),\
+       LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_CHANNEL0_XBAR_SOURCE, mask_sh),\
+       LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_CHANNEL1_XBAR_SOURCE, mask_sh),\
+       LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_CHANNEL2_XBAR_SOURCE, mask_sh),\
+       LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_CHANNEL3_XBAR_SOURCE, mask_sh)
+
+#define DPCS_MASK_SH_LIST(mask_sh)\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX0_CLK_RDY, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX0_DATA_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX1_CLK_RDY, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX1_DATA_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX2_CLK_RDY, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX2_DATA_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX3_CLK_RDY, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX3_DATA_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL4, RDPCS_PHY_DP_TX0_TERM_CTRL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL4, RDPCS_PHY_DP_TX1_TERM_CTRL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL4, RDPCS_PHY_DP_TX2_TERM_CTRL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL4, RDPCS_PHY_DP_TX3_TERM_CTRL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL11, RDPCS_PHY_DP_MPLLB_MULTIPLIER, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX0_WIDTH, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX0_RATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX1_WIDTH, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX1_RATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX2_PSTATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX3_PSTATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX2_MPLL_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX3_MPLL_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL7, RDPCS_PHY_DP_MPLLB_FRACN_QUOT, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL7, RDPCS_PHY_DP_MPLLB_FRACN_DEN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL8, RDPCS_PHY_DP_MPLLB_SSC_PEAK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL9, RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL9, RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL10, RDPCS_PHY_DP_MPLLB_FRACN_REM, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL11, RDPCS_PHY_DP_REF_CLK_MPLLB_DIV, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL11, RDPCS_PHY_HDMI_MPLLB_HDMI_DIV, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL12, RDPCS_PHY_DP_MPLLB_SSC_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL12, RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL12, RDPCS_PHY_DP_MPLLB_TX_CLK_DIV, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL12, RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL12, RDPCS_PHY_DP_MPLLB_STATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL13, RDPCS_PHY_DP_MPLLB_DIV_CLK_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL13, RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL14, RDPCS_PHY_DP_MPLLB_FRACN_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL14, RDPCS_PHY_DP_MPLLB_PMIX_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CNTL, RDPCS_TX_FIFO_LANE0_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CNTL, RDPCS_TX_FIFO_LANE1_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CNTL, RDPCS_TX_FIFO_LANE2_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CNTL, RDPCS_TX_FIFO_LANE3_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CNTL, RDPCS_TX_FIFO_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CNTL, RDPCS_TX_FIFO_RD_START_DELAY, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_EXT_REFCLK_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_SRAMCLK_BYPASS, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_SRAMCLK_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_SRAMCLK_CLOCK_ON, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_SYMCLK_DIV2_CLOCK_ON, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_SYMCLK_DIV2_GATE_DIS, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_CLOCK_CNTL, RDPCS_SYMCLK_DIV2_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX0_DISABLE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX1_DISABLE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX2_DISABLE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX3_DISABLE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX0_REQ, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX1_REQ, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX2_REQ, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX3_REQ, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX0_ACK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX1_ACK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX2_ACK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX3_ACK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX0_RESET, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX1_RESET, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX2_RESET, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL3, RDPCS_PHY_DP_TX3_RESET, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_PHY_RESET, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_PHY_CR_MUX_SEL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_PHY_REF_RANGE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_SRAM_BYPASS, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_SRAM_EXT_LD_DONE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_PHY_HDMIMODE_ENABLE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_SRAM_INIT_DONE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL2, RDPCS_PHY_DP4_POR, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PLL_UPDATE_DATA, RDPCS_PLL_UPDATE_DATA, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL, RDPCS_REG_FIFO_ERROR_MASK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL, RDPCS_TX_FIFO_ERROR_MASK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL, RDPCS_DPALT_DISABLE_TOGGLE_MASK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL, RDPCS_DPALT_4LANE_TOGGLE_MASK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCS_TX_CR_ADDR, RDPCS_TX_CR_ADDR, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCS_TX_CR_DATA, RDPCS_TX_CR_DATA, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_MPLLB_V2I, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_TX0_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_TX0_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_TX0_EQ_POST, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_MPLLB_FREQ_VCO, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_MPLLB_CP_INT, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_MPLLB_CP_PROP, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_TX1_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_TX1_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_TX1_EQ_POST, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_TX2_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_TX2_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_TX2_EQ_POST, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DP_TX3_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DCO_FINETUNE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DCO_RANGE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DP_TX3_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DP_TX3_EQ_POST, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CLOCK_CNTL, DPCS_SYMCLK_CLOCK_ON, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CLOCK_CNTL, DPCS_SYMCLK_GATE_DIS, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CLOCK_CNTL, DPCS_SYMCLK_EN, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CNTL, DPCS_TX_DATA_SWAP, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CNTL, DPCS_TX_DATA_ORDER_INVERT, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CNTL, DPCS_TX_FIFO_EN, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_TX_CNTL, DPCS_TX_FIFO_RD_START_DELAY, mask_sh),\
+       LE_SF(DPCSTX0_DPCSTX_DEBUG_CONFIG, DPCS_DBG_CBUS_DIS, mask_sh)
+
+#define DPCS_DCN2_MASK_SH_LIST(mask_sh)\
+       DPCS_MASK_SH_LIST(mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL, RDPCS_PHY_RX_REF_LD_VAL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL, RDPCS_PHY_RX_VCO_LD_VAL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE_ACK, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX0_PSTATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX1_PSTATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX0_MPLL_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX1_MPLL_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_REF_CLK_EN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX2_WIDTH, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX2_RATE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX3_WIDTH, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL5, RDPCS_PHY_DP_TX3_RATE, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYA_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYB_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYC_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYD_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYE_SOFT_RESET, mask_sh)
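The LE_SF lists above rely on DC's mask/shift macro pattern: the same field list is expanded twice, once with `__SHIFT` and once with `_MASK` as the suffix, and token-pasting resolves each entry against the auto-generated `dpcs_2_0_0_sh_mask.h` defines. A minimal sketch of the mechanism, using hypothetical register values (only the `RDPCS_PHY_RESET` field and its made-up shift/mask are stand-ins):

```c
#include <assert.h>
#include <stdint.h>

/* stand-ins for the generated sh_mask header defines (values hypothetical) */
#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT 0x0
#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK   0x00000001U

struct demo_le_shift { uint8_t RDPCS_PHY_RESET; };
struct demo_le_mask  { uint32_t RDPCS_PHY_RESET; };

/* pastes reg, field, and the suffix into one generated-header identifier,
 * producing a designated initializer for the field */
#define DEMO_LE_SF(reg_name, field_name, post_fix) \
	.field_name = reg_name ## __ ## field_name ## post_fix

/* one field list, two expansions: shifts and masks */
static const struct demo_le_shift demo_shift = {
	DEMO_LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_PHY_RESET, __SHIFT)
};
static const struct demo_le_mask demo_mask = {
	DEMO_LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL0, RDPCS_PHY_RESET, _MASK)
};
```

This is why the diff later initializes `le_shift` with `__SHIFT` and `le_mask` with `_MASK` from the identical `DPCS_DCN2_MASK_SH_LIST` list.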
 
 #define LINK_ENCODER_MASK_SH_LIST_DCN20(mask_sh)\
        LINK_ENCODER_MASK_SH_LIST_DCN10(mask_sh),\
        SRI(CLOCK_ENABLE, SYMCLK, id), \
        SRI(CHANNEL_XBAR_CNTL, UNIPHY, id)
 
+#define DPCS_DCN2_CMN_REG_LIST(id) \
+       SRI(DIG_LANE_ENABLE, DIG, id), \
+       SRI(TMDS_CTL_BITS, DIG, id), \
+       SRI(RDPCSTX_PHY_CNTL3, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL4, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL5, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL6, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL7, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL8, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL9, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL10, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL11, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL12, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL13, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL14, RDPCSTX, id), \
+       SRI(RDPCSTX_CNTL, RDPCSTX, id), \
+       SRI(RDPCSTX_CLOCK_CNTL, RDPCSTX, id), \
+       SRI(RDPCSTX_INTERRUPT_CONTROL, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL0, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_CNTL2, RDPCSTX, id), \
+       SRI(RDPCSTX_PLL_UPDATE_DATA, RDPCSTX, id), \
+       SRI(RDPCS_TX_CR_ADDR, RDPCSTX, id), \
+       SRI(RDPCS_TX_CR_DATA, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_FUSE0, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_FUSE1, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_FUSE2, RDPCSTX, id), \
+       SRI(RDPCSTX_PHY_FUSE3, RDPCSTX, id), \
+       SRI(DPCSTX_TX_CLOCK_CNTL, DPCSTX, id), \
+       SRI(DPCSTX_TX_CNTL, DPCSTX, id), \
+       SRI(DPCSTX_DEBUG_CONFIG, DPCSTX, id), \
+       SRI(RDPCSTX_DEBUG_CONFIG, RDPCSTX, id), \
+       SR(RDPCSTX0_RDPCSTX_SCRATCH)
+
+
+#define DPCS_DCN2_REG_LIST(id) \
+       DPCS_DCN2_CMN_REG_LIST(id), \
+       SRI(RDPCSTX_PHY_RX_LD_VAL, RDPCSTX, id),\
+       SRI(RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG, RDPCSTX, id)
+
+#define LE_DCN2_REG_LIST(id) \
+               LE_DCN10_REG_LIST(id), \
+               SR(DCIO_SOFT_RESET)
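The SRI entries in the register lists above resolve per-instance register offsets the same way: `SRI(reg, block, id)` token-pastes the block name and instance number into the generated `mm<BLOCK><id>_<REG>` define. A simplified sketch with hypothetical offsets (the real SRI additionally adds a per-block base address, omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* hypothetical generated offsets for two RDPCSTX instances */
#define mmRDPCSTX0_RDPCSTX_CNTL 0x0100
#define mmRDPCSTX1_RDPCSTX_CNTL 0x0180

struct demo_le_registers { uint32_t RDPCSTX_CNTL; };

/* paste block + instance id + register name into the generated define */
#define DEMO_SRI(reg_name, block, id) \
	.reg_name = mm ## block ## id ## _ ## reg_name

/* one macro list, instantiated per link-encoder instance */
static const struct demo_le_registers demo_le_regs[2] = {
	{ DEMO_SRI(RDPCSTX_CNTL, RDPCSTX, 0) },
	{ DEMO_SRI(RDPCSTX_CNTL, RDPCSTX, 1) },
};
```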
+
 struct mpll_cfg {
        uint32_t mpllb_ana_v2i;
        uint32_t mpllb_ana_freq_vco;
index 673c83e..d875b0c 100644
@@ -236,12 +236,13 @@ void optc2_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_c
                struct dc_crtc_timing *timing)
 {
        struct optc *optc1 = DCN10TG_FROM_TG(optc);
-       /* 2 pieces of memory required for up to 5120 displays, 4 for up to 8192 */
        int mpcc_hactive = (timing->h_addressable + timing->h_border_left + timing->h_border_right)
                        / opp_cnt;
-       int memory_mask = mpcc_hactive <= 2560 ? 0x3 : 0xf;
+       uint32_t memory_mask;
        uint32_t data_fmt = 0;
 
+       ASSERT(opp_cnt == 2);
+
        /* TODO: In pseudocode but does not affect maximus, delete comment if we dont need on asic
         * REG_SET(OTG_GLOBAL_CONTROL2, 0, GLOBAL_UPDATE_LOCK_EN, 1);
         * Program OTG register MASTER_UPDATE_LOCK_DB_X/Y to the position before DP frame start
@@ -249,9 +250,17 @@ void optc2_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_c
         *              MASTER_UPDATE_LOCK_DB_X, 160,
         *              MASTER_UPDATE_LOCK_DB_Y, 240);
         */
+
+       /* 2 pieces of memory required for up to 5120 displays, 4 for up to 8192,
+        * however, for ODM combine we can simplify by always using 4.
+        * To make sure there's no overlap, each instance "reserves" 2 memories and
+        * they are uniquely combined here.
+        */
+       memory_mask = 0x3 << (opp_id[0] * 2) | 0x3 << (opp_id[1] * 2);
+
        if (REG(OPTC_MEMORY_CONFIG))
                REG_SET(OPTC_MEMORY_CONFIG, 0,
-                       OPTC_MEM_SEL, memory_mask << (optc->inst * 4));
+                       OPTC_MEM_SEL, memory_mask);
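The new mask computation above can be sketched in isolation: with ODM combine each OPP instance reserves two of the line memories (bits [1:0] for instance 0, [3:2] for instance 1, and so on), and the two per-instance masks are OR'd so paired pipes never overlap:

```c
#include <assert.h>
#include <stdint.h>

/* memory select mask for a 2-way ODM combine: each OPP id contributes a
 * 2-bit group, so distinct ids can never claim the same memory */
static uint32_t odm2_memory_mask(int opp_id0, int opp_id1)
{
	return (0x3u << (opp_id0 * 2)) | (0x3u << (opp_id1 * 2));
}
```

For example, OPPs {0,1} yield 0xF while OPPs {2,3} yield 0xF0, so two 2-way combines active at once use disjoint memories.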
 
        if (timing->pixel_encoding == PIXEL_ENCODING_YCBCR422)
                data_fmt = 1;
@@ -260,7 +269,6 @@ void optc2_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_c
 
        REG_UPDATE(OPTC_DATA_FORMAT_CONTROL, OPTC_DATA_FORMAT, data_fmt);
 
-       ASSERT(opp_cnt == 2);
        REG_SET_3(OPTC_DATA_SOURCE_SELECT, 0,
                        OPTC_NUM_OF_INPUT_SEGMENT, 1,
                        OPTC_SEG0_SRC_SEL, opp_id[0],
@@ -382,14 +390,8 @@ void optc2_setup_manual_trigger(struct timing_generator *optc)
 {
        struct optc *optc1 = DCN10TG_FROM_TG(optc);
 
-       REG_SET(OTG_MANUAL_FLOW_CONTROL, 0,
-                       MANUAL_FLOW_CONTROL, 1);
-
-       REG_SET(OTG_GLOBAL_CONTROL2, 0,
-                       MANUAL_FLOW_CONTROL_SEL, optc->inst);
-
        REG_SET_8(OTG_TRIGA_CNTL, 0,
-                       OTG_TRIGA_SOURCE_SELECT, 22,
+                       OTG_TRIGA_SOURCE_SELECT, 21,
                        OTG_TRIGA_SOURCE_PIPE_SELECT, optc->inst,
                        OTG_TRIGA_RISING_EDGE_DETECT_CNTL, 1,
                        OTG_TRIGA_FALLING_EDGE_DETECT_CNTL, 0,
index ac93fbf..239cc40 100644
@@ -106,6 +106,7 @@ void optc2_triplebuffer_lock(struct timing_generator *optc);
 void optc2_triplebuffer_unlock(struct timing_generator *optc);
 void optc2_lock_doublebuffer_disable(struct timing_generator *optc);
 void optc2_lock_doublebuffer_enable(struct timing_generator *optc);
+void optc2_setup_manual_trigger(struct timing_generator *optc);
 void optc2_program_manual_trigger(struct timing_generator *optc);
 bool optc2_is_two_pixels_per_containter(const struct dc_crtc_timing *timing);
 #endif /* __DC_OPTC_DCN20_H__ */
index cfc6991..2dafa20 100644
@@ -1,5 +1,6 @@
 /*
 * Copyright 2016 Advanced Micro Devices, Inc.
+ * Copyright 2019 Raptor Engineering, LLC
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -65,6 +66,8 @@
 
 #include "dcn/dcn_2_0_0_offset.h"
 #include "dcn/dcn_2_0_0_sh_mask.h"
+#include "dpcs/dpcs_2_0_0_offset.h"
+#include "dpcs/dpcs_2_0_0_sh_mask.h"
 
 #include "nbio/nbio_2_3_offset.h"
 
@@ -548,6 +551,7 @@ static const struct dcn10_link_enc_hpd_registers link_enc_hpd_regs[] = {
 [id] = {\
        LE_DCN10_REG_LIST(id), \
        UNIPHY_DCN2_REG_LIST(phyid), \
+       DPCS_DCN2_REG_LIST(id), \
        SRI(DP_DPHY_INTERNAL_CTRL, DP, id) \
 }
 
@@ -561,11 +565,13 @@ static const struct dcn10_link_enc_registers link_enc_regs[] = {
 };
 
 static const struct dcn10_link_enc_shift le_shift = {
-       LINK_ENCODER_MASK_SH_LIST_DCN20(__SHIFT)
+       LINK_ENCODER_MASK_SH_LIST_DCN20(__SHIFT),\
+       DPCS_DCN2_MASK_SH_LIST(__SHIFT)
 };
 
 static const struct dcn10_link_enc_mask le_mask = {
-       LINK_ENCODER_MASK_SH_LIST_DCN20(_MASK)
+       LINK_ENCODER_MASK_SH_LIST_DCN20(_MASK),\
+       DPCS_DCN2_MASK_SH_LIST(_MASK)
 };
 
 #define ipp_regs(id)\
@@ -1563,7 +1569,7 @@ static void release_dsc(struct resource_context *res_ctx,
 
 
 
-static enum dc_status add_dsc_to_stream_resource(struct dc *dc,
+enum dc_status dcn20_add_dsc_to_stream_resource(struct dc *dc,
                struct dc_state *dc_ctx,
                struct dc_stream_state *dc_stream)
 {
@@ -1578,6 +1584,9 @@ static enum dc_status add_dsc_to_stream_resource(struct dc *dc,
                if (pipe_ctx->stream != dc_stream)
                        continue;
 
+               if (pipe_ctx->stream_res.dsc)
+                       continue;
+
                acquire_dsc(&dc_ctx->res_ctx, pool, &pipe_ctx->stream_res.dsc, i);
 
                /* The number of DSCs can be less than the number of pipes */
@@ -1626,7 +1635,7 @@ enum dc_status dcn20_add_stream_to_ctx(struct dc *dc, struct dc_state *new_ctx,
 
        /* Get a DSC if required and available */
        if (result == DC_OK && dc_stream->timing.flags.DSC)
-               result = add_dsc_to_stream_resource(dc, new_ctx, dc_stream);
+               result = dcn20_add_dsc_to_stream_resource(dc, new_ctx, dc_stream);
 
        if (result == DC_OK)
                result = dcn20_build_mapped_resource(dc, new_ctx, dc_stream);
@@ -2886,12 +2895,19 @@ bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
        bool voltage_supported = false;
        bool full_pstate_supported = false;
        bool dummy_pstate_supported = false;
-       double p_state_latency_us = context->bw_ctx.dml.soc.dram_clock_change_latency_us;
-       context->bw_ctx.dml.soc.disable_dram_clock_change_vactive_support = dc->debug.disable_dram_clock_change_vactive_support;
+       double p_state_latency_us;
+
+       DC_FP_START();
+       p_state_latency_us = context->bw_ctx.dml.soc.dram_clock_change_latency_us;
+       context->bw_ctx.dml.soc.disable_dram_clock_change_vactive_support =
+               dc->debug.disable_dram_clock_change_vactive_support;
 
-       if (fast_validate)
-               return dcn20_validate_bandwidth_internal(dc, context, true);
+       if (fast_validate) {
+               voltage_supported = dcn20_validate_bandwidth_internal(dc, context, true);
 
+               DC_FP_END();
+               return voltage_supported;
+       }
 
        // Best case, we support full UCLK switch latency
        voltage_supported = dcn20_validate_bandwidth_internal(dc, context, false);
@@ -2920,6 +2936,7 @@ bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
 restore_dml_state:
        context->bw_ctx.dml.soc.dram_clock_change_latency_us = p_state_latency_us;
 
+       DC_FP_END();
        return voltage_supported;
 }
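The DC_FP_START/DC_FP_END hunk above illustrates a pattern worth calling out: once the FP context is opened at function entry, every exit path — including the new early return for `fast_validate` — must close it. A toy model of the discipline, with a counter standing in for the real kernel FP-context state:

```c
#include <assert.h>

/* stand-in for kernel FP-context state; the real guards are
 * kernel_fpu_begin()/kernel_fpu_end() behind the DC_FP_* wrappers */
static int fp_depth;

static void demo_fp_start(void) { fp_depth++; }
static void demo_fp_end(void)   { fp_depth--; }

static int demo_validate(int fast_validate)
{
	int supported;

	demo_fp_start();
	if (fast_validate) {
		supported = 1;
		demo_fp_end(); /* early-return path must also drop the context */
		return supported;
	}
	supported = 0;
	demo_fp_end();
	return supported;
}
```

This is also why the diff removes the `kernel_fpu_begin()/end()` calls from `dcn20_patch_bounding_box`: callers now own the bracketing, so nesting the raw calls inside would unbalance it.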
 
@@ -3211,7 +3228,6 @@ void dcn20_update_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
 
 void dcn20_patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_st *bb)
 {
-       kernel_fpu_begin();
        if ((int)(bb->sr_exit_time_us * 1000) != dc->bb_overrides.sr_exit_time_ns
                        && dc->bb_overrides.sr_exit_time_ns) {
                bb->sr_exit_time_us = dc->bb_overrides.sr_exit_time_ns / 1000.0;
@@ -3235,7 +3251,6 @@ void dcn20_patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_st
                bb->dram_clock_change_latency_us =
                                dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
        }
-       kernel_fpu_end();
 }
 
 static struct _vcs_dpi_soc_bounding_box_st *get_asic_rev_soc_bb(
@@ -3441,6 +3456,8 @@ static bool dcn20_resource_construct(
        enum dml_project dml_project_version =
                        get_dml_project_version(ctx->asic_id.hw_internal_rev);
 
+       DC_FP_START();
+
        ctx->dc_bios->regs = &bios_regs;
        pool->base.funcs = &dcn20_res_pool_funcs;
 
@@ -3738,10 +3755,12 @@ static bool dcn20_resource_construct(
                pool->base.oem_device = NULL;
        }
 
+       DC_FP_END();
        return true;
 
 create_fail:
 
+       DC_FP_END();
        dcn20_resource_destruct(pool);
 
        return false;
index 840ca66..f589384 100644
@@ -157,6 +157,7 @@ void dcn20_calculate_dlg_params(
 
 enum dc_status dcn20_build_mapped_resource(const struct dc *dc, struct dc_state *context, struct dc_stream_state *stream);
 enum dc_status dcn20_add_stream_to_ctx(struct dc *dc, struct dc_state *new_ctx, struct dc_stream_state *dc_stream);
+enum dc_status dcn20_add_dsc_to_stream_resource(struct dc *dc, struct dc_state *dc_ctx, struct dc_stream_state *dc_stream);
 enum dc_status dcn20_remove_stream_from_ctx(struct dc *dc, struct dc_state *new_ctx, struct dc_stream_state *dc_stream);
 enum dc_status dcn20_get_default_swizzle_mode(struct dc_plane_state *plane_state);
 
index 4763721..07684d3 100644
@@ -5,7 +5,13 @@
 DCN21 = dcn21_init.o dcn21_hubp.o dcn21_hubbub.o dcn21_resource.o \
         dcn21_hwseq.o dcn21_link_encoder.o
 
+ifdef CONFIG_X86
 CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o := -mhard-float -msse
+endif
+
+ifdef CONFIG_PPC64
+CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o := -mhard-float -maltivec
+endif
 
 ifdef CONFIG_CC_IS_GCC
 ifeq ($(call cc-ifversion, -lt, 0701, y), y)
@@ -13,6 +19,7 @@ IS_OLD_GCC = 1
 endif
 endif
 
+ifdef CONFIG_X86
 ifdef IS_OLD_GCC
 # Stack alignment mismatch, proceed with caution.
 # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
@@ -21,6 +28,7 @@ CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o += -mpreferred-stack-boundary=4
 else
 CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o += -msse2
 endif
+endif
 
 AMD_DAL_DCN21 = $(addprefix $(AMDDALPATH)/dc/dcn21/,$(DCN21))
 
index 332bf3d..216ae17 100644
@@ -169,12 +169,9 @@ static void hubp21_setup(
 void hubp21_set_viewport(
        struct hubp *hubp,
        const struct rect *viewport,
-       const struct rect *viewport_c,
-       enum dc_rotation_angle rotation)
+       const struct rect *viewport_c)
 {
        struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
-       int patched_viewport_height = 0;
-       struct dc_debug_options *debug = &hubp->ctx->dc->debug;
 
        REG_SET_2(DCSURF_PRI_VIEWPORT_DIMENSION, 0,
                  PRI_VIEWPORT_WIDTH, viewport->width,
@@ -193,31 +190,10 @@ void hubp21_set_viewport(
                  SEC_VIEWPORT_X_START, viewport->x,
                  SEC_VIEWPORT_Y_START, viewport->y);
 
-       /*
-        *      Work around for underflow issue with NV12 + rIOMMU translation
-        *      + immediate flip. This will cause hubp underflow, but will not
-        *      be user visible since underflow is in blank region
-        *      Disable w/a when rotated 180 degrees, causes vertical chroma offset
-        */
-       patched_viewport_height = viewport_c->height;
-       if (debug->nv12_iflip_vm_wa && viewport_c->height > 512 &&
-                       rotation != ROTATION_ANGLE_180) {
-               int pte_row_height = 0;
-               int pte_rows = 0;
-
-               REG_GET(DCHUBP_REQ_SIZE_CONFIG_C,
-                       PTE_ROW_HEIGHT_LINEAR_C, &pte_row_height);
-
-               pte_row_height = 1 << (pte_row_height + 3);
-               pte_rows = (viewport_c->height / pte_row_height) + 1;
-               patched_viewport_height = pte_rows * pte_row_height + 1;
-       }
-
-
        /* DC supports NV12 only at the moment */
        REG_SET_2(DCSURF_PRI_VIEWPORT_DIMENSION_C, 0,
                  PRI_VIEWPORT_WIDTH_C, viewport_c->width,
-                 PRI_VIEWPORT_HEIGHT_C, patched_viewport_height);
+                 PRI_VIEWPORT_HEIGHT_C, viewport_c->height);
 
        REG_SET_2(DCSURF_PRI_VIEWPORT_START_C, 0,
                  PRI_VIEWPORT_X_START_C, viewport_c->x,
@@ -225,13 +201,113 @@ void hubp21_set_viewport(
 
        REG_SET_2(DCSURF_SEC_VIEWPORT_DIMENSION_C, 0,
                  SEC_VIEWPORT_WIDTH_C, viewport_c->width,
-                 SEC_VIEWPORT_HEIGHT_C, patched_viewport_height);
+                 SEC_VIEWPORT_HEIGHT_C, viewport_c->height);
 
        REG_SET_2(DCSURF_SEC_VIEWPORT_START_C, 0,
                  SEC_VIEWPORT_X_START_C, viewport_c->x,
                  SEC_VIEWPORT_Y_START_C, viewport_c->y);
 }
 
+static void hubp21_apply_PLAT_54186_wa(
+               struct hubp *hubp,
+               const struct dc_plane_address *address)
+{
+       struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
+       struct dc_debug_options *debug = &hubp->ctx->dc->debug;
+       unsigned int chroma_bpe = 2;
+       unsigned int luma_addr_high_part = 0;
+       unsigned int row_height = 0;
+       unsigned int chroma_pitch = 0;
+       unsigned int viewport_c_height = 0;
+       unsigned int viewport_c_width = 0;
+       unsigned int patched_viewport_height = 0;
+       unsigned int patched_viewport_width = 0;
+       unsigned int rotation_angle = 0;
+       unsigned int pix_format = 0;
+       unsigned int h_mirror_en = 0;
+       unsigned int tile_blk_size = 64 * 1024; /* 64KB for 64KB SW, 4KB for 4KB SW */
+
+
+       if (!debug->nv12_iflip_vm_wa)
+               return;
+
+       REG_GET(DCHUBP_REQ_SIZE_CONFIG_C,
+               PTE_ROW_HEIGHT_LINEAR_C, &row_height);
+
+       REG_GET_2(DCSURF_PRI_VIEWPORT_DIMENSION_C,
+                       PRI_VIEWPORT_WIDTH_C, &viewport_c_width,
+                       PRI_VIEWPORT_HEIGHT_C, &viewport_c_height);
+
+       REG_GET(DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH_C,
+                       PRIMARY_SURFACE_ADDRESS_HIGH_C, &luma_addr_high_part);
+
+       REG_GET(DCSURF_SURFACE_PITCH_C,
+                       PITCH_C, &chroma_pitch);
+
+       chroma_pitch += 1;
+
+       REG_GET_3(DCSURF_SURFACE_CONFIG,
+                       SURFACE_PIXEL_FORMAT, &pix_format,
+                       ROTATION_ANGLE, &rotation_angle,
+                       H_MIRROR_EN, &h_mirror_en);
+
+       /* apply wa only for NV12 surface with scatter gather enabled with view port > 512 */
+       if (address->type != PLN_ADDR_TYPE_VIDEO_PROGRESSIVE ||
+                       address->video_progressive.luma_addr.high_part == 0xf4
+                       || viewport_c_height <= 512)
+               return;
+
+       switch (rotation_angle) {
+       case 0: /* 0 degree rotation */
+               row_height = 128;
+               patched_viewport_height = (viewport_c_height / row_height + 1) * row_height + 1;
+               patched_viewport_width = viewport_c_width;
+               hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
+               break;
+       case 2: /* 180 degree rotation */
+               row_height = 128;
+               patched_viewport_height = viewport_c_height + row_height;
+               patched_viewport_width = viewport_c_width;
+               hubp21->PLAT_54186_wa_chroma_addr_offset = 0 - chroma_pitch * row_height * chroma_bpe;
+               break;
+       case 1: /* 90 degree rotation */
+               row_height = 256;
+               if (h_mirror_en) {
+                       patched_viewport_height = viewport_c_height;
+                       patched_viewport_width = viewport_c_width + row_height;
+                       hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
+               } else {
+                       patched_viewport_height = viewport_c_height;
+                       patched_viewport_width = viewport_c_width + row_height;
+                       hubp21->PLAT_54186_wa_chroma_addr_offset = 0 - tile_blk_size;
+               }
+               break;
+       case 3: /* 270 degree rotation */
+               row_height = 256;
+               if (h_mirror_en) {
+                       patched_viewport_height = viewport_c_height;
+                       patched_viewport_width = viewport_c_width + row_height;
+                       hubp21->PLAT_54186_wa_chroma_addr_offset = 0 - tile_blk_size;
+               } else {
+                       patched_viewport_height = viewport_c_height;
+                       patched_viewport_width = viewport_c_width + row_height;
+                       hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
+               }
+               break;
+       default:
+               ASSERT(0);
+               break;
+       }
+
+       /* catch cases where viewport keep growing */
+       ASSERT(patched_viewport_height && patched_viewport_height < 5000);
+       ASSERT(patched_viewport_width && patched_viewport_width < 5000);
+
+       REG_UPDATE_2(DCSURF_PRI_VIEWPORT_DIMENSION_C,
+                       PRI_VIEWPORT_WIDTH_C, patched_viewport_width,
+                       PRI_VIEWPORT_HEIGHT_C, patched_viewport_height);
+}
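The 0-degree case of the workaround above pads the chroma viewport height past the next PTE row boundary, plus one extra line. The arithmetic, extracted as a standalone sketch:

```c
#include <assert.h>

/* PLAT_54186-style padding: round height up past the next multiple of the
 * PTE row height, then add one line (so the result is always strictly
 * larger than the original height) */
static unsigned int pad_viewport_height(unsigned int height,
					unsigned int pte_row_height)
{
	return (height / pte_row_height + 1) * pte_row_height + 1;
}
```

For a 540-line chroma viewport with 128-line PTE rows this gives 5 * 128 + 1 = 641; the underflow this provokes lands in the padded (blank) region, which is why it is acceptable.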
+
 void hubp21_set_vm_system_aperture_settings(struct hubp *hubp,
                struct vm_system_aperture_param *apt)
 {
@@ -602,6 +678,187 @@ void hubp21_validate_dml_output(struct hubp *hubp,
                                dml_dlg_attr->refcyc_per_meta_chunk_flip_l, dlg_attr.refcyc_per_meta_chunk_flip_l);
 }
 
+bool hubp21_program_surface_flip_and_addr(
+       struct hubp *hubp,
+       const struct dc_plane_address *address,
+       bool flip_immediate)
+{
+       struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
+       struct dc_debug_options *debug = &hubp->ctx->dc->debug;
+
+       //program flip type
+       REG_UPDATE(DCSURF_FLIP_CONTROL,
+                       SURFACE_FLIP_TYPE, flip_immediate);
+
+       // Program VMID reg
+       REG_UPDATE(VMID_SETTINGS_0,
+                       VMID, address->vmid);
+
+       if (address->type == PLN_ADDR_TYPE_GRPH_STEREO) {
+               REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_MODE_FOR_STEREOSYNC, 0x1);
+               REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_IN_STEREOSYNC, 0x1);
+
+       } else {
+               // turn off stereo if not in stereo
+               REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_MODE_FOR_STEREOSYNC, 0x0);
+               REG_UPDATE(DCSURF_FLIP_CONTROL, SURFACE_FLIP_IN_STEREOSYNC, 0x0);
+       }
+
+
+
+       /* HW automatically latch rest of address register on write to
+        * DCSURF_PRIMARY_SURFACE_ADDRESS if SURFACE_UPDATE_LOCK is not used
+        *
+        * program high first and then the low addr, order matters!
+        */
+       switch (address->type) {
+       case PLN_ADDR_TYPE_GRAPHICS:
+               /* DCN1.0 does not support const color
+                * TODO: program DCHUBBUB_RET_PATH_DCC_CFGx_0/1
+                * base on address->grph.dcc_const_color
+                * x = 0, 2, 4, 6 for pipe 0, 1, 2, 3 for rgb and luma
+                * x = 1, 3, 5, 7 for pipe 0, 1, 2, 3 for chroma
+                */
+
+               if (address->grph.addr.quad_part == 0)
+                       break;
+
+               REG_UPDATE_2(DCSURF_SURFACE_CONTROL,
+                               PRIMARY_SURFACE_TMZ, address->tmz_surface,
+                               PRIMARY_META_SURFACE_TMZ, address->tmz_surface);
+
+               if (address->grph.meta_addr.quad_part != 0) {
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS_HIGH, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS_HIGH,
+                                       address->grph.meta_addr.high_part);
+
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS,
+                                       address->grph.meta_addr.low_part);
+               }
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH, 0,
+                               PRIMARY_SURFACE_ADDRESS_HIGH,
+                               address->grph.addr.high_part);
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS, 0,
+                               PRIMARY_SURFACE_ADDRESS,
+                               address->grph.addr.low_part);
+               break;
+       case PLN_ADDR_TYPE_VIDEO_PROGRESSIVE:
+               if (address->video_progressive.luma_addr.quad_part == 0
+                               || address->video_progressive.chroma_addr.quad_part == 0)
+                       break;
+
+               REG_UPDATE_4(DCSURF_SURFACE_CONTROL,
+                               PRIMARY_SURFACE_TMZ, address->tmz_surface,
+                               PRIMARY_SURFACE_TMZ_C, address->tmz_surface,
+                               PRIMARY_META_SURFACE_TMZ, address->tmz_surface,
+                               PRIMARY_META_SURFACE_TMZ_C, address->tmz_surface);
+
+               if (address->video_progressive.luma_meta_addr.quad_part != 0) {
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS_HIGH_C, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS_HIGH_C,
+                                       address->video_progressive.chroma_meta_addr.high_part);
+
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS_C, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS_C,
+                                       address->video_progressive.chroma_meta_addr.low_part);
+
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS_HIGH, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS_HIGH,
+                                       address->video_progressive.luma_meta_addr.high_part);
+
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS,
+                                       address->video_progressive.luma_meta_addr.low_part);
+               }
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH_C, 0,
+                               PRIMARY_SURFACE_ADDRESS_HIGH_C,
+                               address->video_progressive.chroma_addr.high_part);
+
+               if (debug->nv12_iflip_vm_wa) {
+                       REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS_C, 0,
+                                       PRIMARY_SURFACE_ADDRESS_C,
+                                       address->video_progressive.chroma_addr.low_part + hubp21->PLAT_54186_wa_chroma_addr_offset);
+               } else {
+                       REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS_C, 0,
+                                       PRIMARY_SURFACE_ADDRESS_C,
+                                       address->video_progressive.chroma_addr.low_part);
+               }
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH, 0,
+                               PRIMARY_SURFACE_ADDRESS_HIGH,
+                               address->video_progressive.luma_addr.high_part);
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS, 0,
+                               PRIMARY_SURFACE_ADDRESS,
+                               address->video_progressive.luma_addr.low_part);
+               break;
+       case PLN_ADDR_TYPE_GRPH_STEREO:
+               if (address->grph_stereo.left_addr.quad_part == 0)
+                       break;
+               if (address->grph_stereo.right_addr.quad_part == 0)
+                       break;
+
+               REG_UPDATE_8(DCSURF_SURFACE_CONTROL,
+                               PRIMARY_SURFACE_TMZ, address->tmz_surface,
+                               PRIMARY_SURFACE_TMZ_C, address->tmz_surface,
+                               PRIMARY_META_SURFACE_TMZ, address->tmz_surface,
+                               PRIMARY_META_SURFACE_TMZ_C, address->tmz_surface,
+                               SECONDARY_SURFACE_TMZ, address->tmz_surface,
+                               SECONDARY_SURFACE_TMZ_C, address->tmz_surface,
+                               SECONDARY_META_SURFACE_TMZ, address->tmz_surface,
+                               SECONDARY_META_SURFACE_TMZ_C, address->tmz_surface);
+
+               if (address->grph_stereo.right_meta_addr.quad_part != 0) {
+
+                       REG_SET(DCSURF_SECONDARY_META_SURFACE_ADDRESS_HIGH, 0,
+                                       SECONDARY_META_SURFACE_ADDRESS_HIGH,
+                                       address->grph_stereo.right_meta_addr.high_part);
+
+                       REG_SET(DCSURF_SECONDARY_META_SURFACE_ADDRESS, 0,
+                                       SECONDARY_META_SURFACE_ADDRESS,
+                                       address->grph_stereo.right_meta_addr.low_part);
+               }
+               if (address->grph_stereo.left_meta_addr.quad_part != 0) {
+
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS_HIGH, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS_HIGH,
+                                       address->grph_stereo.left_meta_addr.high_part);
+
+                       REG_SET(DCSURF_PRIMARY_META_SURFACE_ADDRESS, 0,
+                                       PRIMARY_META_SURFACE_ADDRESS,
+                                       address->grph_stereo.left_meta_addr.low_part);
+               }
+
+               REG_SET(DCSURF_SECONDARY_SURFACE_ADDRESS_HIGH, 0,
+                               SECONDARY_SURFACE_ADDRESS_HIGH,
+                               address->grph_stereo.right_addr.high_part);
+
+               REG_SET(DCSURF_SECONDARY_SURFACE_ADDRESS, 0,
+                               SECONDARY_SURFACE_ADDRESS,
+                               address->grph_stereo.right_addr.low_part);
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH, 0,
+                               PRIMARY_SURFACE_ADDRESS_HIGH,
+                               address->grph_stereo.left_addr.high_part);
+
+               REG_SET(DCSURF_PRIMARY_SURFACE_ADDRESS, 0,
+                               PRIMARY_SURFACE_ADDRESS,
+                               address->grph_stereo.left_addr.low_part);
+               break;
+       default:
+               BREAK_TO_DEBUGGER();
+               break;
+       }
+
+       hubp->request_address = *address;
+
+       return true;
+}
+
 void hubp21_init(struct hubp *hubp)
 {
        // DEDCN21-133: Inconsistent row starting line for flip between DPTE and Meta
@@ -614,7 +871,7 @@ void hubp21_init(struct hubp *hubp)
 static struct hubp_funcs dcn21_hubp_funcs = {
        .hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
        .hubp_is_triplebuffer_enabled = hubp2_is_triplebuffer_enabled,
-       .hubp_program_surface_flip_and_addr = hubp2_program_surface_flip_and_addr,
+       .hubp_program_surface_flip_and_addr = hubp21_program_surface_flip_and_addr,
        .hubp_program_surface_config = hubp1_program_surface_config,
        .hubp_is_flip_pending = hubp1_is_flip_pending,
        .hubp_setup = hubp21_setup,
@@ -623,6 +880,7 @@ static struct hubp_funcs dcn21_hubp_funcs = {
        .set_blank = hubp1_set_blank,
        .dcc_control = hubp1_dcc_control,
        .mem_program_viewport = hubp21_set_viewport,
+       .apply_PLAT_54186_wa = hubp21_apply_PLAT_54186_wa,
        .set_cursor_attributes  = hubp2_cursor_set_attributes,
        .set_cursor_position    = hubp1_cursor_set_position,
        .hubp_clk_cntl = hubp1_clk_cntl,
index aeda719..9873b6c 100644 (file)
@@ -108,6 +108,7 @@ struct dcn21_hubp {
        const struct dcn_hubp2_registers *hubp_regs;
        const struct dcn_hubp2_shift *hubp_shift;
        const struct dcn_hubp2_mask *hubp_mask;
+       int PLAT_54186_wa_chroma_addr_offset;
 };
 
 bool hubp21_construct(
index 1d7a1a5..033d5d7 100644 (file)
@@ -33,6 +33,45 @@ struct dcn21_link_encoder {
        struct dpcssys_phy_seq_cfg phy_seq_cfg;
 };
 
+#define DPCS_DCN21_MASK_SH_LIST(mask_sh)\
+       DPCS_DCN2_MASK_SH_LIST(mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_TX_VBOOST_LVL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_MPLLB_CP_PROP_GS, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_RX_VREF_CTRL, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_MPLLB_CP_INT_GS, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG, RDPCS_DMCU_DPALT_DIS_BLOCK_REG, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL15, RDPCS_PHY_SUP_PRE_HP, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL15, RDPCS_PHY_DP_TX0_VREGDRV_BYP, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL15, RDPCS_PHY_DP_TX1_VREGDRV_BYP, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL15, RDPCS_PHY_DP_TX2_VREGDRV_BYP, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL15, RDPCS_PHY_DP_TX3_VREGDRV_BYP, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_TX0_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_TX0_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE0, RDPCS_PHY_DP_TX0_EQ_POST, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_TX1_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_TX1_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE1, RDPCS_PHY_DP_TX1_EQ_POST, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_TX2_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_TX2_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE2, RDPCS_PHY_DP_TX2_EQ_POST, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DP_TX3_EQ_MAIN, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DCO_FINETUNE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DCO_RANGE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DP_TX3_EQ_PRE, mask_sh),\
+       LE_SF(RDPCSTX0_RDPCSTX_PHY_FUSE3, RDPCS_PHY_DP_TX3_EQ_POST, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYA_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYB_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYC_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYD_SOFT_RESET, mask_sh),\
+       LE_SF(DCIO_SOFT_RESET, UNIPHYE_SOFT_RESET, mask_sh)
+
+#define DPCS_DCN21_REG_LIST(id) \
+       DPCS_DCN2_REG_LIST(id),\
+       SRI(RDPCSTX_PHY_CNTL15, RDPCSTX, id),\
+       SRI(RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG, RDPCSTX, id)
+
 #define LINK_ENCODER_MASK_SH_LIST_DCN21(mask_sh)\
        LINK_ENCODER_MASK_SH_LIST_DCN20(mask_sh),\
        LE_SF(UNIPHYA_CHANNEL_XBAR_CNTL, UNIPHY_CHANNEL0_XBAR_SOURCE, mask_sh),\
index c865b95..c76449f 100644 (file)
@@ -1,5 +1,6 @@
 /*
 * Copyright 2018 Advanced Micro Devices, Inc.
+ * Copyright 2019 Raptor Engineering, LLC
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -62,6 +63,8 @@
 
 #include "dcn20/dcn20_dwb.h"
 #include "dcn20/dcn20_mmhubbub.h"
+#include "dpcs/dpcs_2_1_0_offset.h"
+#include "dpcs/dpcs_2_1_0_sh_mask.h"
 
 #include "renoir_ip_offset.h"
 #include "dcn/dcn_2_1_0_offset.h"
@@ -993,7 +996,8 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
 {
        int i;
 
-       kernel_fpu_begin();
+       DC_FP_START();
+
        if (dc->bb_overrides.sr_exit_time_ns) {
                for (i = 0; i < WM_SET_COUNT; i++) {
                          dc->clk_mgr->bw_params->wm_table.entries[i].sr_exit_time_us =
@@ -1019,7 +1023,7 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
                }
        }
 
-       kernel_fpu_end();
+       DC_FP_END();
 }
 
 void dcn21_calculate_wm(
@@ -1319,12 +1323,6 @@ struct display_stream_compressor *dcn21_dsc_create(
 
 static void update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
 {
-       /*
-       TODO: Fix this function to calcualte correct values.
-       There are known issues with this function currently
-       that will need to be investigated. Use hardcoded known good values for now.
-
-
        struct dcn21_resource_pool *pool = TO_DCN21_RES_POOL(dc->res_pool);
        struct clk_limit_table *clk_table = &bw_params->clk_table;
        int i;
@@ -1339,11 +1337,10 @@ static void update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_param
                dcn2_1_soc.clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;
                dcn2_1_soc.clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz;
                dcn2_1_soc.clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz;
-               dcn2_1_soc.clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 16 / 1000;
+               dcn2_1_soc.clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2;
        }
-       dcn2_1_soc.clock_limits[i] = dcn2_1_soc.clock_limits[i - i];
+       dcn2_1_soc.clock_limits[i] = dcn2_1_soc.clock_limits[i - 1];
        dcn2_1_soc.num_states = i;
-       */
 }
 
 /* Temporary Place holder until we can get them from fuse */
@@ -1497,8 +1494,9 @@ static const struct encoder_feature_support link_enc_feature = {
 
 #define link_regs(id, phyid)\
 [id] = {\
-       LE_DCN10_REG_LIST(id), \
+       LE_DCN2_REG_LIST(id), \
        UNIPHY_DCN2_REG_LIST(phyid), \
+       DPCS_DCN21_REG_LIST(id), \
        SRI(DP_DPHY_INTERNAL_CTRL, DP, id) \
 }
 
@@ -1537,11 +1535,13 @@ static const struct dcn10_link_enc_hpd_registers link_enc_hpd_regs[] = {
 };
 
 static const struct dcn10_link_enc_shift le_shift = {
-       LINK_ENCODER_MASK_SH_LIST_DCN20(__SHIFT)
+       LINK_ENCODER_MASK_SH_LIST_DCN20(__SHIFT),\
+       DPCS_DCN21_MASK_SH_LIST(__SHIFT)
 };
 
 static const struct dcn10_link_enc_mask le_mask = {
-       LINK_ENCODER_MASK_SH_LIST_DCN20(_MASK)
+       LINK_ENCODER_MASK_SH_LIST_DCN20(_MASK),\
+       DPCS_DCN21_MASK_SH_LIST(_MASK)
 };
 
 static int map_transmitter_id_to_phy_instance(
@@ -1776,41 +1776,41 @@ static bool dcn21_resource_construct(
                if ((pipe_fuses & (1 << i)) != 0)
                        continue;
 
-               pool->base.hubps[i] = dcn21_hubp_create(ctx, i);
-               if (pool->base.hubps[i] == NULL) {
+               pool->base.hubps[j] = dcn21_hubp_create(ctx, i);
+               if (pool->base.hubps[j] == NULL) {
                        BREAK_TO_DEBUGGER();
                        dm_error(
                                "DC: failed to create memory input!\n");
                        goto create_fail;
                }
 
-               pool->base.ipps[i] = dcn21_ipp_create(ctx, i);
-               if (pool->base.ipps[i] == NULL) {
+               pool->base.ipps[j] = dcn21_ipp_create(ctx, i);
+               if (pool->base.ipps[j] == NULL) {
                        BREAK_TO_DEBUGGER();
                        dm_error(
                                "DC: failed to create input pixel processor!\n");
                        goto create_fail;
                }
 
-               pool->base.dpps[i] = dcn21_dpp_create(ctx, i);
-               if (pool->base.dpps[i] == NULL) {
+               pool->base.dpps[j] = dcn21_dpp_create(ctx, i);
+               if (pool->base.dpps[j] == NULL) {
                        BREAK_TO_DEBUGGER();
                        dm_error(
                                "DC: failed to create dpps!\n");
                        goto create_fail;
                }
 
-               pool->base.opps[i] = dcn21_opp_create(ctx, i);
-               if (pool->base.opps[i] == NULL) {
+               pool->base.opps[j] = dcn21_opp_create(ctx, i);
+               if (pool->base.opps[j] == NULL) {
                        BREAK_TO_DEBUGGER();
                        dm_error(
                                "DC: failed to create output pixel processor!\n");
                        goto create_fail;
                }
 
-               pool->base.timing_generators[i] = dcn21_timing_generator_create(
+               pool->base.timing_generators[j] = dcn21_timing_generator_create(
                                ctx, i);
-               if (pool->base.timing_generators[i] == NULL) {
+               if (pool->base.timing_generators[j] == NULL) {
                        BREAK_TO_DEBUGGER();
                        dm_error("DC: failed to create tg!\n");
                        goto create_fail;
index a3d1be2..b52ba6f 100644 (file)
@@ -220,6 +220,7 @@ struct dm_bl_data_point {
 };
 
 /* Total size of the structure should not exceed 256 bytes */
+#define BL_DATA_POINTS 99
 struct dm_acpi_atif_backlight_caps {
        uint16_t size; /* Bytes 0-1 (2 bytes) */
        uint16_t flags; /* Bytes 2-3 (2 bytes) */
@@ -229,7 +230,7 @@ struct dm_acpi_atif_backlight_caps {
        uint8_t  min_input_signal; /* Byte 7 */
        uint8_t  max_input_signal; /* Byte 8 */
        uint8_t  num_data_points; /* Byte 9 */
-       struct dm_bl_data_point data_points[99]; /* Bytes 10-207 (198 bytes)*/
+       struct dm_bl_data_point data_points[BL_DATA_POINTS]; /* Bytes 10-207 (198 bytes)*/
 };
 
 enum dm_acpi_display_type {
index fb63580..7ee8b84 100644 (file)
@@ -1,5 +1,6 @@
 #
 # Copyright 2017 Advanced Micro Devices, Inc.
+# Copyright 2019 Raptor Engineering, LLC
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the "Software"),
 # It provides the general basic services required by other DAL
 # subcomponents.
 
+ifdef CONFIG_X86
 dml_ccflags := -mhard-float -msse
+endif
+
+ifdef CONFIG_PPC64
+dml_ccflags := -mhard-float -maltivec
+endif
 
 ifdef CONFIG_CC_IS_GCC
 ifeq ($(call cc-ifversion, -lt, 0701, y), y)
@@ -32,6 +39,7 @@ IS_OLD_GCC = 1
 endif
 endif
 
+ifdef CONFIG_X86
 ifdef IS_OLD_GCC
 # Stack alignment mismatch, proceed with caution.
 # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
@@ -40,6 +48,7 @@ dml_ccflags += -mpreferred-stack-boundary=4
 else
 dml_ccflags += -msse2
 endif
+endif
 
 CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_ccflags)
 
index 9df24ec..ca80784 100644 (file)
@@ -107,10 +107,10 @@ static unsigned int get_bytes_per_element(enum source_format_class source_format
 
 static bool is_dual_plane(enum source_format_class source_format)
 {
-       bool ret_val = 0;
+       bool ret_val = false;
 
        if ((source_format == dm_420_8) || (source_format == dm_420_10))
-               ret_val = 1;
+               ret_val = true;
 
        return ret_val;
 }
@@ -240,8 +240,8 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
        unsigned int swath_bytes_c = 0;
        unsigned int full_swath_bytes_packed_l = 0;
        unsigned int full_swath_bytes_packed_c = 0;
-       bool req128_l = 0;
-       bool req128_c = 0;
+       bool req128_l = false;
+       bool req128_c = false;
        bool surf_linear = (pipe_src_param.sw_mode == dm_sw_linear);
        bool surf_vert = (pipe_src_param.source_scan == dm_vert);
        unsigned int log2_swath_height_l = 0;
@@ -264,13 +264,13 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
                total_swath_bytes = 2 * full_swath_bytes_packed_l + 2 * full_swath_bytes_packed_c;
 
                if (total_swath_bytes <= detile_buf_size_in_bytes) { //full 256b request
-                       req128_l = 0;
-                       req128_c = 0;
+                       req128_l = false;
+                       req128_c = false;
                        swath_bytes_l = full_swath_bytes_packed_l;
                        swath_bytes_c = full_swath_bytes_packed_c;
                } else { //128b request (for luma only for yuv420 8bpc)
-                       req128_l = 1;
-                       req128_c = 0;
+                       req128_l = true;
+                       req128_c = false;
                        swath_bytes_l = full_swath_bytes_packed_l / 2;
                        swath_bytes_c = full_swath_bytes_packed_c;
                }
@@ -280,9 +280,9 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
                total_swath_bytes = 2 * full_swath_bytes_packed_l;
 
                if (total_swath_bytes <= detile_buf_size_in_bytes)
-                       req128_l = 0;
+                       req128_l = false;
                else
-                       req128_l = 1;
+                       req128_l = true;
 
                swath_bytes_l = total_swath_bytes;
                swath_bytes_c = 0;
@@ -670,7 +670,7 @@ static void get_surf_rq_param(struct display_mode_lib *mode_lib,
                const display_pipe_source_params_st pipe_src_param,
                bool is_chroma)
 {
-       bool mode_422 = 0;
+       bool mode_422 = false;
        unsigned int vp_width = 0;
        unsigned int vp_height = 0;
        unsigned int data_pitch = 0;
@@ -958,7 +958,7 @@ static void dml20_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
        // Source
 //             dcc_en              = src.dcc;
        dual_plane = is_dual_plane((enum source_format_class)(src->source_format));
-       mode_422 = 0; // TODO
+       mode_422 = false; // TODO
        access_dir = (src->source_scan == dm_vert); // vp access direction: horizontal or vertical accessed
 //      bytes_per_element_l = get_bytes_per_element(source_format_class(src.source_format), 0);
 //      bytes_per_element_c = get_bytes_per_element(source_format_class(src.source_format), 1);
index 1e6aeb1..287b7a0 100644 (file)
@@ -107,10 +107,10 @@ static unsigned int get_bytes_per_element(enum source_format_class source_format
 
 static bool is_dual_plane(enum source_format_class source_format)
 {
-       bool ret_val = 0;
+       bool ret_val = false;
 
        if ((source_format == dm_420_8) || (source_format == dm_420_10))
-               ret_val = 1;
+               ret_val = true;
 
        return ret_val;
 }
@@ -240,8 +240,8 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
        unsigned int swath_bytes_c = 0;
        unsigned int full_swath_bytes_packed_l = 0;
        unsigned int full_swath_bytes_packed_c = 0;
-       bool req128_l = 0;
-       bool req128_c = 0;
+       bool req128_l = false;
+       bool req128_c = false;
        bool surf_linear = (pipe_src_param.sw_mode == dm_sw_linear);
        bool surf_vert = (pipe_src_param.source_scan == dm_vert);
        unsigned int log2_swath_height_l = 0;
@@ -264,13 +264,13 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
                total_swath_bytes = 2 * full_swath_bytes_packed_l + 2 * full_swath_bytes_packed_c;
 
                if (total_swath_bytes <= detile_buf_size_in_bytes) { //full 256b request
-                       req128_l = 0;
-                       req128_c = 0;
+                       req128_l = false;
+                       req128_c = false;
                        swath_bytes_l = full_swath_bytes_packed_l;
                        swath_bytes_c = full_swath_bytes_packed_c;
                } else { //128b request (for luma only for yuv420 8bpc)
-                       req128_l = 1;
-                       req128_c = 0;
+                       req128_l = true;
+                       req128_c = false;
                        swath_bytes_l = full_swath_bytes_packed_l / 2;
                        swath_bytes_c = full_swath_bytes_packed_c;
                }
@@ -280,9 +280,9 @@ static void handle_det_buf_split(struct display_mode_lib *mode_lib,
                total_swath_bytes = 2 * full_swath_bytes_packed_l;
 
                if (total_swath_bytes <= detile_buf_size_in_bytes)
-                       req128_l = 0;
+                       req128_l = false;
                else
-                       req128_l = 1;
+                       req128_l = true;
 
                swath_bytes_l = total_swath_bytes;
                swath_bytes_c = 0;
@@ -670,7 +670,7 @@ static void get_surf_rq_param(struct display_mode_lib *mode_lib,
                const display_pipe_source_params_st pipe_src_param,
                bool is_chroma)
 {
-       bool mode_422 = 0;
+       bool mode_422 = false;
        unsigned int vp_width = 0;
        unsigned int vp_height = 0;
        unsigned int data_pitch = 0;
@@ -959,7 +959,7 @@ static void dml20v2_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
        // Source
 //             dcc_en              = src.dcc;
        dual_plane = is_dual_plane((enum source_format_class)(src->source_format));
-       mode_422 = 0; // TODO
+       mode_422 = false; // TODO
        access_dir = (src->source_scan == dm_vert); // vp access direction: horizontal or vertical accessed
 //      bytes_per_element_l = get_bytes_per_element(source_format_class(src.source_format), 0);
 //      bytes_per_element_c = get_bytes_per_element(source_format_class(src.source_format), 1);
index 945291d..b6d3466 100644 (file)
@@ -4121,11 +4121,11 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
        }
        for (i = 0; i <= mode_lib->vba.soc.num_states; i++) {
                for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
-                       locals->RequiresDSC[i][k] = 0;
+                       locals->RequiresDSC[i][k] = false;
                        locals->RequiresFEC[i][k] = 0;
                        if (mode_lib->vba.BlendingAndTiming[k] == k) {
                                if (mode_lib->vba.Output[k] == dm_hdmi) {
-                                       locals->RequiresDSC[i][k] = 0;
+                                       locals->RequiresDSC[i][k] = false;
                                        locals->RequiresFEC[i][k] = 0;
                                        locals->OutputBppPerState[i][k] = TruncToValidBPP(
                                                        dml_min(600.0, mode_lib->vba.PHYCLKPerState[i]) / mode_lib->vba.PixelClockBackEnd[k] * 24,
@@ -5204,7 +5204,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                        mode_lib->vba.ODMCombineEnabled[k] =
                                        locals->ODMCombineEnablePerState[mode_lib->vba.VoltageLevel][k];
                } else {
-                       mode_lib->vba.ODMCombineEnabled[k] = 0;
+                       mode_lib->vba.ODMCombineEnabled[k] = false;
                }
                mode_lib->vba.DSCEnabled[k] =
                                locals->RequiresDSC[mode_lib->vba.VoltageLevel][k];
index e60af38..a38baa7 100644 (file)
@@ -82,10 +82,10 @@ static unsigned int get_bytes_per_element(enum source_format_class source_format
 
 static bool is_dual_plane(enum source_format_class source_format)
 {
-       bool ret_val = 0;
+       bool ret_val = false;
 
        if ((source_format == dm_420_8) || (source_format == dm_420_10))
-               ret_val = 1;
+               ret_val = true;
 
        return ret_val;
 }
@@ -222,8 +222,8 @@ static void handle_det_buf_split(
        unsigned int swath_bytes_c = 0;
        unsigned int full_swath_bytes_packed_l = 0;
        unsigned int full_swath_bytes_packed_c = 0;
-       bool req128_l = 0;
-       bool req128_c = 0;
+       bool req128_l = false;
+       bool req128_c = false;
        bool surf_linear = (pipe_src_param.sw_mode == dm_sw_linear);
        bool surf_vert = (pipe_src_param.source_scan == dm_vert);
        unsigned int log2_swath_height_l = 0;
@@ -248,13 +248,13 @@ static void handle_det_buf_split(
                total_swath_bytes = 2 * full_swath_bytes_packed_l + 2 * full_swath_bytes_packed_c;
 
                if (total_swath_bytes <= detile_buf_size_in_bytes) { //full 256b request
-                       req128_l = 0;
-                       req128_c = 0;
+                       req128_l = false;
+                       req128_c = false;
                        swath_bytes_l = full_swath_bytes_packed_l;
                        swath_bytes_c = full_swath_bytes_packed_c;
                } else { //128b request (for luma only for yuv420 8bpc)
-                       req128_l = 1;
-                       req128_c = 0;
+                       req128_l = true;
+                       req128_c = false;
                        swath_bytes_l = full_swath_bytes_packed_l / 2;
                        swath_bytes_c = full_swath_bytes_packed_c;
                }
@@ -264,9 +264,9 @@ static void handle_det_buf_split(
                total_swath_bytes = 2 * full_swath_bytes_packed_l;
 
                if (total_swath_bytes <= detile_buf_size_in_bytes)
-                       req128_l = 0;
+                       req128_l = false;
                else
-                       req128_l = 1;
+                       req128_l = true;
 
                swath_bytes_l = total_swath_bytes;
                swath_bytes_c = 0;
@@ -679,7 +679,7 @@ static void get_surf_rq_param(
                const display_pipe_params_st pipe_param,
                bool is_chroma)
 {
-       bool mode_422 = 0;
+       bool mode_422 = false;
        unsigned int vp_width = 0;
        unsigned int vp_height = 0;
        unsigned int data_pitch = 0;
@@ -1010,7 +1010,7 @@ static void dml_rq_dlg_get_dlg_params(
        // Source
        //             dcc_en              = src.dcc;
        dual_plane = is_dual_plane((enum source_format_class) (src->source_format));
-       mode_422 = 0; // FIXME
+       mode_422 = false; // FIXME
        access_dir = (src->source_scan == dm_vert); // vp access direction: horizontal or vertical accessed
                                                    //      bytes_per_element_l = get_bytes_per_element(source_format_class(src.source_format), 0);
                                                    //      bytes_per_element_c = get_bytes_per_element(source_format_class(src.source_format), 1);
index 220d5e6..dbf6a02 100644 (file)
@@ -278,6 +278,7 @@ struct _vcs_dpi_display_output_params_st {
        int output_type;
        int output_format;
        int dsc_slices;
+       int max_audio_sample_rate;
        struct writeback_st wb;
 };
 
index 15b72a8..66ca014 100644 (file)
@@ -454,7 +454,7 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
                                dout->dp_lanes;
                /* TODO: Needs to be set based on dout->audio.audio_sample_rate_khz/sample_layout */
                mode_lib->vba.AudioSampleRate[mode_lib->vba.NumberOfActivePlanes] =
-                       44.1 * 1000;
+                       dout->max_audio_sample_rate;
                mode_lib->vba.AudioSampleLayout[mode_lib->vba.NumberOfActivePlanes] =
                        1;
                mode_lib->vba.DRAMClockChangeLatencyOverride = 0.0;
index 641ffb7..3f66868 100644 (file)
@@ -2,7 +2,13 @@
 #
 # Makefile for the 'dsc' sub-component of DAL.
 
+ifdef CONFIG_X86
 dsc_ccflags := -mhard-float -msse
+endif
+
+ifdef CONFIG_PPC64
+dsc_ccflags := -mhard-float -maltivec
+endif
 
 ifdef CONFIG_CC_IS_GCC
 ifeq ($(call cc-ifversion, -lt, 0701, y), y)
@@ -10,6 +16,7 @@ IS_OLD_GCC = 1
 endif
 endif
 
+ifdef CONFIG_X86
 ifdef IS_OLD_GCC
 # Stack alignment mismatch, proceed with caution.
 # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
@@ -18,6 +25,7 @@ dsc_ccflags += -mpreferred-stack-boundary=4
 else
 dsc_ccflags += -msse2
 endif
+endif
 
 CFLAGS_$(AMDDALPATH)/dc/dsc/rc_calc.o := $(dsc_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dsc/rc_calc_dpi.o := $(dsc_ccflags)
index d2423ad..8b78fcb 100644 (file)
@@ -29,6 +29,9 @@
 
 /* This module's internal functions */
 
+/* default DSC policy target bitrate limit is 16bpp */
+static uint32_t dsc_policy_max_target_bpp_limit = 16;
+
 static uint32_t dc_dsc_bandwidth_in_kbps_from_timing(
        const struct dc_crtc_timing *timing)
 {
@@ -757,7 +760,7 @@ done:
        return is_dsc_possible;
 }
 
-bool dc_dsc_parse_dsc_dpcd(const uint8_t *dpcd_dsc_basic_data, const uint8_t *dpcd_dsc_ext_data, struct dsc_dec_dpcd_caps *dsc_sink_caps)
+bool dc_dsc_parse_dsc_dpcd(const struct dc *dc, const uint8_t *dpcd_dsc_basic_data, const uint8_t *dpcd_dsc_ext_data, struct dsc_dec_dpcd_caps *dsc_sink_caps)
 {
        if (!dpcd_dsc_basic_data)
                return false;
@@ -810,6 +813,23 @@ bool dc_dsc_parse_dsc_dpcd(const uint8_t *dpcd_dsc_basic_data, const uint8_t *dp
        if (!dsc_bpp_increment_div_from_dpcd(dpcd_dsc_basic_data[DP_DSC_BITS_PER_PIXEL_INC - DP_DSC_SUPPORT], &dsc_sink_caps->bpp_increment_div))
                return false;
 
+       if (dc->debug.dsc_bpp_increment_div) {
+               /* dsc_bpp_increment_div should onl be 1, 2, 4, 8 or 16, but rather than rejecting invalid values,
+                * we'll accept all and get it into range. This also makes the above check against 0 redundant,
+                * but that one stresses out the override will be only used if it's not 0.
+                */
+               if (dc->debug.dsc_bpp_increment_div >= 1)
+                       dsc_sink_caps->bpp_increment_div = 1;
+               if (dc->debug.dsc_bpp_increment_div >= 2)
+                       dsc_sink_caps->bpp_increment_div = 2;
+               if (dc->debug.dsc_bpp_increment_div >= 4)
+                       dsc_sink_caps->bpp_increment_div = 4;
+               if (dc->debug.dsc_bpp_increment_div >= 8)
+                       dsc_sink_caps->bpp_increment_div = 8;
+               if (dc->debug.dsc_bpp_increment_div >= 16)
+                       dsc_sink_caps->bpp_increment_div = 16;
+       }
+
        /* Extended caps */
        if (dpcd_dsc_ext_data == NULL) { // Extended DPCD DSC data can be null, e.g. because it doesn't apply to SST
                dsc_sink_caps->branch_overall_throughput_0_mps = 0;
@@ -951,7 +971,12 @@ void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing, struct dc
        default:
                return;
        }
-       /* internal upper limit to 16 bpp */
-       if (policy->max_target_bpp > 16)
-               policy->max_target_bpp = 16;
+       /* internal upper limit, default 16 bpp */
+       if (policy->max_target_bpp > dsc_policy_max_target_bpp_limit)
+               policy->max_target_bpp = dsc_policy_max_target_bpp_limit;
+}
+
+void dc_dsc_policy_set_max_target_bpp_limit(uint32_t limit)
+{
+       dsc_policy_max_target_bpp_limit = limit;
 }
index 735f419..459f95f 100644 (file)
@@ -113,7 +113,8 @@ struct dwbc {
        int wb_src_plane_inst;/*hubp, mpcc, inst*/
        bool update_privacymask;
        uint32_t mask_id;
-
+        int otg_inst;
+        bool mvc_cfg;
 };
 
 struct dwbc_funcs {
index 85a34dd..6861459 100644 (file)
@@ -82,9 +82,10 @@ struct hubp_funcs {
        void (*mem_program_viewport)(
                        struct hubp *hubp,
                        const struct rect *viewport,
-                       const struct rect *viewport_c,
-                       enum dc_rotation_angle rotation);
-                       /* rotation needed for Renoir workaround */
+                       const struct rect *viewport_c);
+
+       void (*apply_PLAT_54186_wa)(struct hubp *hubp,
+                       const struct dc_plane_address *address);
 
        bool (*hubp_program_surface_flip_and_addr)(
                struct hubp *hubp,
index e9c6021..df32046 100644 (file)
@@ -149,16 +149,18 @@ struct hw_sequencer_funcs {
 
        /* Writeback Related */
        void (*update_writeback)(struct dc *dc,
-                       const struct dc_stream_status *stream_status,
                        struct dc_writeback_info *wb_info,
                        struct dc_state *context);
        void (*enable_writeback)(struct dc *dc,
-                       const struct dc_stream_status *stream_status,
                        struct dc_writeback_info *wb_info,
                        struct dc_state *context);
        void (*disable_writeback)(struct dc *dc,
                        unsigned int dwb_pipe_inst);
 
+       bool (*mmhubbub_warmup)(struct dc *dc,
+                       unsigned int num_dwb,
+                       struct dc_writeback_info *wb_info);
+
        /* Clock Related */
        enum dc_status (*set_clock)(struct dc *dc,
                        enum dc_clock_type clock_type,
index 7a85abc..5ae8ada 100644 (file)
@@ -177,4 +177,6 @@ void update_audio_usage(
 
 unsigned int resource_pixel_format_to_bpp(enum surface_pixel_format format);
 
+void get_audio_check(struct audio_info *aud_modes,
+       struct audio_check *aud_chk);
 #endif /* DRIVERS_GPU_DRM_AMD_DC_DEV_DC_INC_RESOURCE_H_ */
index 13b9a9b..c34eba1 100644 (file)
@@ -1,5 +1,6 @@
 /*
  * Copyright 2012-16 Advanced Micro Devices, Inc.
+ * Copyright 2019 Raptor Engineering, LLC
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
 #define dm_error(fmt, ...) DRM_ERROR(fmt, ##__VA_ARGS__)
 
 #if defined(CONFIG_DRM_AMD_DC_DCN)
+#if defined(CONFIG_X86)
 #include <asm/fpu/api.h>
+#define DC_FP_START() kernel_fpu_begin()
+#define DC_FP_END() kernel_fpu_end()
+#elif defined(CONFIG_PPC64)
+#include <asm/switch_to.h>
+#include <asm/cputable.h>
+#define DC_FP_START() { \
+       if (cpu_has_feature(CPU_FTR_VSX_COMP)) { \
+               preempt_disable(); \
+               enable_kernel_vsx(); \
+       } else if (cpu_has_feature(CPU_FTR_ALTIVEC_COMP)) { \
+               preempt_disable(); \
+               enable_kernel_altivec(); \
+       } else if (!cpu_has_feature(CPU_FTR_FPU_UNAVAILABLE)) { \
+               preempt_disable(); \
+               enable_kernel_fp(); \
+       } \
+}
+#define DC_FP_END() { \
+       if (cpu_has_feature(CPU_FTR_VSX_COMP)) { \
+               disable_kernel_vsx(); \
+               preempt_enable(); \
+       } else if (cpu_has_feature(CPU_FTR_ALTIVEC_COMP)) { \
+               disable_kernel_altivec(); \
+               preempt_enable(); \
+       } else if (!cpu_has_feature(CPU_FTR_FPU_UNAVAILABLE)) { \
+               disable_kernel_fp(); \
+               preempt_enable(); \
+       } \
+}
+#endif
 #endif
 
 /*
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h
new file mode 100644
index 0000000..242ec25
--- /dev/null
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#ifndef _DMUB_META_H_
+#define _DMUB_META_H_
+
+#include "dmub_types.h"
+
+#pragma pack(push, 1)
+
+/* Magic value for identifying dmub_fw_meta_info */
+#define DMUB_FW_META_MAGIC 0x444D5542
+
+/* Offset from the end of the file to the dmub_fw_meta_info */
+#define DMUB_FW_META_OFFSET 0x24
+
+/**
+ * struct dmub_fw_meta_info - metadata associated with fw binary
+ *
+ * NOTE: This should be considered a stable API. Fields should
+ *       not be repurposed or reordered. New fields should be
+ *       added instead to extend the structure.
+ *
+ * @magic_value: magic value identifying DMUB firmware meta info
+ * @fw_region_size: size of the firmware state region
+ * @trace_buffer_size: size of the tracebuffer region
+ */
+struct dmub_fw_meta_info {
+       uint32_t magic_value;
+       uint32_t fw_region_size;
+       uint32_t trace_buffer_size;
+};
+
+/* Ensure that the structure remains 64 bytes. */
+union dmub_fw_meta {
+       struct dmub_fw_meta_info info;
+       uint8_t reserved[64];
+};
+
+#pragma pack(pop)
+
+#endif /* _DMUB_META_H_ */
index 528243e..f34a50d 100644
@@ -67,7 +67,6 @@
 #include "dmub_types.h"
 #include "dmub_cmd.h"
 #include "dmub_rb.h"
-#include "dmub_fw_state.h"
 
 #if defined(__cplusplus)
 extern "C" {
@@ -76,7 +75,7 @@ extern "C" {
 /* Forward declarations */
 struct dmub_srv;
 struct dmub_cmd_header;
-struct dmcu;
+struct dmub_srv_common_regs;
 
 /* enum dmub_status - return code for dmcub functions */
 enum dmub_status {
@@ -145,11 +144,13 @@ struct dmub_fb {
  * @inst_const_size: size of the fw inst const section
  * @bss_data_size: size of the fw bss data section
  * @vbios_size: size of the vbios data
+ * @fw_bss_data: raw firmware bss data section
  */
 struct dmub_srv_region_params {
        uint32_t inst_const_size;
        uint32_t bss_data_size;
        uint32_t vbios_size;
+       const uint8_t *fw_bss_data;
 };
 
 /**
@@ -307,6 +308,8 @@ struct dmub_srv {
        volatile const struct dmub_fw_state *fw_state;
 
        /* private: internal use only */
+       const struct dmub_srv_common_regs *regs;
+
        struct dmub_srv_base_funcs funcs;
        struct dmub_srv_hw_funcs hw_funcs;
        struct dmub_rb inbox1_rb;
index 951ea70..f45e14a 100644
@@ -25,6 +25,7 @@
 
 #include "../inc/dmub_srv.h"
 #include "dmub_reg.h"
+#include "dmub_dcn20.h"
 
 #include "dcn/dcn_2_0_0_offset.h"
 #include "dcn/dcn_2_0_0_sh_mask.h"
 
 #define BASE_INNER(seg) DCN_BASE__INST0_SEG##seg
 #define CTX dmub
+#define REGS dmub->regs
+
+/* Registers. */
+
+const struct dmub_srv_common_regs dmub_srv_dcn20_regs = {
+#define DMUB_SR(reg) REG_OFFSET(reg),
+       { DMUB_COMMON_REGS() },
+#undef DMUB_SR
+
+#define DMUB_SF(reg, field) FD_MASK(reg, field),
+       { DMUB_COMMON_FIELDS() },
+#undef DMUB_SF
+
+#define DMUB_SF(reg, field) FD_SHIFT(reg, field),
+       { DMUB_COMMON_FIELDS() },
+#undef DMUB_SF
+};
+
+/* Shared functions. */
+
+static inline void dmub_dcn20_translate_addr(const union dmub_addr *addr_in,
+                                            uint64_t fb_base,
+                                            uint64_t fb_offset,
+                                            union dmub_addr *addr_out)
+{
+       addr_out->quad_part = addr_in->quad_part - fb_base + fb_offset;
+}
 
 void dmub_dcn20_reset(struct dmub_srv *dmub)
 {
@@ -47,22 +75,30 @@ void dmub_dcn20_reset_release(struct dmub_srv *dmub)
        REG_UPDATE(DMCUB_CNTL, DMCUB_SOFT_RESET, 0);
 }
 
-void dmub_dcn20_backdoor_load(struct dmub_srv *dmub, struct dmub_window *cw0,
-                             struct dmub_window *cw1)
+void dmub_dcn20_backdoor_load(struct dmub_srv *dmub,
+                             const struct dmub_window *cw0,
+                             const struct dmub_window *cw1)
 {
+       union dmub_addr offset;
+       uint64_t fb_base = dmub->fb_base, fb_offset = dmub->fb_offset;
+
        REG_UPDATE(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 1);
-       REG_UPDATE_2(DMCUB_MEM_CNTL, DMCUB_MEM_READ_SPACE, 0x4,
-                    DMCUB_MEM_WRITE_SPACE, 0x4);
+       REG_UPDATE_2(DMCUB_MEM_CNTL, DMCUB_MEM_READ_SPACE, 0x3,
+                    DMCUB_MEM_WRITE_SPACE, 0x3);
+
+       dmub_dcn20_translate_addr(&cw0->offset, fb_base, fb_offset, &offset);
 
-       REG_WRITE(DMCUB_REGION3_CW0_OFFSET, cw0->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW0_OFFSET_HIGH, cw0->offset.u.high_part);
+       REG_WRITE(DMCUB_REGION3_CW0_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION3_CW0_OFFSET_HIGH, offset.u.high_part);
        REG_WRITE(DMCUB_REGION3_CW0_BASE_ADDRESS, cw0->region.base);
        REG_SET_2(DMCUB_REGION3_CW0_TOP_ADDRESS, 0,
                  DMCUB_REGION3_CW0_TOP_ADDRESS, cw0->region.top,
                  DMCUB_REGION3_CW0_ENABLE, 1);
 
-       REG_WRITE(DMCUB_REGION3_CW1_OFFSET, cw1->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW1_OFFSET_HIGH, cw1->offset.u.high_part);
+       dmub_dcn20_translate_addr(&cw1->offset, fb_base, fb_offset, &offset);
+
+       REG_WRITE(DMCUB_REGION3_CW1_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION3_CW1_OFFSET_HIGH, offset.u.high_part);
        REG_WRITE(DMCUB_REGION3_CW1_BASE_ADDRESS, cw1->region.base);
        REG_SET_2(DMCUB_REGION3_CW1_TOP_ADDRESS, 0,
                  DMCUB_REGION3_CW1_TOP_ADDRESS, cw1->region.top,
@@ -79,37 +115,49 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
                              const struct dmub_window *cw5,
                              const struct dmub_window *cw6)
 {
-       REG_WRITE(DMCUB_REGION3_CW2_OFFSET, cw2->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW2_OFFSET_HIGH, cw2->offset.u.high_part);
+       union dmub_addr offset;
+       uint64_t fb_base = dmub->fb_base, fb_offset = dmub->fb_offset;
+
+       dmub_dcn20_translate_addr(&cw2->offset, fb_base, fb_offset, &offset);
+
+       REG_WRITE(DMCUB_REGION3_CW2_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION3_CW2_OFFSET_HIGH, offset.u.high_part);
        REG_WRITE(DMCUB_REGION3_CW2_BASE_ADDRESS, cw2->region.base);
        REG_SET_2(DMCUB_REGION3_CW2_TOP_ADDRESS, 0,
                  DMCUB_REGION3_CW2_TOP_ADDRESS, cw2->region.top,
                  DMCUB_REGION3_CW2_ENABLE, 1);
 
-       REG_WRITE(DMCUB_REGION3_CW3_OFFSET, cw3->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW3_OFFSET_HIGH, cw3->offset.u.high_part);
+       dmub_dcn20_translate_addr(&cw3->offset, fb_base, fb_offset, &offset);
+
+       REG_WRITE(DMCUB_REGION3_CW3_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION3_CW3_OFFSET_HIGH, offset.u.high_part);
        REG_WRITE(DMCUB_REGION3_CW3_BASE_ADDRESS, cw3->region.base);
        REG_SET_2(DMCUB_REGION3_CW3_TOP_ADDRESS, 0,
                  DMCUB_REGION3_CW3_TOP_ADDRESS, cw3->region.top,
                  DMCUB_REGION3_CW3_ENABLE, 1);
 
        /* TODO: Move this to CW4. */
+       dmub_dcn20_translate_addr(&cw4->offset, fb_base, fb_offset, &offset);
 
-       REG_WRITE(DMCUB_REGION4_OFFSET, cw4->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION4_OFFSET_HIGH, cw4->offset.u.high_part);
+       REG_WRITE(DMCUB_REGION4_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION4_OFFSET_HIGH, offset.u.high_part);
        REG_SET_2(DMCUB_REGION4_TOP_ADDRESS, 0, DMCUB_REGION4_TOP_ADDRESS,
                  cw4->region.top - cw4->region.base - 1, DMCUB_REGION4_ENABLE,
                  1);
 
-       REG_WRITE(DMCUB_REGION3_CW5_OFFSET, cw5->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW5_OFFSET_HIGH, cw5->offset.u.high_part);
+       dmub_dcn20_translate_addr(&cw5->offset, fb_base, fb_offset, &offset);
+
+       REG_WRITE(DMCUB_REGION3_CW5_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION3_CW5_OFFSET_HIGH, offset.u.high_part);
        REG_WRITE(DMCUB_REGION3_CW5_BASE_ADDRESS, cw5->region.base);
        REG_SET_2(DMCUB_REGION3_CW5_TOP_ADDRESS, 0,
                  DMCUB_REGION3_CW5_TOP_ADDRESS, cw5->region.top,
                  DMCUB_REGION3_CW5_ENABLE, 1);
 
-       REG_WRITE(DMCUB_REGION3_CW6_OFFSET, cw6->offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW6_OFFSET_HIGH, cw6->offset.u.high_part);
+       dmub_dcn20_translate_addr(&cw6->offset, fb_base, fb_offset, &offset);
+
+       REG_WRITE(DMCUB_REGION3_CW6_OFFSET, offset.u.low_part);
+       REG_WRITE(DMCUB_REGION3_CW6_OFFSET_HIGH, offset.u.high_part);
        REG_WRITE(DMCUB_REGION3_CW6_BASE_ADDRESS, cw6->region.base);
        REG_SET_2(DMCUB_REGION3_CW6_TOP_ADDRESS, 0,
                  DMCUB_REGION3_CW6_TOP_ADDRESS, cw6->region.top,
index e70a575..68af9b1 100644
 
 struct dmub_srv;
 
+/* DCN20 register definitions. */
+
+#define DMUB_COMMON_REGS() \
+       DMUB_SR(DMCUB_CNTL) \
+       DMUB_SR(DMCUB_MEM_CNTL) \
+       DMUB_SR(DMCUB_SEC_CNTL) \
+       DMUB_SR(DMCUB_INBOX1_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_INBOX1_SIZE) \
+       DMUB_SR(DMCUB_INBOX1_RPTR) \
+       DMUB_SR(DMCUB_INBOX1_WPTR) \
+       DMUB_SR(DMCUB_REGION3_CW0_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW1_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW2_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW3_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW4_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW5_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW6_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW7_OFFSET) \
+       DMUB_SR(DMCUB_REGION3_CW0_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW1_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW2_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW3_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW4_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW5_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW6_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW7_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION3_CW0_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW1_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW2_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW3_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW4_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW5_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW6_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW7_BASE_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW0_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW1_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW2_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW3_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW4_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW5_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW6_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION3_CW7_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_REGION4_OFFSET) \
+       DMUB_SR(DMCUB_REGION4_OFFSET_HIGH) \
+       DMUB_SR(DMCUB_REGION4_TOP_ADDRESS) \
+       DMUB_SR(DMCUB_SCRATCH0) \
+       DMUB_SR(DMCUB_SCRATCH1) \
+       DMUB_SR(DMCUB_SCRATCH2) \
+       DMUB_SR(DMCUB_SCRATCH3) \
+       DMUB_SR(DMCUB_SCRATCH4) \
+       DMUB_SR(DMCUB_SCRATCH5) \
+       DMUB_SR(DMCUB_SCRATCH6) \
+       DMUB_SR(DMCUB_SCRATCH7) \
+       DMUB_SR(DMCUB_SCRATCH8) \
+       DMUB_SR(DMCUB_SCRATCH9) \
+       DMUB_SR(DMCUB_SCRATCH10) \
+       DMUB_SR(DMCUB_SCRATCH11) \
+       DMUB_SR(DMCUB_SCRATCH12) \
+       DMUB_SR(DMCUB_SCRATCH13) \
+       DMUB_SR(DMCUB_SCRATCH14) \
+       DMUB_SR(DMCUB_SCRATCH15) \
+       DMUB_SR(CC_DC_PIPE_DIS)
+
+#define DMUB_COMMON_FIELDS() \
+       DMUB_SF(DMCUB_CNTL, DMCUB_ENABLE) \
+       DMUB_SF(DMCUB_CNTL, DMCUB_SOFT_RESET) \
+       DMUB_SF(DMCUB_CNTL, DMCUB_TRACEPORT_EN) \
+       DMUB_SF(DMCUB_MEM_CNTL, DMCUB_MEM_READ_SPACE) \
+       DMUB_SF(DMCUB_MEM_CNTL, DMCUB_MEM_WRITE_SPACE) \
+       DMUB_SF(DMCUB_SEC_CNTL, DMCUB_SEC_RESET) \
+       DMUB_SF(DMCUB_SEC_CNTL, DMCUB_MEM_UNIT_ID) \
+       DMUB_SF(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW1_TOP_ADDRESS, DMCUB_REGION3_CW1_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW1_TOP_ADDRESS, DMCUB_REGION3_CW1_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW2_TOP_ADDRESS, DMCUB_REGION3_CW2_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW2_TOP_ADDRESS, DMCUB_REGION3_CW2_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW3_TOP_ADDRESS, DMCUB_REGION3_CW3_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW3_TOP_ADDRESS, DMCUB_REGION3_CW3_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW4_TOP_ADDRESS, DMCUB_REGION3_CW4_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW4_TOP_ADDRESS, DMCUB_REGION3_CW4_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW5_TOP_ADDRESS, DMCUB_REGION3_CW5_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW5_TOP_ADDRESS, DMCUB_REGION3_CW5_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE) \
+       DMUB_SF(DMCUB_REGION3_CW7_TOP_ADDRESS, DMCUB_REGION3_CW7_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION3_CW7_TOP_ADDRESS, DMCUB_REGION3_CW7_ENABLE) \
+       DMUB_SF(DMCUB_REGION4_TOP_ADDRESS, DMCUB_REGION4_TOP_ADDRESS) \
+       DMUB_SF(DMCUB_REGION4_TOP_ADDRESS, DMCUB_REGION4_ENABLE) \
+       DMUB_SF(CC_DC_PIPE_DIS, DC_DMCUB_ENABLE)
+
+struct dmub_srv_common_reg_offset {
+#define DMUB_SR(reg) uint32_t reg;
+       DMUB_COMMON_REGS()
+#undef DMUB_SR
+};
+
+struct dmub_srv_common_reg_shift {
+#define DMUB_SF(reg, field) uint8_t reg##__##field;
+       DMUB_COMMON_FIELDS()
+#undef DMUB_SF
+};
+
+struct dmub_srv_common_reg_mask {
+#define DMUB_SF(reg, field) uint32_t reg##__##field;
+       DMUB_COMMON_FIELDS()
+#undef DMUB_SF
+};
+
+struct dmub_srv_common_regs {
+       const struct dmub_srv_common_reg_offset offset;
+       const struct dmub_srv_common_reg_mask mask;
+       const struct dmub_srv_common_reg_shift shift;
+};
+
+extern const struct dmub_srv_common_regs dmub_srv_dcn20_regs;
+
 /* Hardware functions. */
 
 void dmub_dcn20_init(struct dmub_srv *dmub);
index 9cea7a2..5bed9fc 100644
@@ -25,6 +25,7 @@
 
 #include "../inc/dmub_srv.h"
 #include "dmub_reg.h"
+#include "dmub_dcn21.h"
 
 #include "dcn/dcn_2_1_0_offset.h"
 #include "dcn/dcn_2_1_0_sh_mask.h"
 
 #define BASE_INNER(seg) DMU_BASE__INST0_SEG##seg
 #define CTX dmub
+#define REGS dmub->regs
 
-static inline void dmub_dcn21_translate_addr(const union dmub_addr *addr_in,
-                                            uint64_t fb_base,
-                                            uint64_t fb_offset,
-                                            union dmub_addr *addr_out)
-{
-       addr_out->quad_part = addr_in->quad_part - fb_base + fb_offset;
-}
-
-void dmub_dcn21_backdoor_load(struct dmub_srv *dmub,
-                             const struct dmub_window *cw0,
-                             const struct dmub_window *cw1)
-{
-       union dmub_addr offset;
-       uint64_t fb_base = dmub->fb_base, fb_offset = dmub->fb_offset;
-
-       REG_UPDATE(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 1);
-       REG_UPDATE_2(DMCUB_MEM_CNTL, DMCUB_MEM_READ_SPACE, 0x3,
-                    DMCUB_MEM_WRITE_SPACE, 0x3);
-
-       dmub_dcn21_translate_addr(&cw0->offset, fb_base, fb_offset, &offset);
-
-       REG_WRITE(DMCUB_REGION3_CW0_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW0_OFFSET_HIGH, offset.u.high_part);
-       REG_WRITE(DMCUB_REGION3_CW0_BASE_ADDRESS, cw0->region.base);
-       REG_SET_2(DMCUB_REGION3_CW0_TOP_ADDRESS, 0,
-                 DMCUB_REGION3_CW0_TOP_ADDRESS, cw0->region.top,
-                 DMCUB_REGION3_CW0_ENABLE, 1);
-
-       dmub_dcn21_translate_addr(&cw1->offset, fb_base, fb_offset, &offset);
-
-       REG_WRITE(DMCUB_REGION3_CW1_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW1_OFFSET_HIGH, offset.u.high_part);
-       REG_WRITE(DMCUB_REGION3_CW1_BASE_ADDRESS, cw1->region.base);
-       REG_SET_2(DMCUB_REGION3_CW1_TOP_ADDRESS, 0,
-                 DMCUB_REGION3_CW1_TOP_ADDRESS, cw1->region.top,
-                 DMCUB_REGION3_CW1_ENABLE, 1);
-
-       REG_UPDATE_2(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 0, DMCUB_MEM_UNIT_ID,
-                    0x20);
-}
-
-void dmub_dcn21_setup_windows(struct dmub_srv *dmub,
-                             const struct dmub_window *cw2,
-                             const struct dmub_window *cw3,
-                             const struct dmub_window *cw4,
-                             const struct dmub_window *cw5,
-                             const struct dmub_window *cw6)
-{
-       union dmub_addr offset;
-       uint64_t fb_base = dmub->fb_base, fb_offset = dmub->fb_offset;
-
-       dmub_dcn21_translate_addr(&cw2->offset, fb_base, fb_offset, &offset);
-
-       REG_WRITE(DMCUB_REGION3_CW2_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW2_OFFSET_HIGH, offset.u.high_part);
-       REG_WRITE(DMCUB_REGION3_CW2_BASE_ADDRESS, cw2->region.base);
-       REG_SET_2(DMCUB_REGION3_CW2_TOP_ADDRESS, 0,
-                 DMCUB_REGION3_CW2_TOP_ADDRESS, cw2->region.top,
-                 DMCUB_REGION3_CW2_ENABLE, 1);
+/* Registers. */
 
-       dmub_dcn21_translate_addr(&cw3->offset, fb_base, fb_offset, &offset);
+const struct dmub_srv_common_regs dmub_srv_dcn21_regs = {
+#define DMUB_SR(reg) REG_OFFSET(reg),
+       { DMUB_COMMON_REGS() },
+#undef DMUB_SR
 
-       REG_WRITE(DMCUB_REGION3_CW3_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW3_OFFSET_HIGH, offset.u.high_part);
-       REG_WRITE(DMCUB_REGION3_CW3_BASE_ADDRESS, cw3->region.base);
-       REG_SET_2(DMCUB_REGION3_CW3_TOP_ADDRESS, 0,
-                 DMCUB_REGION3_CW3_TOP_ADDRESS, cw3->region.top,
-                 DMCUB_REGION3_CW3_ENABLE, 1);
+#define DMUB_SF(reg, field) FD_MASK(reg, field),
+       { DMUB_COMMON_FIELDS() },
+#undef DMUB_SF
 
-       /* TODO: Move this to CW4. */
-       dmub_dcn21_translate_addr(&cw4->offset, fb_base, fb_offset, &offset);
+#define DMUB_SF(reg, field) FD_SHIFT(reg, field),
+       { DMUB_COMMON_FIELDS() },
+#undef DMUB_SF
+};
 
-       REG_WRITE(DMCUB_REGION4_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION4_OFFSET_HIGH, offset.u.high_part);
-       REG_SET_2(DMCUB_REGION4_TOP_ADDRESS, 0, DMCUB_REGION4_TOP_ADDRESS,
-                 cw4->region.top - cw4->region.base - 1, DMCUB_REGION4_ENABLE,
-                 1);
-
-       dmub_dcn21_translate_addr(&cw5->offset, fb_base, fb_offset, &offset);
-
-       REG_WRITE(DMCUB_REGION3_CW5_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW5_OFFSET_HIGH, offset.u.high_part);
-       REG_WRITE(DMCUB_REGION3_CW5_BASE_ADDRESS, cw5->region.base);
-       REG_SET_2(DMCUB_REGION3_CW5_TOP_ADDRESS, 0,
-                 DMCUB_REGION3_CW5_TOP_ADDRESS, cw5->region.top,
-                 DMCUB_REGION3_CW5_ENABLE, 1);
-
-       dmub_dcn21_translate_addr(&cw6->offset, fb_base, fb_offset, &offset);
-
-       REG_WRITE(DMCUB_REGION3_CW6_OFFSET, offset.u.low_part);
-       REG_WRITE(DMCUB_REGION3_CW6_OFFSET_HIGH, offset.u.high_part);
-       REG_WRITE(DMCUB_REGION3_CW6_BASE_ADDRESS, cw6->region.base);
-       REG_SET_2(DMCUB_REGION3_CW6_TOP_ADDRESS, 0,
-                 DMCUB_REGION3_CW6_TOP_ADDRESS, cw6->region.top,
-                 DMCUB_REGION3_CW6_ENABLE, 1);
-}
+/* Shared functions. */
 
 bool dmub_dcn21_is_auto_load_done(struct dmub_srv *dmub)
 {
index f7a93a5..2bbea23 100644
 
 #include "dmub_dcn20.h"
 
-/* Hardware functions. */
+/* Registers. */
 
-void dmub_dcn21_backdoor_load(struct dmub_srv *dmub,
-                             const struct dmub_window *cw0,
-                             const struct dmub_window *cw1);
+extern const struct dmub_srv_common_regs dmub_srv_dcn21_regs;
 
-void dmub_dcn21_setup_windows(struct dmub_srv *dmub,
-                             const struct dmub_window *cw2,
-                             const struct dmub_window *cw3,
-                             const struct dmub_window *cw4,
-                             const struct dmub_window *cw5,
-                             const struct dmub_window *cw6);
+/* Hardware functions. */
 
 bool dmub_dcn21_is_auto_load_done(struct dmub_srv *dmub);
 
index bac4ee8..c1f4030 100644
@@ -34,11 +34,15 @@ struct dmub_srv;
 
 #define BASE(seg) BASE_INNER(seg)
 
-#define REG_OFFSET(base_index, addr) (BASE(base_index) + addr)
+#define REG_OFFSET(reg_name) (BASE(mm##reg_name##_BASE_IDX) + mm##reg_name)
 
-#define REG(reg_name) REG_OFFSET(mm ## reg_name ## _BASE_IDX, mm ## reg_name)
+#define FD_SHIFT(reg_name, field) reg_name##__##field##__SHIFT
 
-#define FD(reg_field) reg_field ## __SHIFT,  reg_field ## _MASK
+#define FD_MASK(reg_name, field) reg_name##__##field##_MASK
+
+#define REG(reg) (REGS)->offset.reg
+
+#define FD(reg_field) (REGS)->shift.reg_field, (REGS)->mask.reg_field
 
 #define FN(reg_name, field) FD(reg_name##__##field)
 
index 5f39166..9a959f8 100644
@@ -26,7 +26,7 @@
 #include "../inc/dmub_srv.h"
 #include "dmub_dcn20.h"
 #include "dmub_dcn21.h"
-#include "dmub_trace_buffer.h"
+#include "dmub_fw_meta.h"
 #include "os_types.h"
 /*
  * Note: the DMUB service is standalone. No additional headers should be
 /* Mailbox size */
 #define DMUB_MAILBOX_SIZE (DMUB_RB_SIZE)
 
+/* Default state size if meta is absent. */
+#define DMUB_FW_STATE_SIZE (1024)
+
+/* Default tracebuffer size if meta is absent. */
+#define DMUB_TRACE_BUFFER_SIZE (1024)
 
 /* Number of windows in use. */
 #define DMUB_NUM_WINDOWS (DMUB_WINDOW_6_FW_STATE + 1)
@@ -62,6 +67,27 @@ static inline uint32_t dmub_align(uint32_t val, uint32_t factor)
        return (val + factor - 1) / factor * factor;
 }
 
+static const struct dmub_fw_meta_info *
+dmub_get_fw_meta_info(const uint8_t *fw_bss_data, uint32_t fw_bss_data_size)
+{
+       const union dmub_fw_meta *meta;
+
+       if (fw_bss_data == NULL)
+               return NULL;
+
+       if (fw_bss_data_size < sizeof(union dmub_fw_meta) + DMUB_FW_META_OFFSET)
+               return NULL;
+
+       meta = (const union dmub_fw_meta *)(fw_bss_data + fw_bss_data_size -
+                                           DMUB_FW_META_OFFSET -
+                                           sizeof(union dmub_fw_meta));
+
+       if (meta->info.magic_value != DMUB_FW_META_MAGIC)
+               return NULL;
+
+       return &meta->info;
+}
+
 static bool dmub_srv_hw_setup(struct dmub_srv *dmub, enum dmub_asic asic)
 {
        struct dmub_srv_hw_funcs *funcs = &dmub->hw_funcs;
@@ -69,6 +95,8 @@ static bool dmub_srv_hw_setup(struct dmub_srv *dmub, enum dmub_asic asic)
        switch (asic) {
        case DMUB_ASIC_DCN20:
        case DMUB_ASIC_DCN21:
+               dmub->regs = &dmub_srv_dcn20_regs;
+
                funcs->reset = dmub_dcn20_reset;
                funcs->reset_release = dmub_dcn20_reset_release;
                funcs->backdoor_load = dmub_dcn20_backdoor_load;
@@ -80,8 +108,8 @@ static bool dmub_srv_hw_setup(struct dmub_srv *dmub, enum dmub_asic asic)
                funcs->is_hw_init = dmub_dcn20_is_hw_init;
 
                if (asic == DMUB_ASIC_DCN21) {
-                       funcs->backdoor_load = dmub_dcn21_backdoor_load;
-                       funcs->setup_windows = dmub_dcn21_setup_windows;
+                       dmub->regs = &dmub_srv_dcn21_regs;
+
                        funcs->is_auto_load_done = dmub_dcn21_is_auto_load_done;
                        funcs->is_phy_init = dmub_dcn21_is_phy_init;
                }
@@ -160,6 +188,9 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
        struct dmub_region *mail = &out->regions[DMUB_WINDOW_4_MAILBOX];
        struct dmub_region *trace_buff = &out->regions[DMUB_WINDOW_5_TRACEBUFF];
        struct dmub_region *fw_state = &out->regions[DMUB_WINDOW_6_FW_STATE];
+       const struct dmub_fw_meta_info *fw_info;
+       uint32_t fw_state_size = DMUB_FW_STATE_SIZE;
+       uint32_t trace_buffer_size = DMUB_TRACE_BUFFER_SIZE;
 
        if (!dmub->sw_init)
                return DMUB_STATUS_INVALID;
@@ -174,6 +205,11 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
        data->base = dmub_align(inst->top, 256);
        data->top = data->base + params->bss_data_size;
 
+       /*
+        * All cache windows below should be aligned to the size
+        * of the DMCUB cache line, 64 bytes.
+        */
+
        stack->base = dmub_align(data->top, 256);
        stack->top = stack->base + DMUB_STACK_SIZE + DMUB_CONTEXT_SIZE;
 
@@ -183,14 +219,19 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
        mail->base = dmub_align(bios->top, 256);
        mail->top = mail->base + DMUB_MAILBOX_SIZE;
 
+       fw_info = dmub_get_fw_meta_info(params->fw_bss_data,
+                                       params->bss_data_size);
+
+       if (fw_info) {
+               fw_state_size = fw_info->fw_region_size;
+               trace_buffer_size = fw_info->trace_buffer_size;
+       }
+
        trace_buff->base = dmub_align(mail->top, 256);
-       trace_buff->top = trace_buff->base + TRACE_BUF_SIZE;
+       trace_buff->top = trace_buff->base + dmub_align(trace_buffer_size, 64);
 
        fw_state->base = dmub_align(trace_buff->top, 256);
-
-       /* Align firmware state to size of cache line. */
-       fw_state->top =
-               fw_state->base + dmub_align(sizeof(struct dmub_fw_state), 64);
+       fw_state->top = fw_state->base + dmub_align(fw_state_size, 64);
 
        out->fb_size = dmub_align(fw_state->top, 4096);
 
index 72b659c..11d7daf 100644
 #define RAVEN2_15D8_REV_E4 0xE4
 #define RAVEN1_F0 0xF0
 #define RAVEN_UNKNOWN 0xFF
-
+#ifndef ASICREV_IS_RAVEN
 #define ASICREV_IS_RAVEN(eChipRev) ((eChipRev >= RAVEN_A0) && eChipRev < RAVEN_UNKNOWN)
+#endif
+
 #define ASICREV_IS_PICASSO(eChipRev) ((eChipRev >= PICASSO_A0) && (eChipRev < RAVEN2_A0))
+#ifndef ASICREV_IS_RAVEN2
 #define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < RAVEN1_F0))
+#endif
 #define ASICREV_IS_RV1_F0(eChipRev) ((eChipRev >= RAVEN1_F0) && (eChipRev < RAVEN_UNKNOWN))
 #define ASICREV_IS_DALI(eChipRev) ((eChipRev == RAVEN2_15D8_REV_E3) \
                || (eChipRev == RAVEN2_15D8_REV_E4))
index b52c4d3..1b278c4 100644
@@ -364,8 +364,10 @@ static struct fixed31_32 translate_from_linear_space(
                        scratch_2 = dc_fixpt_mul(gamma_of_2,
                                        pow_buffer[pow_buffer_ptr%16]);
 
-               pow_buffer[pow_buffer_ptr%16] = scratch_2;
-               pow_buffer_ptr++;
+               if (pow_buffer_ptr != -1) {
+                       pow_buffer[pow_buffer_ptr%16] = scratch_2;
+                       pow_buffer_ptr++;
+               }
 
                scratch_1 = dc_fixpt_mul(scratch_1, scratch_2);
                scratch_1 = dc_fixpt_sub(scratch_1, args->a2);
index a947009..fa57885 100644
@@ -37,8 +37,8 @@
 #define STATIC_SCREEN_RAMP_DELTA_REFRESH_RATE_PER_FRAME ((1000 / 60) * 65)
 /* Number of elements in the render times cache array */
 #define RENDER_TIMES_MAX_COUNT 10
-/* Threshold to exit BTR (to avoid frequent enter-exits at the lower limit) */
-#define BTR_EXIT_MARGIN 2000
+/* Threshold to enter/exit BTR (to avoid frequent enter-exits at the lower limit) */
+#define BTR_MAX_MARGIN 2500
 /* Threshold to change BTR multiplier (to avoid frequent changes) */
 #define BTR_DRIFT_MARGIN 2000
 /*Threshold to exit fixed refresh rate*/
@@ -254,24 +254,22 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
        unsigned int delta_from_mid_point_in_us_1 = 0xFFFFFFFF;
        unsigned int delta_from_mid_point_in_us_2 = 0xFFFFFFFF;
        unsigned int frames_to_insert = 0;
-       unsigned int min_frame_duration_in_ns = 0;
-       unsigned int max_render_time_in_us = in_out_vrr->max_duration_in_us;
        unsigned int delta_from_mid_point_delta_in_us;
-
-       min_frame_duration_in_ns = ((unsigned int) (div64_u64(
-               (1000000000ULL * 1000000),
-               in_out_vrr->max_refresh_in_uhz)));
+       unsigned int max_render_time_in_us =
+                       in_out_vrr->max_duration_in_us - in_out_vrr->btr.margin_in_us;
 
        /* Program BTR */
-       if (last_render_time_in_us + BTR_EXIT_MARGIN < max_render_time_in_us) {
+       if ((last_render_time_in_us + in_out_vrr->btr.margin_in_us / 2) < max_render_time_in_us) {
                /* Exit Below the Range */
                if (in_out_vrr->btr.btr_active) {
                        in_out_vrr->btr.frame_counter = 0;
                        in_out_vrr->btr.btr_active = false;
                }
-       } else if (last_render_time_in_us > max_render_time_in_us) {
+       } else if (last_render_time_in_us > (max_render_time_in_us + in_out_vrr->btr.margin_in_us / 2)) {
                /* Enter Below the Range */
-               in_out_vrr->btr.btr_active = true;
+               if (!in_out_vrr->btr.btr_active) {
+                       in_out_vrr->btr.btr_active = true;
+               }
        }
 
        /* BTR set to "not active" so disengage */
@@ -327,7 +325,9 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
                /* Choose number of frames to insert based on how close it
                 * can get to the mid point of the variable range.
                 */
-               if (delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2) {
+               if ((frame_time_in_us / mid_point_frames_ceil) > in_out_vrr->min_duration_in_us &&
+                               (delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2 ||
+                                               mid_point_frames_floor < 2)) {
                        frames_to_insert = mid_point_frames_ceil;
                        delta_from_mid_point_delta_in_us = delta_from_mid_point_in_us_2 -
                                        delta_from_mid_point_in_us_1;
@@ -343,7 +343,7 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
                if (in_out_vrr->btr.frames_to_insert != 0 &&
                                delta_from_mid_point_delta_in_us < BTR_DRIFT_MARGIN) {
                        if (((last_render_time_in_us / in_out_vrr->btr.frames_to_insert) <
-                                       in_out_vrr->max_duration_in_us) &&
+                                       max_render_time_in_us) &&
                                ((last_render_time_in_us / in_out_vrr->btr.frames_to_insert) >
                                        in_out_vrr->min_duration_in_us))
                                frames_to_insert = in_out_vrr->btr.frames_to_insert;
@@ -796,6 +796,11 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
                refresh_range = in_out_vrr->max_refresh_in_uhz -
                                in_out_vrr->min_refresh_in_uhz;
 
+               in_out_vrr->btr.margin_in_us = in_out_vrr->max_duration_in_us -
+                               2 * in_out_vrr->min_duration_in_us;
+               if (in_out_vrr->btr.margin_in_us > BTR_MAX_MARGIN)
+                       in_out_vrr->btr.margin_in_us = BTR_MAX_MARGIN;
+
                in_out_vrr->supported = true;
        }
 
@@ -811,6 +816,7 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
        in_out_vrr->btr.inserted_duration_in_us = 0;
        in_out_vrr->btr.frames_to_insert = 0;
        in_out_vrr->btr.frame_counter = 0;
+
        in_out_vrr->btr.mid_point_in_us =
                                (in_out_vrr->min_duration_in_us +
                                 in_out_vrr->max_duration_in_us) / 2;
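The new `btr.margin_in_us` field added above is the slack left in the VRR window after reserving two minimum-duration frames, capped at `BTR_MAX_MARGIN`. A tiny sketch of that computation, with the cap value assumed for illustration since the constant's definition is not shown in this hunk:

```c
#include <assert.h>
#include <stdint.h>

/* BTR_MAX_MARGIN's value is not visible in the hunk above;
 * 2500 us is an assumed placeholder for illustration. */
#define BTR_MAX_MARGIN 2500u

/* Mirror the margin computation from mod_freesync_build_vrr_params().
 * The unsigned subtraction assumes max_duration >= 2 * min_duration,
 * i.e. the refresh range is at least a factor of two. */
static uint32_t btr_margin_in_us(uint32_t min_duration_in_us,
				 uint32_t max_duration_in_us)
{
	uint32_t margin = max_duration_in_us - 2 * min_duration_in_us;

	if (margin > BTR_MAX_MARGIN)
		margin = BTR_MAX_MARGIN;
	return margin;
}
```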
index 136b801..21ebc62 100644 (file)
@@ -67,11 +67,19 @@ enum mod_hdcp_status mod_hdcp_hdcp1_transition(struct mod_hdcp *hdcp,
                break;
        case H1_A2_COMPUTATIONS_A3_VALIDATE_RX_A6_TEST_FOR_REPEATER:
                if (input->bcaps_read != PASS ||
-                               input->r0p_read != PASS ||
-                               input->rx_validation != PASS ||
-                               (!conn->is_repeater && input->encryption != PASS)) {
+                               input->r0p_read != PASS) {
+                       fail_and_restart_in_ms(0, &status, output);
+                       break;
+               } else if (input->rx_validation != PASS) {
                        /* 1A-06: consider invalid r0' a failure */
                        /* 1A-08: consider bksv listed in SRM a failure */
+                       /*
+                        * Some slow RXes fail RX validation when they are
+                        * not yet ready; give them more time to react
+                        * before retrying.
+                        */
+                       fail_and_restart_in_ms(1000, &status, output);
+                       break;
+               } else if (!conn->is_repeater && input->encryption != PASS) {
                        fail_and_restart_in_ms(0, &status, output);
                        break;
                }
@@ -212,7 +220,11 @@ enum mod_hdcp_status mod_hdcp_hdcp1_dp_transition(struct mod_hdcp *hdcp,
                                 * after 3 attempts.
                                 * 1A-08: consider bksv listed in SRM a failure
                                 */
-                               fail_and_restart_in_ms(0, &status, output);
+                               /*
+                                * Some slow RXes fail RX validation when
+                                * they are not yet ready; give them more
+                                * time to react before retrying.
+                                */
+                               fail_and_restart_in_ms(1000, &status, output);
                        }
                        break;
                } else if ((!conn->is_repeater && input->encryption != PASS) ||
index e8043c9..8cae3e3 100644 (file)
@@ -114,7 +114,7 @@ enum mod_hdcp_status mod_hdcp_hdcp2_transition(struct mod_hdcp *hdcp,
                        if (event_ctx->event ==
                                        MOD_HDCP_EVENT_WATCHDOG_TIMEOUT) {
                                /* 1A-11-3: consider h' timeout a failure */
-                               fail_and_restart_in_ms(0, &status, output);
+                               fail_and_restart_in_ms(1000, &status, output);
                        } else {
                                /* continue h' polling */
                                callback_in_ms(100, output);
@@ -166,7 +166,7 @@ enum mod_hdcp_status mod_hdcp_hdcp2_transition(struct mod_hdcp *hdcp,
                        if (event_ctx->event ==
                                        MOD_HDCP_EVENT_WATCHDOG_TIMEOUT) {
                                /* 1A-11-2: consider h' timeout a failure */
-                               fail_and_restart_in_ms(0, &status, output);
+                               fail_and_restart_in_ms(1000, &status, output);
                        } else {
                                /* continue h' polling */
                                callback_in_ms(20, output);
@@ -439,7 +439,7 @@ enum mod_hdcp_status mod_hdcp_hdcp2_dp_transition(struct mod_hdcp *hdcp,
                        if (event_ctx->event ==
                                        MOD_HDCP_EVENT_WATCHDOG_TIMEOUT)
                                /* 1A-10-3: consider h' timeout a failure */
-                               fail_and_restart_in_ms(0, &status, output);
+                               fail_and_restart_in_ms(1000, &status, output);
                        else
                                increment_stay_counter(hdcp);
                        break;
@@ -484,7 +484,7 @@ enum mod_hdcp_status mod_hdcp_hdcp2_dp_transition(struct mod_hdcp *hdcp,
                        if (event_ctx->event ==
                                        MOD_HDCP_EVENT_WATCHDOG_TIMEOUT)
                                /* 1A-10-2: consider h' timeout a failure */
-                               fail_and_restart_in_ms(0, &status, output);
+                               fail_and_restart_in_ms(1000, &status, output);
                        else
                                increment_stay_counter(hdcp);
                        break;
@@ -630,7 +630,10 @@ enum mod_hdcp_status mod_hdcp_hdcp2_dp_transition(struct mod_hdcp *hdcp,
                        break;
                } else if (input->prepare_stream_manage != PASS ||
                                input->stream_manage_write != PASS) {
-                       fail_and_restart_in_ms(0, &status, output);
+                       if (event_ctx->event == MOD_HDCP_EVENT_CALLBACK)
+                               fail_and_restart_in_ms(0, &status, output);
+                       else
+                               increment_stay_counter(hdcp);
                        break;
                }
                callback_in_ms(100, output);
@@ -655,10 +658,12 @@ enum mod_hdcp_status mod_hdcp_hdcp2_dp_transition(struct mod_hdcp *hdcp,
                         */
                        if (hdcp->auth.count.stream_management_retry_count > 10) {
                                fail_and_restart_in_ms(0, &status, output);
-                       } else {
+                       } else if (event_ctx->event == MOD_HDCP_EVENT_CALLBACK) {
                                hdcp->auth.count.stream_management_retry_count++;
                                callback_in_ms(0, output);
                                set_state_id(hdcp, output, D2_A9_SEND_STREAM_MANAGEMENT);
+                       } else {
+                               increment_stay_counter(hdcp);
                        }
                        break;
                }
index ef4eb55..7911dc1 100644 (file)
@@ -145,10 +145,11 @@ enum mod_hdcp_status mod_hdcp_hdcp1_create_session(struct mod_hdcp *hdcp)
 
        psp_hdcp_invoke(psp, hdcp_cmd->cmd_id);
 
+       hdcp->auth.id = hdcp_cmd->out_msg.hdcp1_create_session.session_handle;
+
        if (hdcp_cmd->hdcp_status != TA_HDCP_STATUS__SUCCESS)
                return MOD_HDCP_STATUS_HDCP1_CREATE_SESSION_FAILURE;
 
-       hdcp->auth.id = hdcp_cmd->out_msg.hdcp1_create_session.session_handle;
        hdcp->auth.msg.hdcp1.ainfo = hdcp_cmd->out_msg.hdcp1_create_session.ainfo_primary;
        memcpy(hdcp->auth.msg.hdcp1.aksv, hdcp_cmd->out_msg.hdcp1_create_session.aksv_primary,
                sizeof(hdcp->auth.msg.hdcp1.aksv));
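The reordering in the hunk above records the session handle before the status check, so that a session the TA allocated but failed to initialize can still be destroyed later. The pattern in isolation, with stand-in types (the real ones come from the PSP HDCP TA interface):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the auth/session state. */
struct session {
	uint32_t handle;
};

/* Mirror the fix: capture the handle returned by the TA *before*
 * checking the command status, so a teardown path can destroy a
 * session that was created but whose setup then failed. */
static int create_session(struct session *out,
			  uint32_t ta_handle, int ta_status)
{
	out->handle = ta_handle;	/* always captured, even on failure */
	if (ta_status != 0)
		return -1;		/* creation failed after allocation */
	return 0;
}
```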
@@ -510,7 +511,7 @@ enum mod_hdcp_status mod_hdcp_hdcp2_validate_h_prime(struct mod_hdcp *hdcp)
        psp_hdcp_invoke(psp, hdcp_cmd->cmd_id);
 
        if (hdcp_cmd->hdcp_status != TA_HDCP_STATUS__SUCCESS)
-               return MOD_HDCP_STATUS_HDCP2_VALIDATE_AKE_CERT_FAILURE;
+               return MOD_HDCP_STATUS_HDCP2_VALIDATE_H_PRIME_FAILURE;
 
        if (msg_out->process.msg1_status != TA_HDCP2_MSG_AUTHENTICATION_STATUS__SUCCESS)
                return MOD_HDCP_STATUS_HDCP2_VALIDATE_H_PRIME_FAILURE;
@@ -794,7 +795,7 @@ enum mod_hdcp_status mod_hdcp_hdcp2_validate_stream_ready(struct mod_hdcp *hdcp)
        hdcp_cmd->cmd_id = TA_HDCP_COMMAND__HDCP2_PREPARE_PROCESS_AUTHENTICATION_MSG_V2;
        psp_hdcp_invoke(psp, hdcp_cmd->cmd_id);
 
-       return (hdcp_cmd->hdcp_status != TA_HDCP_STATUS__SUCCESS) &&
+       return (hdcp_cmd->hdcp_status == TA_HDCP_STATUS__SUCCESS) &&
                               (msg_out->process.msg1_status == TA_HDCP2_MSG_AUTHENTICATION_STATUS__SUCCESS)
                       ? MOD_HDCP_STATUS_SUCCESS
                       : MOD_HDCP_STATUS_HDCP2_VALIDATE_STREAM_READY_FAILURE;
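The one-character fix above (`!=` to `==`) matters: with the old test, success required the TA invocation to have *failed* while the message status succeeded, which is contradictory, so the function could never report success. The corrected predicate in isolation, with stand-in enums for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in status codes; the real enums live in the HDCP TA interface. */
enum ta_status { TA_SUCCESS, TA_FAILURE };
enum mod_status { MOD_SUCCESS, MOD_STREAM_READY_FAILURE };

/* Corrected predicate from mod_hdcp_hdcp2_validate_stream_ready():
 * stream-ready validation succeeds only when BOTH the TA invocation
 * and the processed message report success. */
static enum mod_status validate_stream_ready(enum ta_status hdcp_status,
					     enum ta_status msg1_status)
{
	return (hdcp_status == TA_SUCCESS) &&
	       (msg1_status == TA_SUCCESS)
		       ? MOD_SUCCESS
		       : MOD_STREAM_READY_FAILURE;
}
```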
index dc18784..dbe7835 100644 (file)
@@ -92,6 +92,7 @@ struct mod_vrr_params_btr {
        uint32_t inserted_duration_in_us;
        uint32_t frames_to_insert;
        uint32_t frame_counter;
+       uint32_t margin_in_us;
 };
 
 struct mod_vrr_params_fixed_refresh {
index c2bd255..f301e5f 100644 (file)
 #define smnPerfMonCtlHi2                                       0x01d464UL
 #define smnPerfMonCtlLo3                                       0x01d470UL
 #define smnPerfMonCtlHi3                                       0x01d474UL
+#define smnPerfMonCtlLo4                                       0x01d880UL
+#define smnPerfMonCtlHi4                                       0x01d884UL
+#define smnPerfMonCtlLo5                                       0x01d888UL
+#define smnPerfMonCtlHi5                                       0x01d88cUL
+#define smnPerfMonCtlLo6                                       0x01d890UL
+#define smnPerfMonCtlHi6                                       0x01d894UL
+#define smnPerfMonCtlLo7                                       0x01d898UL
+#define smnPerfMonCtlHi7                                       0x01d89cUL
 
 #define smnPerfMonCtrLo0                                       0x01d448UL
 #define smnPerfMonCtrHi0                                       0x01d44cUL
 #define smnPerfMonCtrHi2                                       0x01d46cUL
 #define smnPerfMonCtrLo3                                       0x01d478UL
 #define smnPerfMonCtrHi3                                       0x01d47cUL
+#define smnPerfMonCtrLo4                                       0x01d790UL
+#define smnPerfMonCtrHi4                                       0x01d794UL
+#define smnPerfMonCtrLo5                                       0x01d798UL
+#define smnPerfMonCtrHi5                                       0x01d79cUL
+#define smnPerfMonCtrLo6                                       0x01d7a0UL
+#define smnPerfMonCtrHi6                                       0x01d7a4UL
+#define smnPerfMonCtrLo7                                       0x01d7a8UL
+#define smnPerfMonCtrHi7                                       0x01d7acUL
 
 #define smnDF_PIE_AON_FabricIndirectConfigAccessAddress3       0x1d05cUL
 #define smnDF_PIE_AON_FabricIndirectConfigAccessDataLo3                0x1d098UL
diff --git a/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_0_0_offset.h b/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_0_0_offset.h
new file mode 100644 (file)
index 0000000..36ae5b7
--- /dev/null
@@ -0,0 +1,647 @@
+/*
+ * Copyright (C) 2019  Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef _dpcs_2_0_0_OFFSET_HEADER
+#define _dpcs_2_0_0_OFFSET_HEADER
+
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx0_dispdec
+// base address: 0x0
+#define mmDPCSTX0_DPCSTX_TX_CLOCK_CNTL                                                                 0x2928
+#define mmDPCSTX0_DPCSTX_TX_CLOCK_CNTL_BASE_IDX                                                        2
+#define mmDPCSTX0_DPCSTX_TX_CNTL                                                                       0x2929
+#define mmDPCSTX0_DPCSTX_TX_CNTL_BASE_IDX                                                              2
+#define mmDPCSTX0_DPCSTX_CBUS_CNTL                                                                     0x292a
+#define mmDPCSTX0_DPCSTX_CBUS_CNTL_BASE_IDX                                                            2
+#define mmDPCSTX0_DPCSTX_INTERRUPT_CNTL                                                                0x292b
+#define mmDPCSTX0_DPCSTX_INTERRUPT_CNTL_BASE_IDX                                                       2
+#define mmDPCSTX0_DPCSTX_PLL_UPDATE_ADDR                                                               0x292c
+#define mmDPCSTX0_DPCSTX_PLL_UPDATE_ADDR_BASE_IDX                                                      2
+#define mmDPCSTX0_DPCSTX_PLL_UPDATE_DATA                                                               0x292d
+#define mmDPCSTX0_DPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                      2
+#define mmDPCSTX0_DPCSTX_DEBUG_CONFIG                                                                  0x292e
+#define mmDPCSTX0_DPCSTX_DEBUG_CONFIG_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx0_dispdec
+// base address: 0x0
+#define mmRDPCSTX0_RDPCSTX_CNTL                                                                        0x2930
+#define mmRDPCSTX0_RDPCSTX_CNTL_BASE_IDX                                                               2
+#define mmRDPCSTX0_RDPCSTX_CLOCK_CNTL                                                                  0x2931
+#define mmRDPCSTX0_RDPCSTX_CLOCK_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_INTERRUPT_CONTROL                                                           0x2932
+#define mmRDPCSTX0_RDPCSTX_INTERRUPT_CONTROL_BASE_IDX                                                  2
+#define mmRDPCSTX0_RDPCSTX_PLL_UPDATE_DATA                                                             0x2933
+#define mmRDPCSTX0_RDPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                    2
+#define mmRDPCSTX0_RDPCS_TX_CR_ADDR                                                                    0x2934
+#define mmRDPCSTX0_RDPCS_TX_CR_ADDR_BASE_IDX                                                           2
+#define mmRDPCSTX0_RDPCS_TX_CR_DATA                                                                    0x2935
+#define mmRDPCSTX0_RDPCS_TX_CR_DATA_BASE_IDX                                                           2
+#define mmRDPCSTX0_RDPCS_TX_SRAM_CNTL                                                                  0x2936
+#define mmRDPCSTX0_RDPCS_TX_SRAM_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_MEM_POWER_CTRL                                                              0x2937
+#define mmRDPCSTX0_RDPCSTX_MEM_POWER_CTRL_BASE_IDX                                                     2
+#define mmRDPCSTX0_RDPCSTX_MEM_POWER_CTRL2                                                             0x2938
+#define mmRDPCSTX0_RDPCSTX_MEM_POWER_CTRL2_BASE_IDX                                                    2
+#define mmRDPCSTX0_RDPCSTX_SCRATCH                                                                     0x2939
+#define mmRDPCSTX0_RDPCSTX_SCRATCH_BASE_IDX                                                            2
+#define mmRDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG                                                    0x293c
+#define mmRDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG_BASE_IDX                                           2
+#define mmRDPCSTX0_RDPCSTX_DEBUG_CONFIG                                                                0x293d
+#define mmRDPCSTX0_RDPCSTX_DEBUG_CONFIG_BASE_IDX                                                       2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL0                                                                   0x2940
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL0_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL1                                                                   0x2941
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL1_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL2                                                                   0x2942
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL2_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL3                                                                   0x2943
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL3_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL4                                                                   0x2944
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL4_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL5                                                                   0x2945
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL5_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL6                                                                   0x2946
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL6_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL7                                                                   0x2947
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL7_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL8                                                                   0x2948
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL8_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL9                                                                   0x2949
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL9_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL10                                                                  0x294a
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL10_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL11                                                                  0x294b
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL11_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL12                                                                  0x294c
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL12_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL13                                                                  0x294d
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL13_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL14                                                                  0x294e
+#define mmRDPCSTX0_RDPCSTX_PHY_CNTL14_BASE_IDX                                                         2
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE0                                                                   0x294f
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE0_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE1                                                                   0x2950
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE1_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE2                                                                   0x2951
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE2_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE3                                                                   0x2952
+#define mmRDPCSTX0_RDPCSTX_PHY_FUSE3_BASE_IDX                                                          2
+#define mmRDPCSTX0_RDPCSTX_PHY_RX_LD_VAL                                                               0x2953
+#define mmRDPCSTX0_RDPCSTX_PHY_RX_LD_VAL_BASE_IDX                                                      2
+#define mmRDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3                                                        0x2954
+#define mmRDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3_BASE_IDX                                               2
+#define mmRDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6                                                        0x2955
+#define mmRDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6_BASE_IDX                                               2
+#define mmRDPCSTX0_RDPCSTX_DPALT_CONTROL_REG                                                           0x2956
+#define mmRDPCSTX0_RDPCSTX_DPALT_CONTROL_REG_BASE_IDX                                                  2
+
+
+// addressBlock: dpcssys_dpcssys_cr0_dispdec
+// base address: 0x0
+#define mmDPCSSYS_CR0_DPCSSYS_CR_ADDR                                                                  0x2934
+#define mmDPCSSYS_CR0_DPCSSYS_CR_ADDR_BASE_IDX                                                         2
+#define mmDPCSSYS_CR0_DPCSSYS_CR_DATA                                                                  0x2935
+#define mmDPCSSYS_CR0_DPCSSYS_CR_DATA_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx1_dispdec
+// base address: 0x360
+#define mmDPCSTX1_DPCSTX_TX_CLOCK_CNTL                                                                 0x2a00
+#define mmDPCSTX1_DPCSTX_TX_CLOCK_CNTL_BASE_IDX                                                        2
+#define mmDPCSTX1_DPCSTX_TX_CNTL                                                                       0x2a01
+#define mmDPCSTX1_DPCSTX_TX_CNTL_BASE_IDX                                                              2
+#define mmDPCSTX1_DPCSTX_CBUS_CNTL                                                                     0x2a02
+#define mmDPCSTX1_DPCSTX_CBUS_CNTL_BASE_IDX                                                            2
+#define mmDPCSTX1_DPCSTX_INTERRUPT_CNTL                                                                0x2a03
+#define mmDPCSTX1_DPCSTX_INTERRUPT_CNTL_BASE_IDX                                                       2
+#define mmDPCSTX1_DPCSTX_PLL_UPDATE_ADDR                                                               0x2a04
+#define mmDPCSTX1_DPCSTX_PLL_UPDATE_ADDR_BASE_IDX                                                      2
+#define mmDPCSTX1_DPCSTX_PLL_UPDATE_DATA                                                               0x2a05
+#define mmDPCSTX1_DPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                      2
+#define mmDPCSTX1_DPCSTX_DEBUG_CONFIG                                                                  0x2a06
+#define mmDPCSTX1_DPCSTX_DEBUG_CONFIG_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx1_dispdec
+// base address: 0x360
+#define mmRDPCSTX1_RDPCSTX_CNTL                                                                        0x2a08
+#define mmRDPCSTX1_RDPCSTX_CNTL_BASE_IDX                                                               2
+#define mmRDPCSTX1_RDPCSTX_CLOCK_CNTL                                                                  0x2a09
+#define mmRDPCSTX1_RDPCSTX_CLOCK_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_INTERRUPT_CONTROL                                                           0x2a0a
+#define mmRDPCSTX1_RDPCSTX_INTERRUPT_CONTROL_BASE_IDX                                                  2
+#define mmRDPCSTX1_RDPCSTX_PLL_UPDATE_DATA                                                             0x2a0b
+#define mmRDPCSTX1_RDPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                    2
+#define mmRDPCSTX1_RDPCS_TX_CR_ADDR                                                                    0x2a0c
+#define mmRDPCSTX1_RDPCS_TX_CR_ADDR_BASE_IDX                                                           2
+#define mmRDPCSTX1_RDPCS_TX_CR_DATA                                                                    0x2a0d
+#define mmRDPCSTX1_RDPCS_TX_CR_DATA_BASE_IDX                                                           2
+#define mmRDPCSTX1_RDPCS_TX_SRAM_CNTL                                                                  0x2a0e
+#define mmRDPCSTX1_RDPCS_TX_SRAM_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_MEM_POWER_CTRL                                                              0x2a0f
+#define mmRDPCSTX1_RDPCSTX_MEM_POWER_CTRL_BASE_IDX                                                     2
+#define mmRDPCSTX1_RDPCSTX_MEM_POWER_CTRL2                                                             0x2a10
+#define mmRDPCSTX1_RDPCSTX_MEM_POWER_CTRL2_BASE_IDX                                                    2
+#define mmRDPCSTX1_RDPCSTX_SCRATCH                                                                     0x2a11
+#define mmRDPCSTX1_RDPCSTX_SCRATCH_BASE_IDX                                                            2
+#define mmRDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG                                                    0x2a14
+#define mmRDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG_BASE_IDX                                           2
+#define mmRDPCSTX1_RDPCSTX_DEBUG_CONFIG                                                                0x2a15
+#define mmRDPCSTX1_RDPCSTX_DEBUG_CONFIG_BASE_IDX                                                       2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL0                                                                   0x2a18
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL0_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL1                                                                   0x2a19
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL1_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL2                                                                   0x2a1a
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL2_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL3                                                                   0x2a1b
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL3_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL4                                                                   0x2a1c
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL4_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL5                                                                   0x2a1d
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL5_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL6                                                                   0x2a1e
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL6_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL7                                                                   0x2a1f
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL7_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL8                                                                   0x2a20
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL8_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL9                                                                   0x2a21
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL9_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL10                                                                  0x2a22
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL10_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL11                                                                  0x2a23
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL11_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL12                                                                  0x2a24
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL12_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL13                                                                  0x2a25
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL13_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL14                                                                  0x2a26
+#define mmRDPCSTX1_RDPCSTX_PHY_CNTL14_BASE_IDX                                                         2
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE0                                                                   0x2a27
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE0_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE1                                                                   0x2a28
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE1_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE2                                                                   0x2a29
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE2_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE3                                                                   0x2a2a
+#define mmRDPCSTX1_RDPCSTX_PHY_FUSE3_BASE_IDX                                                          2
+#define mmRDPCSTX1_RDPCSTX_PHY_RX_LD_VAL                                                               0x2a2b
+#define mmRDPCSTX1_RDPCSTX_PHY_RX_LD_VAL_BASE_IDX                                                      2
+#define mmRDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3                                                        0x2a2c
+#define mmRDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3_BASE_IDX                                               2
+#define mmRDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6                                                        0x2a2d
+#define mmRDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6_BASE_IDX                                               2
+#define mmRDPCSTX1_RDPCSTX_DPALT_CONTROL_REG                                                           0x2a2e
+#define mmRDPCSTX1_RDPCSTX_DPALT_CONTROL_REG_BASE_IDX                                                  2
+
+
+// addressBlock: dpcssys_dpcssys_cr1_dispdec
+// base address: 0x360
+#define mmDPCSSYS_CR1_DPCSSYS_CR_ADDR                                                                  0x2a0c
+#define mmDPCSSYS_CR1_DPCSSYS_CR_ADDR_BASE_IDX                                                         2
+#define mmDPCSSYS_CR1_DPCSSYS_CR_DATA                                                                  0x2a0d
+#define mmDPCSSYS_CR1_DPCSSYS_CR_DATA_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx2_dispdec
+// base address: 0x6c0
+#define mmDPCSTX2_DPCSTX_TX_CLOCK_CNTL                                                                 0x2ad8
+#define mmDPCSTX2_DPCSTX_TX_CLOCK_CNTL_BASE_IDX                                                        2
+#define mmDPCSTX2_DPCSTX_TX_CNTL                                                                       0x2ad9
+#define mmDPCSTX2_DPCSTX_TX_CNTL_BASE_IDX                                                              2
+#define mmDPCSTX2_DPCSTX_CBUS_CNTL                                                                     0x2ada
+#define mmDPCSTX2_DPCSTX_CBUS_CNTL_BASE_IDX                                                            2
+#define mmDPCSTX2_DPCSTX_INTERRUPT_CNTL                                                                0x2adb
+#define mmDPCSTX2_DPCSTX_INTERRUPT_CNTL_BASE_IDX                                                       2
+#define mmDPCSTX2_DPCSTX_PLL_UPDATE_ADDR                                                               0x2adc
+#define mmDPCSTX2_DPCSTX_PLL_UPDATE_ADDR_BASE_IDX                                                      2
+#define mmDPCSTX2_DPCSTX_PLL_UPDATE_DATA                                                               0x2add
+#define mmDPCSTX2_DPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                      2
+#define mmDPCSTX2_DPCSTX_DEBUG_CONFIG                                                                  0x2ade
+#define mmDPCSTX2_DPCSTX_DEBUG_CONFIG_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx2_dispdec
+// base address: 0x6c0
+#define mmRDPCSTX2_RDPCSTX_CNTL                                                                        0x2ae0
+#define mmRDPCSTX2_RDPCSTX_CNTL_BASE_IDX                                                               2
+#define mmRDPCSTX2_RDPCSTX_CLOCK_CNTL                                                                  0x2ae1
+#define mmRDPCSTX2_RDPCSTX_CLOCK_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_INTERRUPT_CONTROL                                                           0x2ae2
+#define mmRDPCSTX2_RDPCSTX_INTERRUPT_CONTROL_BASE_IDX                                                  2
+#define mmRDPCSTX2_RDPCSTX_PLL_UPDATE_DATA                                                             0x2ae3
+#define mmRDPCSTX2_RDPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                    2
+#define mmRDPCSTX2_RDPCS_TX_CR_ADDR                                                                    0x2ae4
+#define mmRDPCSTX2_RDPCS_TX_CR_ADDR_BASE_IDX                                                           2
+#define mmRDPCSTX2_RDPCS_TX_CR_DATA                                                                    0x2ae5
+#define mmRDPCSTX2_RDPCS_TX_CR_DATA_BASE_IDX                                                           2
+#define mmRDPCSTX2_RDPCS_TX_SRAM_CNTL                                                                  0x2ae6
+#define mmRDPCSTX2_RDPCS_TX_SRAM_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_MEM_POWER_CTRL                                                              0x2ae7
+#define mmRDPCSTX2_RDPCSTX_MEM_POWER_CTRL_BASE_IDX                                                     2
+#define mmRDPCSTX2_RDPCSTX_MEM_POWER_CTRL2                                                             0x2ae8
+#define mmRDPCSTX2_RDPCSTX_MEM_POWER_CTRL2_BASE_IDX                                                    2
+#define mmRDPCSTX2_RDPCSTX_SCRATCH                                                                     0x2ae9
+#define mmRDPCSTX2_RDPCSTX_SCRATCH_BASE_IDX                                                            2
+#define mmRDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG                                                    0x2aec
+#define mmRDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG_BASE_IDX                                           2
+#define mmRDPCSTX2_RDPCSTX_DEBUG_CONFIG                                                                0x2aed
+#define mmRDPCSTX2_RDPCSTX_DEBUG_CONFIG_BASE_IDX                                                       2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL0                                                                   0x2af0
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL0_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL1                                                                   0x2af1
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL1_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL2                                                                   0x2af2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL2_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL3                                                                   0x2af3
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL3_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL4                                                                   0x2af4
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL4_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL5                                                                   0x2af5
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL5_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL6                                                                   0x2af6
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL6_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL7                                                                   0x2af7
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL7_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL8                                                                   0x2af8
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL8_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL9                                                                   0x2af9
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL9_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL10                                                                  0x2afa
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL10_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL11                                                                  0x2afb
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL11_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL12                                                                  0x2afc
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL12_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL13                                                                  0x2afd
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL13_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL14                                                                  0x2afe
+#define mmRDPCSTX2_RDPCSTX_PHY_CNTL14_BASE_IDX                                                         2
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE0                                                                   0x2aff
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE0_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE1                                                                   0x2b00
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE1_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE2                                                                   0x2b01
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE2_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE3                                                                   0x2b02
+#define mmRDPCSTX2_RDPCSTX_PHY_FUSE3_BASE_IDX                                                          2
+#define mmRDPCSTX2_RDPCSTX_PHY_RX_LD_VAL                                                               0x2b03
+#define mmRDPCSTX2_RDPCSTX_PHY_RX_LD_VAL_BASE_IDX                                                      2
+#define mmRDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3                                                        0x2b04
+#define mmRDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3_BASE_IDX                                               2
+#define mmRDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6                                                        0x2b05
+#define mmRDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6_BASE_IDX                                               2
+#define mmRDPCSTX2_RDPCSTX_DPALT_CONTROL_REG                                                           0x2b06
+#define mmRDPCSTX2_RDPCSTX_DPALT_CONTROL_REG_BASE_IDX                                                  2
+
+
+// addressBlock: dpcssys_dpcssys_cr2_dispdec
+// base address: 0x6c0
+#define mmDPCSSYS_CR2_DPCSSYS_CR_ADDR                                                                  0x2ae4
+#define mmDPCSSYS_CR2_DPCSSYS_CR_ADDR_BASE_IDX                                                         2
+#define mmDPCSSYS_CR2_DPCSSYS_CR_DATA                                                                  0x2ae5
+#define mmDPCSSYS_CR2_DPCSSYS_CR_DATA_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx3_dispdec
+// base address: 0xa20
+#define mmDPCSTX3_DPCSTX_TX_CLOCK_CNTL                                                                 0x2bb0
+#define mmDPCSTX3_DPCSTX_TX_CLOCK_CNTL_BASE_IDX                                                        2
+#define mmDPCSTX3_DPCSTX_TX_CNTL                                                                       0x2bb1
+#define mmDPCSTX3_DPCSTX_TX_CNTL_BASE_IDX                                                              2
+#define mmDPCSTX3_DPCSTX_CBUS_CNTL                                                                     0x2bb2
+#define mmDPCSTX3_DPCSTX_CBUS_CNTL_BASE_IDX                                                            2
+#define mmDPCSTX3_DPCSTX_INTERRUPT_CNTL                                                                0x2bb3
+#define mmDPCSTX3_DPCSTX_INTERRUPT_CNTL_BASE_IDX                                                       2
+#define mmDPCSTX3_DPCSTX_PLL_UPDATE_ADDR                                                               0x2bb4
+#define mmDPCSTX3_DPCSTX_PLL_UPDATE_ADDR_BASE_IDX                                                      2
+#define mmDPCSTX3_DPCSTX_PLL_UPDATE_DATA                                                               0x2bb5
+#define mmDPCSTX3_DPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                      2
+#define mmDPCSTX3_DPCSTX_DEBUG_CONFIG                                                                  0x2bb6
+#define mmDPCSTX3_DPCSTX_DEBUG_CONFIG_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx3_dispdec
+// base address: 0xa20
+#define mmRDPCSTX3_RDPCSTX_CNTL                                                                        0x2bb8
+#define mmRDPCSTX3_RDPCSTX_CNTL_BASE_IDX                                                               2
+#define mmRDPCSTX3_RDPCSTX_CLOCK_CNTL                                                                  0x2bb9
+#define mmRDPCSTX3_RDPCSTX_CLOCK_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_INTERRUPT_CONTROL                                                           0x2bba
+#define mmRDPCSTX3_RDPCSTX_INTERRUPT_CONTROL_BASE_IDX                                                  2
+#define mmRDPCSTX3_RDPCSTX_PLL_UPDATE_DATA                                                             0x2bbb
+#define mmRDPCSTX3_RDPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                    2
+#define mmRDPCSTX3_RDPCS_TX_CR_ADDR                                                                    0x2bbc
+#define mmRDPCSTX3_RDPCS_TX_CR_ADDR_BASE_IDX                                                           2
+#define mmRDPCSTX3_RDPCS_TX_CR_DATA                                                                    0x2bbd
+#define mmRDPCSTX3_RDPCS_TX_CR_DATA_BASE_IDX                                                           2
+#define mmRDPCSTX3_RDPCS_TX_SRAM_CNTL                                                                  0x2bbe
+#define mmRDPCSTX3_RDPCS_TX_SRAM_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_MEM_POWER_CTRL                                                              0x2bbf
+#define mmRDPCSTX3_RDPCSTX_MEM_POWER_CTRL_BASE_IDX                                                     2
+#define mmRDPCSTX3_RDPCSTX_MEM_POWER_CTRL2                                                             0x2bc0
+#define mmRDPCSTX3_RDPCSTX_MEM_POWER_CTRL2_BASE_IDX                                                    2
+#define mmRDPCSTX3_RDPCSTX_SCRATCH                                                                     0x2bc1
+#define mmRDPCSTX3_RDPCSTX_SCRATCH_BASE_IDX                                                            2
+#define mmRDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG                                                    0x2bc4
+#define mmRDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG_BASE_IDX                                           2
+#define mmRDPCSTX3_RDPCSTX_DEBUG_CONFIG                                                                0x2bc5
+#define mmRDPCSTX3_RDPCSTX_DEBUG_CONFIG_BASE_IDX                                                       2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL0                                                                   0x2bc8
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL0_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL1                                                                   0x2bc9
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL1_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL2                                                                   0x2bca
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL2_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL3                                                                   0x2bcb
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL3_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL4                                                                   0x2bcc
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL4_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL5                                                                   0x2bcd
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL5_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL6                                                                   0x2bce
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL6_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL7                                                                   0x2bcf
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL7_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL8                                                                   0x2bd0
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL8_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL9                                                                   0x2bd1
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL9_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL10                                                                  0x2bd2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL10_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL11                                                                  0x2bd3
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL11_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL12                                                                  0x2bd4
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL12_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL13                                                                  0x2bd5
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL13_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL14                                                                  0x2bd6
+#define mmRDPCSTX3_RDPCSTX_PHY_CNTL14_BASE_IDX                                                         2
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE0                                                                   0x2bd7
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE0_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE1                                                                   0x2bd8
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE1_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE2                                                                   0x2bd9
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE2_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE3                                                                   0x2bda
+#define mmRDPCSTX3_RDPCSTX_PHY_FUSE3_BASE_IDX                                                          2
+#define mmRDPCSTX3_RDPCSTX_PHY_RX_LD_VAL                                                               0x2bdb
+#define mmRDPCSTX3_RDPCSTX_PHY_RX_LD_VAL_BASE_IDX                                                      2
+#define mmRDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3                                                        0x2bdc
+#define mmRDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3_BASE_IDX                                               2
+#define mmRDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6                                                        0x2bdd
+#define mmRDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6_BASE_IDX                                               2
+#define mmRDPCSTX3_RDPCSTX_DPALT_CONTROL_REG                                                           0x2bde
+#define mmRDPCSTX3_RDPCSTX_DPALT_CONTROL_REG_BASE_IDX                                                  2
+
+
+// addressBlock: dpcssys_dpcssys_cr3_dispdec
+// base address: 0xa20
+#define mmDPCSSYS_CR3_DPCSSYS_CR_ADDR                                                                  0x2bbc
+#define mmDPCSSYS_CR3_DPCSSYS_CR_ADDR_BASE_IDX                                                         2
+#define mmDPCSSYS_CR3_DPCSSYS_CR_DATA                                                                  0x2bbd
+#define mmDPCSSYS_CR3_DPCSSYS_CR_DATA_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_dpcsrx_dispdec
+// base address: 0x0
+#define mmDPCSRX_PHY_CNTL                                                                              0x2c76
+#define mmDPCSRX_PHY_CNTL_BASE_IDX                                                                     2
+#define mmDPCSRX_RX_CLOCK_CNTL                                                                         0x2c78
+#define mmDPCSRX_RX_CLOCK_CNTL_BASE_IDX                                                                2
+#define mmDPCSRX_RX_CNTL                                                                               0x2c7a
+#define mmDPCSRX_RX_CNTL_BASE_IDX                                                                      2
+#define mmDPCSRX_CBUS_CNTL                                                                             0x2c7b
+#define mmDPCSRX_CBUS_CNTL_BASE_IDX                                                                    2
+#define mmDPCSRX_REG_ERROR_STATUS                                                                      0x2c7c
+#define mmDPCSRX_REG_ERROR_STATUS_BASE_IDX                                                             2
+#define mmDPCSRX_RX_ERROR_STATUS                                                                       0x2c7d
+#define mmDPCSRX_RX_ERROR_STATUS_BASE_IDX                                                              2
+#define mmDPCSRX_INDEX_MODE_ADDR                                                                       0x2c80
+#define mmDPCSRX_INDEX_MODE_ADDR_BASE_IDX                                                              2
+#define mmDPCSRX_INDEX_MODE_DATA                                                                       0x2c81
+#define mmDPCSRX_INDEX_MODE_DATA_BASE_IDX                                                              2
+#define mmDPCSRX_DEBUG_CONFIG                                                                          0x2c82
+#define mmDPCSRX_DEBUG_CONFIG_BASE_IDX                                                                 2
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx4_dispdec
+// base address: 0xd80
+#define mmDPCSTX4_DPCSTX_TX_CLOCK_CNTL                                                                 0x2c88
+#define mmDPCSTX4_DPCSTX_TX_CLOCK_CNTL_BASE_IDX                                                        2
+#define mmDPCSTX4_DPCSTX_TX_CNTL                                                                       0x2c89
+#define mmDPCSTX4_DPCSTX_TX_CNTL_BASE_IDX                                                              2
+#define mmDPCSTX4_DPCSTX_CBUS_CNTL                                                                     0x2c8a
+#define mmDPCSTX4_DPCSTX_CBUS_CNTL_BASE_IDX                                                            2
+#define mmDPCSTX4_DPCSTX_INTERRUPT_CNTL                                                                0x2c8b
+#define mmDPCSTX4_DPCSTX_INTERRUPT_CNTL_BASE_IDX                                                       2
+#define mmDPCSTX4_DPCSTX_PLL_UPDATE_ADDR                                                               0x2c8c
+#define mmDPCSTX4_DPCSTX_PLL_UPDATE_ADDR_BASE_IDX                                                      2
+#define mmDPCSTX4_DPCSTX_PLL_UPDATE_DATA                                                               0x2c8d
+#define mmDPCSTX4_DPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                      2
+#define mmDPCSTX4_DPCSTX_DEBUG_CONFIG                                                                  0x2c8e
+#define mmDPCSTX4_DPCSTX_DEBUG_CONFIG_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx4_dispdec
+// base address: 0xd80
+#define mmRDPCSTX4_RDPCSTX_CNTL                                                                        0x2c90
+#define mmRDPCSTX4_RDPCSTX_CNTL_BASE_IDX                                                               2
+#define mmRDPCSTX4_RDPCSTX_CLOCK_CNTL                                                                  0x2c91
+#define mmRDPCSTX4_RDPCSTX_CLOCK_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_INTERRUPT_CONTROL                                                           0x2c92
+#define mmRDPCSTX4_RDPCSTX_INTERRUPT_CONTROL_BASE_IDX                                                  2
+#define mmRDPCSTX4_RDPCSTX_PLL_UPDATE_DATA                                                             0x2c93
+#define mmRDPCSTX4_RDPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                    2
+#define mmRDPCSTX4_RDPCS_TX_CR_ADDR                                                                    0x2c94
+#define mmRDPCSTX4_RDPCS_TX_CR_ADDR_BASE_IDX                                                           2
+#define mmRDPCSTX4_RDPCS_TX_CR_DATA                                                                    0x2c95
+#define mmRDPCSTX4_RDPCS_TX_CR_DATA_BASE_IDX                                                           2
+#define mmRDPCSTX4_RDPCS_TX_SRAM_CNTL                                                                  0x2c96
+#define mmRDPCSTX4_RDPCS_TX_SRAM_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_MEM_POWER_CTRL                                                              0x2c97
+#define mmRDPCSTX4_RDPCSTX_MEM_POWER_CTRL_BASE_IDX                                                     2
+#define mmRDPCSTX4_RDPCSTX_MEM_POWER_CTRL2                                                             0x2c98
+#define mmRDPCSTX4_RDPCSTX_MEM_POWER_CTRL2_BASE_IDX                                                    2
+#define mmRDPCSTX4_RDPCSTX_SCRATCH                                                                     0x2c99
+#define mmRDPCSTX4_RDPCSTX_SCRATCH_BASE_IDX                                                            2
+#define mmRDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG                                                    0x2c9c
+#define mmRDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG_BASE_IDX                                           2
+#define mmRDPCSTX4_RDPCSTX_DEBUG_CONFIG                                                                0x2c9d
+#define mmRDPCSTX4_RDPCSTX_DEBUG_CONFIG_BASE_IDX                                                       2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL0                                                                   0x2ca0
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL0_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL1                                                                   0x2ca1
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL1_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL2                                                                   0x2ca2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL2_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL3                                                                   0x2ca3
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL3_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL4                                                                   0x2ca4
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL4_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL5                                                                   0x2ca5
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL5_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL6                                                                   0x2ca6
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL6_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL7                                                                   0x2ca7
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL7_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL8                                                                   0x2ca8
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL8_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL9                                                                   0x2ca9
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL9_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL10                                                                  0x2caa
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL10_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL11                                                                  0x2cab
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL11_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL12                                                                  0x2cac
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL12_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL13                                                                  0x2cad
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL13_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL14                                                                  0x2cae
+#define mmRDPCSTX4_RDPCSTX_PHY_CNTL14_BASE_IDX                                                         2
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE0                                                                   0x2caf
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE0_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE1                                                                   0x2cb0
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE1_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE2                                                                   0x2cb1
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE2_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE3                                                                   0x2cb2
+#define mmRDPCSTX4_RDPCSTX_PHY_FUSE3_BASE_IDX                                                          2
+#define mmRDPCSTX4_RDPCSTX_PHY_RX_LD_VAL                                                               0x2cb3
+#define mmRDPCSTX4_RDPCSTX_PHY_RX_LD_VAL_BASE_IDX                                                      2
+#define mmRDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3                                                        0x2cb4
+#define mmRDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3_BASE_IDX                                               2
+#define mmRDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6                                                        0x2cb5
+#define mmRDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6_BASE_IDX                                               2
+#define mmRDPCSTX4_RDPCSTX_DPALT_CONTROL_REG                                                           0x2cb6
+#define mmRDPCSTX4_RDPCSTX_DPALT_CONTROL_REG_BASE_IDX                                                  2
+
+
+// addressBlock: dpcssys_dpcssys_cr4_dispdec
+// base address: 0xd80
+#define mmDPCSSYS_CR4_DPCSSYS_CR_ADDR                                                                  0x2c94
+#define mmDPCSSYS_CR4_DPCSSYS_CR_ADDR_BASE_IDX                                                         2
+#define mmDPCSSYS_CR4_DPCSSYS_CR_DATA                                                                  0x2c95
+#define mmDPCSSYS_CR4_DPCSSYS_CR_DATA_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx5_dispdec
+// base address: 0x10e0
+#define mmDPCSTX5_DPCSTX_TX_CLOCK_CNTL                                                                 0x2d60
+#define mmDPCSTX5_DPCSTX_TX_CLOCK_CNTL_BASE_IDX                                                        2
+#define mmDPCSTX5_DPCSTX_TX_CNTL                                                                       0x2d61
+#define mmDPCSTX5_DPCSTX_TX_CNTL_BASE_IDX                                                              2
+#define mmDPCSTX5_DPCSTX_CBUS_CNTL                                                                     0x2d62
+#define mmDPCSTX5_DPCSTX_CBUS_CNTL_BASE_IDX                                                            2
+#define mmDPCSTX5_DPCSTX_INTERRUPT_CNTL                                                                0x2d63
+#define mmDPCSTX5_DPCSTX_INTERRUPT_CNTL_BASE_IDX                                                       2
+#define mmDPCSTX5_DPCSTX_PLL_UPDATE_ADDR                                                               0x2d64
+#define mmDPCSTX5_DPCSTX_PLL_UPDATE_ADDR_BASE_IDX                                                      2
+#define mmDPCSTX5_DPCSTX_PLL_UPDATE_DATA                                                               0x2d65
+#define mmDPCSTX5_DPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                      2
+#define mmDPCSTX5_DPCSTX_DEBUG_CONFIG                                                                  0x2d66
+#define mmDPCSTX5_DPCSTX_DEBUG_CONFIG_BASE_IDX                                                         2
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx5_dispdec
+// base address: 0x10e0
+#define mmRDPCSTX5_RDPCSTX_CNTL                                                                        0x2d68
+#define mmRDPCSTX5_RDPCSTX_CNTL_BASE_IDX                                                               2
+#define mmRDPCSTX5_RDPCSTX_CLOCK_CNTL                                                                  0x2d69
+#define mmRDPCSTX5_RDPCSTX_CLOCK_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_INTERRUPT_CONTROL                                                           0x2d6a
+#define mmRDPCSTX5_RDPCSTX_INTERRUPT_CONTROL_BASE_IDX                                                  2
+#define mmRDPCSTX5_RDPCSTX_PLL_UPDATE_DATA                                                             0x2d6b
+#define mmRDPCSTX5_RDPCSTX_PLL_UPDATE_DATA_BASE_IDX                                                    2
+#define mmRDPCSTX5_RDPCS_TX_CR_ADDR                                                                    0x2d6c
+#define mmRDPCSTX5_RDPCS_TX_CR_ADDR_BASE_IDX                                                           2
+#define mmRDPCSTX5_RDPCS_TX_CR_DATA                                                                    0x2d6d
+#define mmRDPCSTX5_RDPCS_TX_CR_DATA_BASE_IDX                                                           2
+#define mmRDPCSTX5_RDPCS_TX_SRAM_CNTL                                                                  0x2d6e
+#define mmRDPCSTX5_RDPCS_TX_SRAM_CNTL_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_MEM_POWER_CTRL                                                              0x2d6f
+#define mmRDPCSTX5_RDPCSTX_MEM_POWER_CTRL_BASE_IDX                                                     2
+#define mmRDPCSTX5_RDPCSTX_MEM_POWER_CTRL2                                                             0x2d70
+#define mmRDPCSTX5_RDPCSTX_MEM_POWER_CTRL2_BASE_IDX                                                    2
+#define mmRDPCSTX5_RDPCSTX_SCRATCH                                                                     0x2d71
+#define mmRDPCSTX5_RDPCSTX_SCRATCH_BASE_IDX                                                            2
+#define mmRDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG                                                    0x2d74
+#define mmRDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG_BASE_IDX                                           2
+#define mmRDPCSTX5_RDPCSTX_DEBUG_CONFIG                                                                0x2d75
+#define mmRDPCSTX5_RDPCSTX_DEBUG_CONFIG_BASE_IDX                                                       2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL0                                                                   0x2d78
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL0_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL1                                                                   0x2d79
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL1_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL2                                                                   0x2d7a
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL2_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL3                                                                   0x2d7b
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL3_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL4                                                                   0x2d7c
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL4_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL5                                                                   0x2d7d
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL5_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL6                                                                   0x2d7e
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL6_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL7                                                                   0x2d7f
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL7_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL8                                                                   0x2d80
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL8_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL9                                                                   0x2d81
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL9_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL10                                                                  0x2d82
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL10_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL11                                                                  0x2d83
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL11_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL12                                                                  0x2d84
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL12_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL13                                                                  0x2d85
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL13_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL14                                                                  0x2d86
+#define mmRDPCSTX5_RDPCSTX_PHY_CNTL14_BASE_IDX                                                         2
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE0                                                                   0x2d87
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE0_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE1                                                                   0x2d88
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE1_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE2                                                                   0x2d89
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE2_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE3                                                                   0x2d8a
+#define mmRDPCSTX5_RDPCSTX_PHY_FUSE3_BASE_IDX                                                          2
+#define mmRDPCSTX5_RDPCSTX_PHY_RX_LD_VAL                                                               0x2d8b
+#define mmRDPCSTX5_RDPCSTX_PHY_RX_LD_VAL_BASE_IDX                                                      2
+#define mmRDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3                                                        0x2d8c
+#define mmRDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3_BASE_IDX                                               2
+#define mmRDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6                                                        0x2d8d
+#define mmRDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6_BASE_IDX                                               2
+#define mmRDPCSTX5_RDPCSTX_DPALT_CONTROL_REG                                                           0x2d8e
+#define mmRDPCSTX5_RDPCSTX_DPALT_CONTROL_REG_BASE_IDX                                                  2
+
+
+// addressBlock: dpcssys_dpcssys_cr5_dispdec
+// base address: 0x10e0
+#define mmDPCSSYS_CR5_DPCSSYS_CR_ADDR                                                                  0x2d6c
+#define mmDPCSSYS_CR5_DPCSSYS_CR_ADDR_BASE_IDX                                                         2
+#define mmDPCSSYS_CR5_DPCSSYS_CR_DATA                                                                  0x2d6d
+#define mmDPCSSYS_CR5_DPCSSYS_CR_DATA_BASE_IDX                                                         2
+
+#endif
diff --git a/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_0_0_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_2_0_0_sh_mask.h
new file mode 100644 (file)
index 0000000..25e0569
--- /dev/null
@@ -0,0 +1,3912 @@
+/*
+ * Copyright (C) 2019  Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef _dpcs_2_0_0_SH_MASK_HEADER
+#define _dpcs_2_0_0_SH_MASK_HEADER
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx0_dispdec
+//DPCSTX0_DPCSTX_TX_CLOCK_CNTL
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS__SHIFT                                             0x0
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN__SHIFT                                                   0x1
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON__SHIFT                                             0x2
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0x3
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS_MASK                                               0x00000001L
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN_MASK                                                     0x00000002L
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON_MASK                                               0x00000004L
+#define DPCSTX0_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000008L
+//DPCSTX0_DPCSTX_TX_CNTL
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ__SHIFT                                                 0xc
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING__SHIFT                                             0xd
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP__SHIFT                                                      0xe
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT__SHIFT                                              0xf
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ_MASK                                                   0x00001000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING_MASK                                               0x00002000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP_MASK                                                        0x00004000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT_MASK                                                0x00008000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define DPCSTX0_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//DPCSTX0_DPCSTX_CBUS_CNTL
+#define DPCSTX0_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                               0x0
+#define DPCSTX0_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                 0x1f
+#define DPCSTX0_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                 0x000000FFL
+#define DPCSTX0_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                   0x80000000L
+//DPCSTX0_DPCSTX_INTERRUPT_CNTL
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW__SHIFT                                          0x0
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR__SHIFT                                              0x1
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                        0x4
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR__SHIFT                                             0x8
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR__SHIFT                                             0x9
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR__SHIFT                                             0xa
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR__SHIFT                                             0xb
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR__SHIFT                                               0xc
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK__SHIFT                                         0x10
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK__SHIFT                                             0x14
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW_MASK                                            0x00000001L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR_MASK                                                0x00000002L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK_MASK                                          0x00000010L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR_MASK                                               0x00000100L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR_MASK                                               0x00000200L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR_MASK                                               0x00000400L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR_MASK                                               0x00000800L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR_MASK                                                 0x00001000L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK_MASK                                           0x00010000L
+#define DPCSTX0_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK_MASK                                               0x00100000L
+//DPCSTX0_DPCSTX_PLL_UPDATE_ADDR
+#define DPCSTX0_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR__SHIFT                                           0x0
+#define DPCSTX0_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR_MASK                                             0x0003FFFFL
+//DPCSTX0_DPCSTX_PLL_UPDATE_DATA
+#define DPCSTX0_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA__SHIFT                                           0x0
+#define DPCSTX0_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA_MASK                                             0xFFFFFFFFL
+//DPCSTX0_DPCSTX_DEBUG_CONFIG
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                       0x0
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                               0x1
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL__SHIFT                                            0x4
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL__SHIFT                                       0x8
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                 0xe
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                          0x10
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                         0x00000001L
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                 0x0000000EL
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL_MASK                                              0x00000070L
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL_MASK                                         0x00000700L
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                   0x00004000L
+#define DPCSTX0_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                            0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx0_dispdec
+//RDPCSTX0_RDPCSTX_CNTL
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET__SHIFT                                                   0x0
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET__SHIFT                                                   0x4
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN__SHIFT                                                  0xc
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN__SHIFT                                                  0xd
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN__SHIFT                                                  0xe
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN__SHIFT                                                  0xf
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN__SHIFT                                              0x18
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN__SHIFT                                       0x19
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS__SHIFT                                                0x1a
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET_MASK                                                     0x00000001L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET_MASK                                                     0x00000010L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN_MASK                                                    0x00001000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN_MASK                                                    0x00002000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN_MASK                                                    0x00004000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN_MASK                                                    0x00008000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN_MASK                                                0x01000000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN_MASK                                         0x02000000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS_MASK                                                  0x04000000L
+#define RDPCSTX0_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//RDPCSTX0_RDPCSTX_CLOCK_CNTL
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN__SHIFT                                               0x0
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN__SHIFT                                          0x4
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN__SHIFT                                          0x5
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN__SHIFT                                          0x6
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN__SHIFT                                          0x7
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS__SHIFT                                        0x8
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN__SHIFT                                              0x9
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0xa
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS__SHIFT                                            0xc
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN__SHIFT                                                  0xd
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON__SHIFT                                            0xe
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS__SHIFT                                              0x10
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN_MASK                                                 0x00000001L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN_MASK                                            0x00000010L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN_MASK                                            0x00000020L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN_MASK                                            0x00000040L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN_MASK                                            0x00000080L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS_MASK                                          0x00000100L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN_MASK                                                0x00000200L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000400L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS_MASK                                              0x00001000L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN_MASK                                                    0x00002000L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON_MASK                                              0x00004000L
+#define RDPCSTX0_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS_MASK                                                0x00010000L
+//RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW__SHIFT                                    0x0
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE__SHIFT                                 0x1
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE__SHIFT                                   0x2
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR__SHIFT                                       0x4
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR__SHIFT                                       0x5
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR__SHIFT                                       0x6
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR__SHIFT                                       0x7
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR__SHIFT                                        0x8
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR__SHIFT                             0x9
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR__SHIFT                               0xa
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR__SHIFT                                         0xc
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK__SHIFT                                  0x10
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK__SHIFT                            0x11
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK__SHIFT                              0x12
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK__SHIFT                                   0x14
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW_MASK                                      0x00000001L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK                                   0x00000002L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK                                     0x00000004L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR_MASK                                         0x00000010L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR_MASK                                         0x00000020L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR_MASK                                         0x00000040L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR_MASK                                         0x00000080L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR_MASK                                          0x00000100L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR_MASK                               0x00000200L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR_MASK                                 0x00000400L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR_MASK                                           0x00001000L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK_MASK                                    0x00010000L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK_MASK                              0x00020000L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK_MASK                                0x00040000L
+#define RDPCSTX0_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK_MASK                                     0x00100000L
+//RDPCSTX0_RDPCSTX_PLL_UPDATE_DATA
+#define RDPCSTX0_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA__SHIFT                                        0x0
+#define RDPCSTX0_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA_MASK                                          0x00000001L
+//RDPCSTX0_RDPCS_TX_CR_ADDR
+#define RDPCSTX0_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                    0x0
+#define RDPCSTX0_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                      0x0000FFFFL
+//RDPCSTX0_RDPCS_TX_CR_DATA
+#define RDPCSTX0_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                    0x0
+#define RDPCSTX0_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                      0x0000FFFFL
+//RDPCSTX0_RDPCS_TX_SRAM_CNTL
+#define RDPCSTX0_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS__SHIFT                                                 0x14
+#define RDPCSTX0_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE__SHIFT                                               0x18
+#define RDPCSTX0_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE__SHIFT                                           0x1c
+#define RDPCSTX0_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS_MASK                                                   0x00100000L
+#define RDPCSTX0_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE_MASK                                                 0x03000000L
+#define RDPCSTX0_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE_MASK                                             0x30000000L
+//RDPCSTX0_RDPCSTX_MEM_POWER_CTRL
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES__SHIFT                                           0x0
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES__SHIFT                                    0xc
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1__SHIFT                                  0x1a
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2__SHIFT                                  0x1b
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1__SHIFT                                   0x1c
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2__SHIFT                                   0x1d
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM__SHIFT                                         0x1e
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES_MASK                                             0x00000FFFL
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES_MASK                                      0x03FFF000L
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1_MASK                                    0x04000000L
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2_MASK                                    0x08000000L
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1_MASK                                     0x10000000L
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2_MASK                                     0x20000000L
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM_MASK                                           0x40000000L
+//RDPCSTX0_RDPCSTX_MEM_POWER_CTRL2
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF__SHIFT                                    0x0
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO__SHIFT                                    0x2
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF_MASK                                      0x00000003L
+#define RDPCSTX0_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO_MASK                                      0x00000004L
+//RDPCSTX0_RDPCSTX_SCRATCH
+#define RDPCSTX0_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH__SHIFT                                                      0x0
+#define RDPCSTX0_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH_MASK                                                        0xFFFFFFFFL
+//RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG__SHIFT                      0x0
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS__SHIFT              0x4
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE__SHIFT                      0x8
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG_MASK                        0x00000001L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS_MASK                0x00000010L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE_MASK                        0x0000FF00L
+//RDPCSTX0_RDPCSTX_DEBUG_CONFIG
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN__SHIFT                                                    0x0
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT__SHIFT                                        0x4
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP__SHIFT                                        0x7
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK__SHIFT                                          0x8
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE__SHIFT                                       0xf
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX__SHIFT                                          0x10
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT__SHIFT                                              0x18
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN_MASK                                                      0x00000001L
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT_MASK                                          0x00000070L
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP_MASK                                          0x00000080L
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK_MASK                                            0x00001F00L
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE_MASK                                         0x00008000L
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX_MASK                                            0x00FF0000L
+#define RDPCSTX0_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MASK                                                0xFF000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT                                                    0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET__SHIFT                                            0x1
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N__SHIFT                                          0x2
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN__SHIFT                                           0x3
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT__SHIFT                                                  0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE__SHIFT                                          0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE__SHIFT                                                0x9
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL__SHIFT                                            0xe
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ__SHIFT                                                0x11
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK__SHIFT                                                0x12
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL__SHIFT                                              0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL__SHIFT                                               0x15
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN__SHIFT                                            0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT__SHIFT                                        0x19
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE__SHIFT                                               0x1c
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE__SHIFT                                             0x1d
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS__SHIFT                                                  0x1f
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK                                                      0x00000001L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET_MASK                                              0x00000002L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N_MASK                                            0x00000004L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN_MASK                                             0x00000008L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT_MASK                                                    0x00000030L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE_MASK                                            0x00000100L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE_MASK                                                  0x00003E00L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL_MASK                                              0x0001C000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ_MASK                                                  0x00020000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK_MASK                                                  0x00040000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL_MASK                                                0x00100000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL_MASK                                                 0x00200000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN_MASK                                              0x01000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT_MASK                                          0x02000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE_MASK                                                 0x10000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE_MASK                                               0x20000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS_MASK                                                    0x80000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL1
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN__SHIFT                                               0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN__SHIFT                                               0x1
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE__SHIFT                                           0x2
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN__SHIFT                                               0x3
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE__SHIFT                                           0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET__SHIFT                                              0x5
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN__SHIFT                                               0x6
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE__SHIFT                                           0x7
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN_MASK                                                 0x00000001L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN_MASK                                                 0x00000002L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE_MASK                                             0x00000004L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN_MASK                                                 0x00000008L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE_MASK                                             0x00000010L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET_MASK                                                0x00000020L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN_MASK                                                 0x00000040L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE_MASK                                             0x00000080L
+//RDPCSTX0_RDPCSTX_PHY_CNTL2
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR__SHIFT                                                  0x3
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN__SHIFT                                 0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN__SHIFT                                 0x5
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN__SHIFT                                 0x6
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN__SHIFT                                 0x7
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN__SHIFT                                 0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN__SHIFT                                 0x9
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN__SHIFT                                 0xa
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN__SHIFT                                 0xb
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR_MASK                                                    0x00000008L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN_MASK                                   0x00000010L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN_MASK                                   0x00000020L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN_MASK                                   0x00000040L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN_MASK                                   0x00000080L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN_MASK                                   0x00000100L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN_MASK                                   0x00000200L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN_MASK                                   0x00000400L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN_MASK                                   0x00000800L
+//RDPCSTX0_RDPCSTX_PHY_CNTL3
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET__SHIFT                                             0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE__SHIFT                                           0x1
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY__SHIFT                                           0x2
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN__SHIFT                                           0x3
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ__SHIFT                                               0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK__SHIFT                                               0x5
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET__SHIFT                                             0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE__SHIFT                                           0x9
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY__SHIFT                                           0xa
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN__SHIFT                                           0xb
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ__SHIFT                                               0xc
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK__SHIFT                                               0xd
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET__SHIFT                                             0x10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE__SHIFT                                           0x11
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY__SHIFT                                           0x12
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN__SHIFT                                           0x13
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ__SHIFT                                               0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK__SHIFT                                               0x15
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET__SHIFT                                             0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE__SHIFT                                           0x19
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY__SHIFT                                           0x1a
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN__SHIFT                                           0x1b
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ__SHIFT                                               0x1c
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK__SHIFT                                               0x1d
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_MASK                                               0x00000001L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_MASK                                             0x00000002L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_MASK                                             0x00000004L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_MASK                                             0x00000008L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_MASK                                                 0x00000010L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_MASK                                                 0x00000020L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_MASK                                               0x00000100L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_MASK                                             0x00000200L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_MASK                                             0x00000400L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_MASK                                             0x00000800L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_MASK                                                 0x00001000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_MASK                                                 0x00002000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_MASK                                               0x00010000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_MASK                                             0x00020000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_MASK                                             0x00040000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_MASK                                             0x00080000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_MASK                                                 0x00100000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_MASK                                                 0x00200000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_MASK                                               0x01000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_MASK                                             0x02000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_MASK                                             0x04000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_MASK                                             0x08000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_MASK                                                 0x10000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_MASK                                                 0x20000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL__SHIFT                                         0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT__SHIFT                                            0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC__SHIFT                                    0x6
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN__SHIFT                                        0x7
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL__SHIFT                                         0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT__SHIFT                                            0xc
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC__SHIFT                                    0xe
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN__SHIFT                                        0xf
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL__SHIFT                                         0x10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT__SHIFT                                            0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC__SHIFT                                    0x16
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN__SHIFT                                        0x17
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL__SHIFT                                         0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT__SHIFT                                            0x1c
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC__SHIFT                                    0x1e
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN__SHIFT                                        0x1f
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL_MASK                                           0x00000007L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT_MASK                                              0x00000010L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC_MASK                                      0x00000040L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN_MASK                                          0x00000080L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL_MASK                                           0x00000700L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT_MASK                                              0x00001000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC_MASK                                      0x00004000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN_MASK                                          0x00008000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL_MASK                                           0x00070000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT_MASK                                              0x00100000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC_MASK                                      0x00400000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN_MASK                                          0x00800000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL_MASK                                           0x07000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT_MASK                                              0x10000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC_MASK                                      0x40000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN_MASK                                          0x80000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL5
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD__SHIFT                                               0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE__SHIFT                                              0x1
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH__SHIFT                                             0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ__SHIFT                                         0x6
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT__SHIFT                                      0x7
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD__SHIFT                                               0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE__SHIFT                                              0x9
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH__SHIFT                                             0xc
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ__SHIFT                                         0xe
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT__SHIFT                                      0xf
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD__SHIFT                                               0x10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE__SHIFT                                              0x11
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH__SHIFT                                             0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ__SHIFT                                         0x16
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT__SHIFT                                      0x17
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD__SHIFT                                               0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE__SHIFT                                              0x19
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH__SHIFT                                             0x1c
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ__SHIFT                                         0x1e
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT__SHIFT                                      0x1f
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD_MASK                                                 0x00000001L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE_MASK                                                0x0000000EL
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH_MASK                                               0x00000030L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ_MASK                                           0x00000040L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT_MASK                                        0x00000080L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD_MASK                                                 0x00000100L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE_MASK                                                0x00000E00L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH_MASK                                               0x00003000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ_MASK                                           0x00004000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT_MASK                                        0x00008000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD_MASK                                                 0x00010000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE_MASK                                                0x000E0000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH_MASK                                               0x00300000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ_MASK                                           0x00400000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT_MASK                                        0x00800000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD_MASK                                                 0x01000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE_MASK                                                0x0E000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH_MASK                                               0x30000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ_MASK                                           0x40000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT_MASK                                        0x80000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL6
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE__SHIFT                                            0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN__SHIFT                                           0x2
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE__SHIFT                                            0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN__SHIFT                                           0x6
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE__SHIFT                                            0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN__SHIFT                                           0xa
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE__SHIFT                                            0xc
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN__SHIFT                                           0xe
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                                0x10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                            0x11
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                        0x12
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN__SHIFT                                            0x13
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ__SHIFT                                           0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_MASK                                              0x00000003L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_MASK                                             0x00000004L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_MASK                                              0x00000030L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_MASK                                             0x00000040L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_MASK                                              0x00000300L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_MASK                                             0x00000400L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_MASK                                              0x00003000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_MASK                                             0x00004000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                                  0x00010000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                              0x00020000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                          0x00040000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_MASK                                              0x00080000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_MASK                                             0x00100000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL7
+#define RDPCSTX0_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN__SHIFT                                       0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT__SHIFT                                      0x10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN_MASK                                         0x0000FFFFL
+#define RDPCSTX0_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT_MASK                                        0xFFFF0000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK__SHIFT                                        0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK_MASK                                          0x000FFFFFL
+//RDPCSTX0_RDPCSTX_PHY_CNTL9
+#define RDPCSTX0_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE__SHIFT                                    0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD__SHIFT                                   0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE_MASK                                      0x001FFFFFL
+#define RDPCSTX0_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD_MASK                                     0x01000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM__SHIFT                                      0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM_MASK                                        0x0000FFFFL
+//RDPCSTX0_RDPCSTX_PHY_CNTL11
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER__SHIFT                                     0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV__SHIFT                                     0x10
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV__SHIFT                                    0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV__SHIFT                           0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER_MASK                                       0x0000FFF0L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV_MASK                                       0x00070000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV_MASK                                      0x00700000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV_MASK                             0x03000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL12
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN__SHIFT                                    0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN__SHIFT                                   0x2
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV__SHIFT                                     0x4
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE__SHIFT                                          0x7
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN__SHIFT                                         0x8
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN_MASK                                      0x00000001L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN_MASK                                     0x00000004L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV_MASK                                       0x00000070L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE_MASK                                            0x00000080L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN_MASK                                           0x00000100L
+//RDPCSTX0_RDPCSTX_PHY_CNTL13
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER__SHIFT                                 0x14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN__SHIFT                                     0x1c
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN__SHIFT                                       0x1d
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE__SHIFT                               0x1e
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER_MASK                                   0x0FF00000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN_MASK                                       0x10000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN_MASK                                         0x20000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE_MASK                                 0x40000000L
+//RDPCSTX0_RDPCSTX_PHY_CNTL14
+#define RDPCSTX0_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE__SHIFT                                      0x0
+#define RDPCSTX0_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN__SHIFT                                       0x18
+#define RDPCSTX0_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN__SHIFT                                        0x1c
+#define RDPCSTX0_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE_MASK                                        0x00000001L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN_MASK                                         0x01000000L
+#define RDPCSTX0_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN_MASK                                          0x10000000L
+//RDPCSTX0_RDPCSTX_PHY_FUSE0
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I__SHIFT                                             0x12
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO__SHIFT                                        0x14
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I_MASK                                               0x000C0000L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO_MASK                                          0x00300000L
+//RDPCSTX0_RDPCSTX_PHY_FUSE1
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT__SHIFT                                          0x12
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP__SHIFT                                         0x19
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT_MASK                                            0x01FC0000L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP_MASK                                           0xFE000000L
+//RDPCSTX0_RDPCSTX_PHY_FUSE2
+#define RDPCSTX0_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX0_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX0_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX0_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX0_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST_MASK                                             0x0003F000L
+//RDPCSTX0_RDPCSTX_PHY_FUSE3
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE__SHIFT                                             0x12
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE__SHIFT                                                0x18
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE_MASK                                               0x00FC0000L
+#define RDPCSTX0_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE_MASK                                                  0x03000000L
+//RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL
+#define RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL__SHIFT                                        0x0
+#define RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL__SHIFT                                        0x8
+#define RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL_MASK                                          0x0000007FL
+#define RDPCSTX0_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL_MASK                                          0x001FFF00L
+//RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED__SHIFT                         0x0
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED__SHIFT                       0x1
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED__SHIFT                       0x2
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED__SHIFT                       0x3
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED__SHIFT                           0x4
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED__SHIFT                           0x5
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED__SHIFT                         0x8
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED__SHIFT                       0x9
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED__SHIFT                       0xa
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED__SHIFT                       0xb
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED__SHIFT                           0xc
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED__SHIFT                           0xd
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED__SHIFT                         0x10
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED__SHIFT                       0x11
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED__SHIFT                       0x12
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED__SHIFT                       0x13
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED__SHIFT                           0x14
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED__SHIFT                           0x15
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED__SHIFT                         0x18
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED__SHIFT                       0x19
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED__SHIFT                       0x1a
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED__SHIFT                       0x1b
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED__SHIFT                           0x1c
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED__SHIFT                           0x1d
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED_MASK                           0x00000001L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED_MASK                         0x00000002L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED_MASK                         0x00000004L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED_MASK                         0x00000008L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED_MASK                             0x00000010L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED_MASK                             0x00000020L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED_MASK                           0x00000100L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED_MASK                         0x00000200L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED_MASK                         0x00000400L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED_MASK                         0x00000800L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED_MASK                             0x00001000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED_MASK                             0x00002000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED_MASK                           0x00010000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED_MASK                         0x00020000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED_MASK                         0x00040000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED_MASK                         0x00080000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED_MASK                             0x00100000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED_MASK                             0x00200000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED_MASK                           0x01000000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED_MASK                         0x02000000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED_MASK                         0x04000000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED_MASK                         0x08000000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED_MASK                             0x10000000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED_MASK                             0x20000000L
+//RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED__SHIFT                        0x0
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED__SHIFT                       0x2
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED__SHIFT                        0x4
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED__SHIFT                       0x6
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED__SHIFT                        0x8
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED__SHIFT                       0xa
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED__SHIFT                        0xc
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED__SHIFT                       0xe
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED__SHIFT                            0x10
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED__SHIFT                        0x11
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED__SHIFT                    0x12
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED__SHIFT                        0x13
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED__SHIFT                       0x14
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED_MASK                          0x00000003L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED_MASK                         0x00000004L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED_MASK                          0x00000030L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED_MASK                         0x00000040L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED_MASK                          0x00000300L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED_MASK                         0x00000400L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED_MASK                          0x00003000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED_MASK                         0x00004000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED_MASK                              0x00010000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED_MASK                          0x00020000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED_MASK                      0x00040000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED_MASK                          0x00080000L
+#define RDPCSTX0_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED_MASK                         0x00100000L
+//RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG
+#define RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS__SHIFT                                  0x0
+#define RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED__SHIFT                                0x4
+#define RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE__SHIFT                                  0x8
+#define RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS_MASK                                    0x00000001L
+#define RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED_MASK                                  0x00000010L
+#define RDPCSTX0_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE_MASK                                    0x0000FF00L
+
+
+// addressBlock: dpcssys_dpcssys_cr0_dispdec
+//DPCSSYS_CR0_DPCSSYS_CR_ADDR
+#define DPCSSYS_CR0_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                  0x0
+#define DPCSSYS_CR0_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                    0x0000FFFFL
+//DPCSSYS_CR0_DPCSSYS_CR_DATA
+#define DPCSSYS_CR0_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                  0x0
+#define DPCSSYS_CR0_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                    0x0000FFFFL
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx1_dispdec
+//DPCSTX1_DPCSTX_TX_CLOCK_CNTL
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS__SHIFT                                             0x0
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN__SHIFT                                                   0x1
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON__SHIFT                                             0x2
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0x3
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS_MASK                                               0x00000001L
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN_MASK                                                     0x00000002L
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON_MASK                                               0x00000004L
+#define DPCSTX1_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000008L
+//DPCSTX1_DPCSTX_TX_CNTL
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ__SHIFT                                                 0xc
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING__SHIFT                                             0xd
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP__SHIFT                                                      0xe
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT__SHIFT                                              0xf
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ_MASK                                                   0x00001000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING_MASK                                               0x00002000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP_MASK                                                        0x00004000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT_MASK                                                0x00008000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define DPCSTX1_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//DPCSTX1_DPCSTX_CBUS_CNTL
+#define DPCSTX1_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                               0x0
+#define DPCSTX1_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                 0x1f
+#define DPCSTX1_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                 0x000000FFL
+#define DPCSTX1_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                   0x80000000L
+//DPCSTX1_DPCSTX_INTERRUPT_CNTL
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW__SHIFT                                          0x0
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR__SHIFT                                              0x1
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                        0x4
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR__SHIFT                                             0x8
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR__SHIFT                                             0x9
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR__SHIFT                                             0xa
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR__SHIFT                                             0xb
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR__SHIFT                                               0xc
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK__SHIFT                                         0x10
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK__SHIFT                                             0x14
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW_MASK                                            0x00000001L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR_MASK                                                0x00000002L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK_MASK                                          0x00000010L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR_MASK                                               0x00000100L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR_MASK                                               0x00000200L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR_MASK                                               0x00000400L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR_MASK                                               0x00000800L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR_MASK                                                 0x00001000L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK_MASK                                           0x00010000L
+#define DPCSTX1_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK_MASK                                               0x00100000L
+//DPCSTX1_DPCSTX_PLL_UPDATE_ADDR
+#define DPCSTX1_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR__SHIFT                                           0x0
+#define DPCSTX1_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR_MASK                                             0x0003FFFFL
+//DPCSTX1_DPCSTX_PLL_UPDATE_DATA
+#define DPCSTX1_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA__SHIFT                                           0x0
+#define DPCSTX1_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA_MASK                                             0xFFFFFFFFL
+//DPCSTX1_DPCSTX_DEBUG_CONFIG
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                       0x0
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                               0x1
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL__SHIFT                                            0x4
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL__SHIFT                                       0x8
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                 0xe
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                          0x10
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                         0x00000001L
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                 0x0000000EL
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL_MASK                                              0x00000070L
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL_MASK                                         0x00000700L
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                   0x00004000L
+#define DPCSTX1_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                            0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx1_dispdec
+//RDPCSTX1_RDPCSTX_CNTL
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET__SHIFT                                                   0x0
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET__SHIFT                                                   0x4
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN__SHIFT                                                  0xc
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN__SHIFT                                                  0xd
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN__SHIFT                                                  0xe
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN__SHIFT                                                  0xf
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN__SHIFT                                              0x18
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN__SHIFT                                       0x19
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS__SHIFT                                                0x1a
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET_MASK                                                     0x00000001L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET_MASK                                                     0x00000010L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN_MASK                                                    0x00001000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN_MASK                                                    0x00002000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN_MASK                                                    0x00004000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN_MASK                                                    0x00008000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN_MASK                                                0x01000000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN_MASK                                         0x02000000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS_MASK                                                  0x04000000L
+#define RDPCSTX1_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//RDPCSTX1_RDPCSTX_CLOCK_CNTL
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN__SHIFT                                               0x0
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN__SHIFT                                          0x4
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN__SHIFT                                          0x5
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN__SHIFT                                          0x6
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN__SHIFT                                          0x7
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS__SHIFT                                        0x8
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN__SHIFT                                              0x9
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0xa
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS__SHIFT                                            0xc
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN__SHIFT                                                  0xd
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON__SHIFT                                            0xe
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS__SHIFT                                              0x10
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN_MASK                                                 0x00000001L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN_MASK                                            0x00000010L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN_MASK                                            0x00000020L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN_MASK                                            0x00000040L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN_MASK                                            0x00000080L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS_MASK                                          0x00000100L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN_MASK                                                0x00000200L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000400L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS_MASK                                              0x00001000L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN_MASK                                                    0x00002000L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON_MASK                                              0x00004000L
+#define RDPCSTX1_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS_MASK                                                0x00010000L
+//RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW__SHIFT                                    0x0
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE__SHIFT                                 0x1
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE__SHIFT                                   0x2
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR__SHIFT                                       0x4
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR__SHIFT                                       0x5
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR__SHIFT                                       0x6
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR__SHIFT                                       0x7
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR__SHIFT                                        0x8
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR__SHIFT                             0x9
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR__SHIFT                               0xa
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR__SHIFT                                         0xc
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK__SHIFT                                  0x10
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK__SHIFT                            0x11
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK__SHIFT                              0x12
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK__SHIFT                                   0x14
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW_MASK                                      0x00000001L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK                                   0x00000002L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK                                     0x00000004L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR_MASK                                         0x00000010L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR_MASK                                         0x00000020L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR_MASK                                         0x00000040L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR_MASK                                         0x00000080L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR_MASK                                          0x00000100L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR_MASK                               0x00000200L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR_MASK                                 0x00000400L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR_MASK                                           0x00001000L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK_MASK                                    0x00010000L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK_MASK                              0x00020000L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK_MASK                                0x00040000L
+#define RDPCSTX1_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK_MASK                                     0x00100000L
+//RDPCSTX1_RDPCSTX_PLL_UPDATE_DATA
+#define RDPCSTX1_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA__SHIFT                                        0x0
+#define RDPCSTX1_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA_MASK                                          0x00000001L
+//RDPCSTX1_RDPCS_TX_CR_ADDR
+#define RDPCSTX1_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                    0x0
+#define RDPCSTX1_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                      0x0000FFFFL
+//RDPCSTX1_RDPCS_TX_CR_DATA
+#define RDPCSTX1_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                    0x0
+#define RDPCSTX1_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                      0x0000FFFFL
+//RDPCSTX1_RDPCS_TX_SRAM_CNTL
+#define RDPCSTX1_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS__SHIFT                                                 0x14
+#define RDPCSTX1_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE__SHIFT                                               0x18
+#define RDPCSTX1_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE__SHIFT                                           0x1c
+#define RDPCSTX1_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS_MASK                                                   0x00100000L
+#define RDPCSTX1_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE_MASK                                                 0x03000000L
+#define RDPCSTX1_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE_MASK                                             0x30000000L
+//RDPCSTX1_RDPCSTX_MEM_POWER_CTRL
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES__SHIFT                                           0x0
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES__SHIFT                                    0xc
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1__SHIFT                                  0x1a
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2__SHIFT                                  0x1b
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1__SHIFT                                   0x1c
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2__SHIFT                                   0x1d
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM__SHIFT                                         0x1e
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES_MASK                                             0x00000FFFL
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES_MASK                                      0x03FFF000L
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1_MASK                                    0x04000000L
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2_MASK                                    0x08000000L
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1_MASK                                     0x10000000L
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2_MASK                                     0x20000000L
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM_MASK                                           0x40000000L
+//RDPCSTX1_RDPCSTX_MEM_POWER_CTRL2
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF__SHIFT                                    0x0
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO__SHIFT                                    0x2
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF_MASK                                      0x00000003L
+#define RDPCSTX1_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO_MASK                                      0x00000004L
+//RDPCSTX1_RDPCSTX_SCRATCH
+#define RDPCSTX1_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH__SHIFT                                                      0x0
+#define RDPCSTX1_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH_MASK                                                        0xFFFFFFFFL
+//RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG__SHIFT                      0x0
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS__SHIFT              0x4
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE__SHIFT                      0x8
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG_MASK                        0x00000001L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS_MASK                0x00000010L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE_MASK                        0x0000FF00L
+//RDPCSTX1_RDPCSTX_DEBUG_CONFIG
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN__SHIFT                                                    0x0
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT__SHIFT                                        0x4
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP__SHIFT                                        0x7
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK__SHIFT                                          0x8
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE__SHIFT                                       0xf
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX__SHIFT                                          0x10
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT__SHIFT                                              0x18
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN_MASK                                                      0x00000001L
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT_MASK                                          0x00000070L
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP_MASK                                          0x00000080L
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK_MASK                                            0x00001F00L
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE_MASK                                         0x00008000L
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX_MASK                                            0x00FF0000L
+#define RDPCSTX1_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MASK                                                0xFF000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT                                                    0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET__SHIFT                                            0x1
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N__SHIFT                                          0x2
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN__SHIFT                                           0x3
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT__SHIFT                                                  0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE__SHIFT                                          0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE__SHIFT                                                0x9
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL__SHIFT                                            0xe
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ__SHIFT                                                0x11
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK__SHIFT                                                0x12
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL__SHIFT                                              0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL__SHIFT                                               0x15
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN__SHIFT                                            0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT__SHIFT                                        0x19
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE__SHIFT                                               0x1c
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE__SHIFT                                             0x1d
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS__SHIFT                                                  0x1f
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK                                                      0x00000001L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET_MASK                                              0x00000002L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N_MASK                                            0x00000004L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN_MASK                                             0x00000008L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT_MASK                                                    0x00000030L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE_MASK                                            0x00000100L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE_MASK                                                  0x00003E00L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL_MASK                                              0x0001C000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ_MASK                                                  0x00020000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK_MASK                                                  0x00040000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL_MASK                                                0x00100000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL_MASK                                                 0x00200000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN_MASK                                              0x01000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT_MASK                                          0x02000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE_MASK                                                 0x10000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE_MASK                                               0x20000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS_MASK                                                    0x80000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL1
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN__SHIFT                                               0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN__SHIFT                                               0x1
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE__SHIFT                                           0x2
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN__SHIFT                                               0x3
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE__SHIFT                                           0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET__SHIFT                                              0x5
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN__SHIFT                                               0x6
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE__SHIFT                                           0x7
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN_MASK                                                 0x00000001L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN_MASK                                                 0x00000002L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE_MASK                                             0x00000004L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN_MASK                                                 0x00000008L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE_MASK                                             0x00000010L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET_MASK                                                0x00000020L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN_MASK                                                 0x00000040L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE_MASK                                             0x00000080L
+//RDPCSTX1_RDPCSTX_PHY_CNTL2
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR__SHIFT                                                  0x3
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN__SHIFT                                 0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN__SHIFT                                 0x5
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN__SHIFT                                 0x6
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN__SHIFT                                 0x7
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN__SHIFT                                 0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN__SHIFT                                 0x9
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN__SHIFT                                 0xa
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN__SHIFT                                 0xb
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR_MASK                                                    0x00000008L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN_MASK                                   0x00000010L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN_MASK                                   0x00000020L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN_MASK                                   0x00000040L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN_MASK                                   0x00000080L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN_MASK                                   0x00000100L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN_MASK                                   0x00000200L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN_MASK                                   0x00000400L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN_MASK                                   0x00000800L
+//RDPCSTX1_RDPCSTX_PHY_CNTL3
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET__SHIFT                                             0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE__SHIFT                                           0x1
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY__SHIFT                                           0x2
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN__SHIFT                                           0x3
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ__SHIFT                                               0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK__SHIFT                                               0x5
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET__SHIFT                                             0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE__SHIFT                                           0x9
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY__SHIFT                                           0xa
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN__SHIFT                                           0xb
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ__SHIFT                                               0xc
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK__SHIFT                                               0xd
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET__SHIFT                                             0x10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE__SHIFT                                           0x11
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY__SHIFT                                           0x12
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN__SHIFT                                           0x13
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ__SHIFT                                               0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK__SHIFT                                               0x15
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET__SHIFT                                             0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE__SHIFT                                           0x19
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY__SHIFT                                           0x1a
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN__SHIFT                                           0x1b
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ__SHIFT                                               0x1c
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK__SHIFT                                               0x1d
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_MASK                                               0x00000001L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_MASK                                             0x00000002L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_MASK                                             0x00000004L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_MASK                                             0x00000008L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_MASK                                                 0x00000010L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_MASK                                                 0x00000020L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_MASK                                               0x00000100L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_MASK                                             0x00000200L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_MASK                                             0x00000400L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_MASK                                             0x00000800L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_MASK                                                 0x00001000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_MASK                                                 0x00002000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_MASK                                               0x00010000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_MASK                                             0x00020000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_MASK                                             0x00040000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_MASK                                             0x00080000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_MASK                                                 0x00100000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_MASK                                                 0x00200000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_MASK                                               0x01000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_MASK                                             0x02000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_MASK                                             0x04000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_MASK                                             0x08000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_MASK                                                 0x10000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_MASK                                                 0x20000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL__SHIFT                                         0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT__SHIFT                                            0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC__SHIFT                                    0x6
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN__SHIFT                                        0x7
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL__SHIFT                                         0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT__SHIFT                                            0xc
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC__SHIFT                                    0xe
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN__SHIFT                                        0xf
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL__SHIFT                                         0x10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT__SHIFT                                            0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC__SHIFT                                    0x16
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN__SHIFT                                        0x17
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL__SHIFT                                         0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT__SHIFT                                            0x1c
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC__SHIFT                                    0x1e
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN__SHIFT                                        0x1f
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL_MASK                                           0x00000007L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT_MASK                                              0x00000010L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC_MASK                                      0x00000040L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN_MASK                                          0x00000080L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL_MASK                                           0x00000700L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT_MASK                                              0x00001000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC_MASK                                      0x00004000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN_MASK                                          0x00008000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL_MASK                                           0x00070000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT_MASK                                              0x00100000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC_MASK                                      0x00400000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN_MASK                                          0x00800000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL_MASK                                           0x07000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT_MASK                                              0x10000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC_MASK                                      0x40000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN_MASK                                          0x80000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL5
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD__SHIFT                                               0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE__SHIFT                                              0x1
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH__SHIFT                                             0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ__SHIFT                                         0x6
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT__SHIFT                                      0x7
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD__SHIFT                                               0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE__SHIFT                                              0x9
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH__SHIFT                                             0xc
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ__SHIFT                                         0xe
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT__SHIFT                                      0xf
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD__SHIFT                                               0x10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE__SHIFT                                              0x11
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH__SHIFT                                             0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ__SHIFT                                         0x16
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT__SHIFT                                      0x17
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD__SHIFT                                               0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE__SHIFT                                              0x19
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH__SHIFT                                             0x1c
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ__SHIFT                                         0x1e
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT__SHIFT                                      0x1f
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD_MASK                                                 0x00000001L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE_MASK                                                0x0000000EL
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH_MASK                                               0x00000030L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ_MASK                                           0x00000040L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT_MASK                                        0x00000080L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD_MASK                                                 0x00000100L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE_MASK                                                0x00000E00L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH_MASK                                               0x00003000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ_MASK                                           0x00004000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT_MASK                                        0x00008000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD_MASK                                                 0x00010000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE_MASK                                                0x000E0000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH_MASK                                               0x00300000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ_MASK                                           0x00400000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT_MASK                                        0x00800000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD_MASK                                                 0x01000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE_MASK                                                0x0E000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH_MASK                                               0x30000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ_MASK                                           0x40000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT_MASK                                        0x80000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL6
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE__SHIFT                                            0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN__SHIFT                                           0x2
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE__SHIFT                                            0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN__SHIFT                                           0x6
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE__SHIFT                                            0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN__SHIFT                                           0xa
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE__SHIFT                                            0xc
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN__SHIFT                                           0xe
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                                0x10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                            0x11
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                        0x12
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN__SHIFT                                            0x13
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ__SHIFT                                           0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_MASK                                              0x00000003L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_MASK                                             0x00000004L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_MASK                                              0x00000030L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_MASK                                             0x00000040L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_MASK                                              0x00000300L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_MASK                                             0x00000400L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_MASK                                              0x00003000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_MASK                                             0x00004000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                                  0x00010000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                              0x00020000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                          0x00040000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_MASK                                              0x00080000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_MASK                                             0x00100000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL7
+#define RDPCSTX1_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN__SHIFT                                       0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT__SHIFT                                      0x10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN_MASK                                         0x0000FFFFL
+#define RDPCSTX1_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT_MASK                                        0xFFFF0000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK__SHIFT                                        0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK_MASK                                          0x000FFFFFL
+//RDPCSTX1_RDPCSTX_PHY_CNTL9
+#define RDPCSTX1_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE__SHIFT                                    0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD__SHIFT                                   0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE_MASK                                      0x001FFFFFL
+#define RDPCSTX1_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD_MASK                                     0x01000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM__SHIFT                                      0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM_MASK                                        0x0000FFFFL
+//RDPCSTX1_RDPCSTX_PHY_CNTL11
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER__SHIFT                                     0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV__SHIFT                                     0x10
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV__SHIFT                                    0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV__SHIFT                           0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER_MASK                                       0x0000FFF0L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV_MASK                                       0x00070000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV_MASK                                      0x00700000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV_MASK                             0x03000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL12
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN__SHIFT                                    0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN__SHIFT                                   0x2
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV__SHIFT                                     0x4
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE__SHIFT                                          0x7
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN__SHIFT                                         0x8
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN_MASK                                      0x00000001L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN_MASK                                     0x00000004L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV_MASK                                       0x00000070L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE_MASK                                            0x00000080L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN_MASK                                           0x00000100L
+//RDPCSTX1_RDPCSTX_PHY_CNTL13
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER__SHIFT                                 0x14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN__SHIFT                                     0x1c
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN__SHIFT                                       0x1d
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE__SHIFT                               0x1e
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER_MASK                                   0x0FF00000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN_MASK                                       0x10000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN_MASK                                         0x20000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE_MASK                                 0x40000000L
+//RDPCSTX1_RDPCSTX_PHY_CNTL14
+#define RDPCSTX1_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE__SHIFT                                      0x0
+#define RDPCSTX1_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN__SHIFT                                       0x18
+#define RDPCSTX1_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN__SHIFT                                        0x1c
+#define RDPCSTX1_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE_MASK                                        0x00000001L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN_MASK                                         0x01000000L
+#define RDPCSTX1_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN_MASK                                          0x10000000L
+//RDPCSTX1_RDPCSTX_PHY_FUSE0
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I__SHIFT                                             0x12
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO__SHIFT                                        0x14
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I_MASK                                               0x000C0000L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO_MASK                                          0x00300000L
+//RDPCSTX1_RDPCSTX_PHY_FUSE1
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT__SHIFT                                          0x12
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP__SHIFT                                         0x19
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT_MASK                                            0x01FC0000L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP_MASK                                           0xFE000000L
+//RDPCSTX1_RDPCSTX_PHY_FUSE2
+#define RDPCSTX1_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX1_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX1_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX1_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX1_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST_MASK                                             0x0003F000L
+//RDPCSTX1_RDPCSTX_PHY_FUSE3
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE__SHIFT                                             0x12
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE__SHIFT                                                0x18
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE_MASK                                               0x00FC0000L
+#define RDPCSTX1_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE_MASK                                                  0x03000000L
+//RDPCSTX1_RDPCSTX_PHY_RX_LD_VAL
+#define RDPCSTX1_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL__SHIFT                                        0x0
+#define RDPCSTX1_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL__SHIFT                                        0x8
+#define RDPCSTX1_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL_MASK                                          0x0000007FL
+#define RDPCSTX1_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL_MASK                                          0x001FFF00L
+//RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED__SHIFT                         0x0
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED__SHIFT                       0x1
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED__SHIFT                       0x2
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED__SHIFT                       0x3
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED__SHIFT                           0x4
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED__SHIFT                           0x5
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED__SHIFT                         0x8
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED__SHIFT                       0x9
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED__SHIFT                       0xa
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED__SHIFT                       0xb
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED__SHIFT                           0xc
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED__SHIFT                           0xd
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED__SHIFT                         0x10
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED__SHIFT                       0x11
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED__SHIFT                       0x12
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED__SHIFT                       0x13
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED__SHIFT                           0x14
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED__SHIFT                           0x15
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED__SHIFT                         0x18
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED__SHIFT                       0x19
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED__SHIFT                       0x1a
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED__SHIFT                       0x1b
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED__SHIFT                           0x1c
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED__SHIFT                           0x1d
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED_MASK                           0x00000001L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED_MASK                         0x00000002L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED_MASK                         0x00000004L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED_MASK                         0x00000008L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED_MASK                             0x00000010L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED_MASK                             0x00000020L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED_MASK                           0x00000100L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED_MASK                         0x00000200L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED_MASK                         0x00000400L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED_MASK                         0x00000800L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED_MASK                             0x00001000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED_MASK                             0x00002000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED_MASK                           0x00010000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED_MASK                         0x00020000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED_MASK                         0x00040000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED_MASK                         0x00080000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED_MASK                             0x00100000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED_MASK                             0x00200000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED_MASK                           0x01000000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED_MASK                         0x02000000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED_MASK                         0x04000000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED_MASK                         0x08000000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED_MASK                             0x10000000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED_MASK                             0x20000000L
+//RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED__SHIFT                        0x0
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED__SHIFT                       0x2
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED__SHIFT                        0x4
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED__SHIFT                       0x6
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED__SHIFT                        0x8
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED__SHIFT                       0xa
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED__SHIFT                        0xc
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED__SHIFT                       0xe
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED__SHIFT                            0x10
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED__SHIFT                        0x11
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED__SHIFT                    0x12
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED__SHIFT                        0x13
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED__SHIFT                       0x14
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED_MASK                          0x00000003L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED_MASK                         0x00000004L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED_MASK                          0x00000030L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED_MASK                         0x00000040L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED_MASK                          0x00000300L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED_MASK                         0x00000400L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED_MASK                          0x00003000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED_MASK                         0x00004000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED_MASK                              0x00010000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED_MASK                          0x00020000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED_MASK                      0x00040000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED_MASK                          0x00080000L
+#define RDPCSTX1_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED_MASK                         0x00100000L
+//RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG
+#define RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS__SHIFT                                  0x0
+#define RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED__SHIFT                                0x4
+#define RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE__SHIFT                                  0x8
+#define RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS_MASK                                    0x00000001L
+#define RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED_MASK                                  0x00000010L
+#define RDPCSTX1_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE_MASK                                    0x0000FF00L
+
+
+// addressBlock: dpcssys_dpcssys_cr1_dispdec
+//DPCSSYS_CR1_DPCSSYS_CR_ADDR
+#define DPCSSYS_CR1_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                  0x0
+#define DPCSSYS_CR1_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                    0x0000FFFFL
+//DPCSSYS_CR1_DPCSSYS_CR_DATA
+#define DPCSSYS_CR1_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                  0x0
+#define DPCSSYS_CR1_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                    0x0000FFFFL
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx2_dispdec
+//DPCSTX2_DPCSTX_TX_CLOCK_CNTL
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS__SHIFT                                             0x0
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN__SHIFT                                                   0x1
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON__SHIFT                                             0x2
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0x3
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS_MASK                                               0x00000001L
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN_MASK                                                     0x00000002L
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON_MASK                                               0x00000004L
+#define DPCSTX2_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000008L
+//DPCSTX2_DPCSTX_TX_CNTL
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ__SHIFT                                                 0xc
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING__SHIFT                                             0xd
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP__SHIFT                                                      0xe
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT__SHIFT                                              0xf
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ_MASK                                                   0x00001000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING_MASK                                               0x00002000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP_MASK                                                        0x00004000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT_MASK                                                0x00008000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define DPCSTX2_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//DPCSTX2_DPCSTX_CBUS_CNTL
+#define DPCSTX2_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                               0x0
+#define DPCSTX2_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                 0x1f
+#define DPCSTX2_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                 0x000000FFL
+#define DPCSTX2_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                   0x80000000L
+//DPCSTX2_DPCSTX_INTERRUPT_CNTL
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW__SHIFT                                          0x0
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR__SHIFT                                              0x1
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                        0x4
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR__SHIFT                                             0x8
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR__SHIFT                                             0x9
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR__SHIFT                                             0xa
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR__SHIFT                                             0xb
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR__SHIFT                                               0xc
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK__SHIFT                                         0x10
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK__SHIFT                                             0x14
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW_MASK                                            0x00000001L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR_MASK                                                0x00000002L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK_MASK                                          0x00000010L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR_MASK                                               0x00000100L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR_MASK                                               0x00000200L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR_MASK                                               0x00000400L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR_MASK                                               0x00000800L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR_MASK                                                 0x00001000L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK_MASK                                           0x00010000L
+#define DPCSTX2_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK_MASK                                               0x00100000L
+//DPCSTX2_DPCSTX_PLL_UPDATE_ADDR
+#define DPCSTX2_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR__SHIFT                                           0x0
+#define DPCSTX2_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR_MASK                                             0x0003FFFFL
+//DPCSTX2_DPCSTX_PLL_UPDATE_DATA
+#define DPCSTX2_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA__SHIFT                                           0x0
+#define DPCSTX2_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA_MASK                                             0xFFFFFFFFL
+//DPCSTX2_DPCSTX_DEBUG_CONFIG
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                       0x0
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                               0x1
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL__SHIFT                                            0x4
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL__SHIFT                                       0x8
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                 0xe
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                          0x10
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                         0x00000001L
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                 0x0000000EL
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL_MASK                                              0x00000070L
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL_MASK                                         0x00000700L
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                   0x00004000L
+#define DPCSTX2_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                            0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx2_dispdec
+//RDPCSTX2_RDPCSTX_CNTL
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET__SHIFT                                                   0x0
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET__SHIFT                                                   0x4
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN__SHIFT                                                  0xc
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN__SHIFT                                                  0xd
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN__SHIFT                                                  0xe
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN__SHIFT                                                  0xf
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN__SHIFT                                              0x18
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN__SHIFT                                       0x19
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS__SHIFT                                                0x1a
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET_MASK                                                     0x00000001L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET_MASK                                                     0x00000010L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN_MASK                                                    0x00001000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN_MASK                                                    0x00002000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN_MASK                                                    0x00004000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN_MASK                                                    0x00008000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN_MASK                                                0x01000000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN_MASK                                         0x02000000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS_MASK                                                  0x04000000L
+#define RDPCSTX2_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//RDPCSTX2_RDPCSTX_CLOCK_CNTL
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN__SHIFT                                               0x0
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN__SHIFT                                          0x4
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN__SHIFT                                          0x5
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN__SHIFT                                          0x6
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN__SHIFT                                          0x7
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS__SHIFT                                        0x8
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN__SHIFT                                              0x9
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0xa
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS__SHIFT                                            0xc
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN__SHIFT                                                  0xd
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON__SHIFT                                            0xe
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS__SHIFT                                              0x10
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN_MASK                                                 0x00000001L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN_MASK                                            0x00000010L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN_MASK                                            0x00000020L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN_MASK                                            0x00000040L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN_MASK                                            0x00000080L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS_MASK                                          0x00000100L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN_MASK                                                0x00000200L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000400L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS_MASK                                              0x00001000L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN_MASK                                                    0x00002000L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON_MASK                                              0x00004000L
+#define RDPCSTX2_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS_MASK                                                0x00010000L
+//RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW__SHIFT                                    0x0
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE__SHIFT                                 0x1
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE__SHIFT                                   0x2
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR__SHIFT                                       0x4
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR__SHIFT                                       0x5
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR__SHIFT                                       0x6
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR__SHIFT                                       0x7
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR__SHIFT                                        0x8
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR__SHIFT                             0x9
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR__SHIFT                               0xa
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR__SHIFT                                         0xc
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK__SHIFT                                  0x10
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK__SHIFT                            0x11
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK__SHIFT                              0x12
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK__SHIFT                                   0x14
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW_MASK                                      0x00000001L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK                                   0x00000002L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK                                     0x00000004L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR_MASK                                         0x00000010L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR_MASK                                         0x00000020L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR_MASK                                         0x00000040L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR_MASK                                         0x00000080L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR_MASK                                          0x00000100L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR_MASK                               0x00000200L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR_MASK                                 0x00000400L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR_MASK                                           0x00001000L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK_MASK                                    0x00010000L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK_MASK                              0x00020000L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK_MASK                                0x00040000L
+#define RDPCSTX2_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK_MASK                                     0x00100000L
+//RDPCSTX2_RDPCSTX_PLL_UPDATE_DATA
+#define RDPCSTX2_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA__SHIFT                                        0x0
+#define RDPCSTX2_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA_MASK                                          0x00000001L
+//RDPCSTX2_RDPCS_TX_CR_ADDR
+#define RDPCSTX2_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                    0x0
+#define RDPCSTX2_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                      0x0000FFFFL
+//RDPCSTX2_RDPCS_TX_CR_DATA
+#define RDPCSTX2_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                    0x0
+#define RDPCSTX2_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                      0x0000FFFFL
+//RDPCSTX2_RDPCS_TX_SRAM_CNTL
+#define RDPCSTX2_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS__SHIFT                                                 0x14
+#define RDPCSTX2_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE__SHIFT                                               0x18
+#define RDPCSTX2_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE__SHIFT                                           0x1c
+#define RDPCSTX2_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS_MASK                                                   0x00100000L
+#define RDPCSTX2_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE_MASK                                                 0x03000000L
+#define RDPCSTX2_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE_MASK                                             0x30000000L
+//RDPCSTX2_RDPCSTX_MEM_POWER_CTRL
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES__SHIFT                                           0x0
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES__SHIFT                                    0xc
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1__SHIFT                                  0x1a
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2__SHIFT                                  0x1b
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1__SHIFT                                   0x1c
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2__SHIFT                                   0x1d
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM__SHIFT                                         0x1e
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES_MASK                                             0x00000FFFL
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES_MASK                                      0x03FFF000L
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1_MASK                                    0x04000000L
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2_MASK                                    0x08000000L
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1_MASK                                     0x10000000L
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2_MASK                                     0x20000000L
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM_MASK                                           0x40000000L
+//RDPCSTX2_RDPCSTX_MEM_POWER_CTRL2
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF__SHIFT                                    0x0
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO__SHIFT                                    0x2
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF_MASK                                      0x00000003L
+#define RDPCSTX2_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO_MASK                                      0x00000004L
+//RDPCSTX2_RDPCSTX_SCRATCH
+#define RDPCSTX2_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH__SHIFT                                                      0x0
+#define RDPCSTX2_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH_MASK                                                        0xFFFFFFFFL
+//RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG__SHIFT                      0x0
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS__SHIFT              0x4
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE__SHIFT                      0x8
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG_MASK                        0x00000001L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS_MASK                0x00000010L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE_MASK                        0x0000FF00L
+//RDPCSTX2_RDPCSTX_DEBUG_CONFIG
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN__SHIFT                                                    0x0
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT__SHIFT                                        0x4
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP__SHIFT                                        0x7
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK__SHIFT                                          0x8
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE__SHIFT                                       0xf
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX__SHIFT                                          0x10
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT__SHIFT                                              0x18
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN_MASK                                                      0x00000001L
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT_MASK                                          0x00000070L
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP_MASK                                          0x00000080L
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK_MASK                                            0x00001F00L
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE_MASK                                         0x00008000L
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX_MASK                                            0x00FF0000L
+#define RDPCSTX2_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MASK                                                0xFF000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT                                                    0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET__SHIFT                                            0x1
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N__SHIFT                                          0x2
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN__SHIFT                                           0x3
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT__SHIFT                                                  0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE__SHIFT                                          0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE__SHIFT                                                0x9
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL__SHIFT                                            0xe
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ__SHIFT                                                0x11
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK__SHIFT                                                0x12
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL__SHIFT                                              0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL__SHIFT                                               0x15
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN__SHIFT                                            0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT__SHIFT                                        0x19
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE__SHIFT                                               0x1c
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE__SHIFT                                             0x1d
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS__SHIFT                                                  0x1f
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK                                                      0x00000001L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET_MASK                                              0x00000002L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N_MASK                                            0x00000004L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN_MASK                                             0x00000008L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT_MASK                                                    0x00000030L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE_MASK                                            0x00000100L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE_MASK                                                  0x00003E00L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL_MASK                                              0x0001C000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ_MASK                                                  0x00020000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK_MASK                                                  0x00040000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL_MASK                                                0x00100000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL_MASK                                                 0x00200000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN_MASK                                              0x01000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT_MASK                                          0x02000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE_MASK                                                 0x10000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE_MASK                                               0x20000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS_MASK                                                    0x80000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL1
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN__SHIFT                                               0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN__SHIFT                                               0x1
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE__SHIFT                                           0x2
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN__SHIFT                                               0x3
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE__SHIFT                                           0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET__SHIFT                                              0x5
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN__SHIFT                                               0x6
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE__SHIFT                                           0x7
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN_MASK                                                 0x00000001L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN_MASK                                                 0x00000002L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE_MASK                                             0x00000004L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN_MASK                                                 0x00000008L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE_MASK                                             0x00000010L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET_MASK                                                0x00000020L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN_MASK                                                 0x00000040L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE_MASK                                             0x00000080L
+//RDPCSTX2_RDPCSTX_PHY_CNTL2
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR__SHIFT                                                  0x3
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN__SHIFT                                 0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN__SHIFT                                 0x5
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN__SHIFT                                 0x6
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN__SHIFT                                 0x7
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN__SHIFT                                 0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN__SHIFT                                 0x9
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN__SHIFT                                 0xa
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN__SHIFT                                 0xb
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR_MASK                                                    0x00000008L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN_MASK                                   0x00000010L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN_MASK                                   0x00000020L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN_MASK                                   0x00000040L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN_MASK                                   0x00000080L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN_MASK                                   0x00000100L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN_MASK                                   0x00000200L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN_MASK                                   0x00000400L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN_MASK                                   0x00000800L
+//RDPCSTX2_RDPCSTX_PHY_CNTL3
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET__SHIFT                                             0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE__SHIFT                                           0x1
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY__SHIFT                                           0x2
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN__SHIFT                                           0x3
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ__SHIFT                                               0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK__SHIFT                                               0x5
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET__SHIFT                                             0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE__SHIFT                                           0x9
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY__SHIFT                                           0xa
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN__SHIFT                                           0xb
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ__SHIFT                                               0xc
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK__SHIFT                                               0xd
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET__SHIFT                                             0x10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE__SHIFT                                           0x11
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY__SHIFT                                           0x12
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN__SHIFT                                           0x13
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ__SHIFT                                               0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK__SHIFT                                               0x15
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET__SHIFT                                             0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE__SHIFT                                           0x19
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY__SHIFT                                           0x1a
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN__SHIFT                                           0x1b
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ__SHIFT                                               0x1c
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK__SHIFT                                               0x1d
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_MASK                                               0x00000001L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_MASK                                             0x00000002L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_MASK                                             0x00000004L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_MASK                                             0x00000008L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_MASK                                                 0x00000010L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_MASK                                                 0x00000020L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_MASK                                               0x00000100L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_MASK                                             0x00000200L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_MASK                                             0x00000400L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_MASK                                             0x00000800L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_MASK                                                 0x00001000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_MASK                                                 0x00002000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_MASK                                               0x00010000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_MASK                                             0x00020000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_MASK                                             0x00040000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_MASK                                             0x00080000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_MASK                                                 0x00100000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_MASK                                                 0x00200000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_MASK                                               0x01000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_MASK                                             0x02000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_MASK                                             0x04000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_MASK                                             0x08000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_MASK                                                 0x10000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_MASK                                                 0x20000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL__SHIFT                                         0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT__SHIFT                                            0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC__SHIFT                                    0x6
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN__SHIFT                                        0x7
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL__SHIFT                                         0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT__SHIFT                                            0xc
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC__SHIFT                                    0xe
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN__SHIFT                                        0xf
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL__SHIFT                                         0x10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT__SHIFT                                            0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC__SHIFT                                    0x16
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN__SHIFT                                        0x17
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL__SHIFT                                         0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT__SHIFT                                            0x1c
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC__SHIFT                                    0x1e
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN__SHIFT                                        0x1f
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL_MASK                                           0x00000007L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT_MASK                                              0x00000010L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC_MASK                                      0x00000040L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN_MASK                                          0x00000080L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL_MASK                                           0x00000700L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT_MASK                                              0x00001000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC_MASK                                      0x00004000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN_MASK                                          0x00008000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL_MASK                                           0x00070000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT_MASK                                              0x00100000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC_MASK                                      0x00400000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN_MASK                                          0x00800000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL_MASK                                           0x07000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT_MASK                                              0x10000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC_MASK                                      0x40000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN_MASK                                          0x80000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL5
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD__SHIFT                                               0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE__SHIFT                                              0x1
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH__SHIFT                                             0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ__SHIFT                                         0x6
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT__SHIFT                                      0x7
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD__SHIFT                                               0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE__SHIFT                                              0x9
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH__SHIFT                                             0xc
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ__SHIFT                                         0xe
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT__SHIFT                                      0xf
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD__SHIFT                                               0x10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE__SHIFT                                              0x11
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH__SHIFT                                             0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ__SHIFT                                         0x16
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT__SHIFT                                      0x17
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD__SHIFT                                               0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE__SHIFT                                              0x19
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH__SHIFT                                             0x1c
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ__SHIFT                                         0x1e
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT__SHIFT                                      0x1f
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD_MASK                                                 0x00000001L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE_MASK                                                0x0000000EL
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH_MASK                                               0x00000030L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ_MASK                                           0x00000040L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT_MASK                                        0x00000080L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD_MASK                                                 0x00000100L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE_MASK                                                0x00000E00L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH_MASK                                               0x00003000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ_MASK                                           0x00004000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT_MASK                                        0x00008000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD_MASK                                                 0x00010000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE_MASK                                                0x000E0000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH_MASK                                               0x00300000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ_MASK                                           0x00400000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT_MASK                                        0x00800000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD_MASK                                                 0x01000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE_MASK                                                0x0E000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH_MASK                                               0x30000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ_MASK                                           0x40000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT_MASK                                        0x80000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL6
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE__SHIFT                                            0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN__SHIFT                                           0x2
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE__SHIFT                                            0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN__SHIFT                                           0x6
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE__SHIFT                                            0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN__SHIFT                                           0xa
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE__SHIFT                                            0xc
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN__SHIFT                                           0xe
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                                0x10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                            0x11
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                        0x12
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN__SHIFT                                            0x13
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ__SHIFT                                           0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_MASK                                              0x00000003L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_MASK                                             0x00000004L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_MASK                                              0x00000030L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_MASK                                             0x00000040L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_MASK                                              0x00000300L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_MASK                                             0x00000400L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_MASK                                              0x00003000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_MASK                                             0x00004000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                                  0x00010000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                              0x00020000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                          0x00040000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_MASK                                              0x00080000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_MASK                                             0x00100000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL7
+#define RDPCSTX2_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN__SHIFT                                       0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT__SHIFT                                      0x10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN_MASK                                         0x0000FFFFL
+#define RDPCSTX2_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT_MASK                                        0xFFFF0000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK__SHIFT                                        0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK_MASK                                          0x000FFFFFL
+//RDPCSTX2_RDPCSTX_PHY_CNTL9
+#define RDPCSTX2_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE__SHIFT                                    0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD__SHIFT                                   0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE_MASK                                      0x001FFFFFL
+#define RDPCSTX2_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD_MASK                                     0x01000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM__SHIFT                                      0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM_MASK                                        0x0000FFFFL
+//RDPCSTX2_RDPCSTX_PHY_CNTL11
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER__SHIFT                                     0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV__SHIFT                                     0x10
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV__SHIFT                                    0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV__SHIFT                           0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER_MASK                                       0x0000FFF0L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV_MASK                                       0x00070000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV_MASK                                      0x00700000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV_MASK                             0x03000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL12
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN__SHIFT                                    0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN__SHIFT                                   0x2
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV__SHIFT                                     0x4
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE__SHIFT                                          0x7
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN__SHIFT                                         0x8
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN_MASK                                      0x00000001L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN_MASK                                     0x00000004L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV_MASK                                       0x00000070L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE_MASK                                            0x00000080L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN_MASK                                           0x00000100L
+//RDPCSTX2_RDPCSTX_PHY_CNTL13
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER__SHIFT                                 0x14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN__SHIFT                                     0x1c
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN__SHIFT                                       0x1d
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE__SHIFT                               0x1e
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER_MASK                                   0x0FF00000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN_MASK                                       0x10000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN_MASK                                         0x20000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE_MASK                                 0x40000000L
+//RDPCSTX2_RDPCSTX_PHY_CNTL14
+#define RDPCSTX2_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE__SHIFT                                      0x0
+#define RDPCSTX2_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN__SHIFT                                       0x18
+#define RDPCSTX2_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN__SHIFT                                        0x1c
+#define RDPCSTX2_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE_MASK                                        0x00000001L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN_MASK                                         0x01000000L
+#define RDPCSTX2_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN_MASK                                          0x10000000L
+//RDPCSTX2_RDPCSTX_PHY_FUSE0
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I__SHIFT                                             0x12
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO__SHIFT                                        0x14
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I_MASK                                               0x000C0000L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO_MASK                                          0x00300000L
+//RDPCSTX2_RDPCSTX_PHY_FUSE1
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT__SHIFT                                          0x12
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP__SHIFT                                         0x19
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT_MASK                                            0x01FC0000L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP_MASK                                           0xFE000000L
+//RDPCSTX2_RDPCSTX_PHY_FUSE2
+#define RDPCSTX2_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX2_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX2_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX2_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX2_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST_MASK                                             0x0003F000L
+//RDPCSTX2_RDPCSTX_PHY_FUSE3
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE__SHIFT                                             0x12
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE__SHIFT                                                0x18
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE_MASK                                               0x00FC0000L
+#define RDPCSTX2_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE_MASK                                                  0x03000000L
+//RDPCSTX2_RDPCSTX_PHY_RX_LD_VAL
+#define RDPCSTX2_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL__SHIFT                                        0x0
+#define RDPCSTX2_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL__SHIFT                                        0x8
+#define RDPCSTX2_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL_MASK                                          0x0000007FL
+#define RDPCSTX2_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL_MASK                                          0x001FFF00L
+//RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED__SHIFT                         0x0
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED__SHIFT                       0x1
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED__SHIFT                       0x2
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED__SHIFT                       0x3
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED__SHIFT                           0x4
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED__SHIFT                           0x5
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED__SHIFT                         0x8
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED__SHIFT                       0x9
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED__SHIFT                       0xa
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED__SHIFT                       0xb
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED__SHIFT                           0xc
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED__SHIFT                           0xd
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED__SHIFT                         0x10
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED__SHIFT                       0x11
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED__SHIFT                       0x12
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED__SHIFT                       0x13
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED__SHIFT                           0x14
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED__SHIFT                           0x15
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED__SHIFT                         0x18
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED__SHIFT                       0x19
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED__SHIFT                       0x1a
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED__SHIFT                       0x1b
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED__SHIFT                           0x1c
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED__SHIFT                           0x1d
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED_MASK                           0x00000001L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED_MASK                         0x00000002L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED_MASK                         0x00000004L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED_MASK                         0x00000008L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED_MASK                             0x00000010L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED_MASK                             0x00000020L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED_MASK                           0x00000100L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED_MASK                         0x00000200L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED_MASK                         0x00000400L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED_MASK                         0x00000800L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED_MASK                             0x00001000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED_MASK                             0x00002000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED_MASK                           0x00010000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED_MASK                         0x00020000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED_MASK                         0x00040000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED_MASK                         0x00080000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED_MASK                             0x00100000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED_MASK                             0x00200000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED_MASK                           0x01000000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED_MASK                         0x02000000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED_MASK                         0x04000000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED_MASK                         0x08000000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED_MASK                             0x10000000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED_MASK                             0x20000000L
+//RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED__SHIFT                        0x0
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED__SHIFT                       0x2
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED__SHIFT                        0x4
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED__SHIFT                       0x6
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED__SHIFT                        0x8
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED__SHIFT                       0xa
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED__SHIFT                        0xc
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED__SHIFT                       0xe
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED__SHIFT                            0x10
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED__SHIFT                        0x11
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED__SHIFT                    0x12
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED__SHIFT                        0x13
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED__SHIFT                       0x14
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED_MASK                          0x00000003L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED_MASK                         0x00000004L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED_MASK                          0x00000030L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED_MASK                         0x00000040L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED_MASK                          0x00000300L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED_MASK                         0x00000400L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED_MASK                          0x00003000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED_MASK                         0x00004000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED_MASK                              0x00010000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED_MASK                          0x00020000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED_MASK                      0x00040000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED_MASK                          0x00080000L
+#define RDPCSTX2_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED_MASK                         0x00100000L
+//RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG
+#define RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS__SHIFT                                  0x0
+#define RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED__SHIFT                                0x4
+#define RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE__SHIFT                                  0x8
+#define RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS_MASK                                    0x00000001L
+#define RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED_MASK                                  0x00000010L
+#define RDPCSTX2_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE_MASK                                    0x0000FF00L
+
+
+// addressBlock: dpcssys_dpcssys_cr2_dispdec
+//DPCSSYS_CR2_DPCSSYS_CR_ADDR
+#define DPCSSYS_CR2_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                  0x0
+#define DPCSSYS_CR2_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                    0x0000FFFFL
+//DPCSSYS_CR2_DPCSSYS_CR_DATA
+#define DPCSSYS_CR2_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                  0x0
+#define DPCSSYS_CR2_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                    0x0000FFFFL
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx3_dispdec
+//DPCSTX3_DPCSTX_TX_CLOCK_CNTL
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS__SHIFT                                             0x0
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN__SHIFT                                                   0x1
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON__SHIFT                                             0x2
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0x3
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS_MASK                                               0x00000001L
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN_MASK                                                     0x00000002L
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON_MASK                                               0x00000004L
+#define DPCSTX3_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000008L
+//DPCSTX3_DPCSTX_TX_CNTL
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ__SHIFT                                                 0xc
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING__SHIFT                                             0xd
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP__SHIFT                                                      0xe
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT__SHIFT                                              0xf
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ_MASK                                                   0x00001000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING_MASK                                               0x00002000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP_MASK                                                        0x00004000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT_MASK                                                0x00008000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define DPCSTX3_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//DPCSTX3_DPCSTX_CBUS_CNTL
+#define DPCSTX3_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                               0x0
+#define DPCSTX3_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                 0x1f
+#define DPCSTX3_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                 0x000000FFL
+#define DPCSTX3_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                   0x80000000L
+//DPCSTX3_DPCSTX_INTERRUPT_CNTL
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW__SHIFT                                          0x0
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR__SHIFT                                              0x1
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                        0x4
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR__SHIFT                                             0x8
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR__SHIFT                                             0x9
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR__SHIFT                                             0xa
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR__SHIFT                                             0xb
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR__SHIFT                                               0xc
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK__SHIFT                                         0x10
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK__SHIFT                                             0x14
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW_MASK                                            0x00000001L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR_MASK                                                0x00000002L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK_MASK                                          0x00000010L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR_MASK                                               0x00000100L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR_MASK                                               0x00000200L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR_MASK                                               0x00000400L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR_MASK                                               0x00000800L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR_MASK                                                 0x00001000L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK_MASK                                           0x00010000L
+#define DPCSTX3_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK_MASK                                               0x00100000L
+//DPCSTX3_DPCSTX_PLL_UPDATE_ADDR
+#define DPCSTX3_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR__SHIFT                                           0x0
+#define DPCSTX3_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR_MASK                                             0x0003FFFFL
+//DPCSTX3_DPCSTX_PLL_UPDATE_DATA
+#define DPCSTX3_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA__SHIFT                                           0x0
+#define DPCSTX3_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA_MASK                                             0xFFFFFFFFL
+//DPCSTX3_DPCSTX_DEBUG_CONFIG
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                       0x0
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                               0x1
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL__SHIFT                                            0x4
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL__SHIFT                                       0x8
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                 0xe
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                          0x10
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                         0x00000001L
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                 0x0000000EL
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL_MASK                                              0x00000070L
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL_MASK                                         0x00000700L
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                   0x00004000L
+#define DPCSTX3_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                            0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx3_dispdec
+//RDPCSTX3_RDPCSTX_CNTL
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET__SHIFT                                                   0x0
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET__SHIFT                                                   0x4
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN__SHIFT                                                  0xc
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN__SHIFT                                                  0xd
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN__SHIFT                                                  0xe
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN__SHIFT                                                  0xf
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN__SHIFT                                              0x18
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN__SHIFT                                       0x19
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS__SHIFT                                                0x1a
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET_MASK                                                     0x00000001L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET_MASK                                                     0x00000010L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN_MASK                                                    0x00001000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN_MASK                                                    0x00002000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN_MASK                                                    0x00004000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN_MASK                                                    0x00008000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN_MASK                                                0x01000000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN_MASK                                         0x02000000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS_MASK                                                  0x04000000L
+#define RDPCSTX3_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//RDPCSTX3_RDPCSTX_CLOCK_CNTL
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN__SHIFT                                               0x0
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN__SHIFT                                          0x4
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN__SHIFT                                          0x5
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN__SHIFT                                          0x6
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN__SHIFT                                          0x7
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS__SHIFT                                        0x8
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN__SHIFT                                              0x9
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0xa
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS__SHIFT                                            0xc
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN__SHIFT                                                  0xd
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON__SHIFT                                            0xe
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS__SHIFT                                              0x10
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN_MASK                                                 0x00000001L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN_MASK                                            0x00000010L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN_MASK                                            0x00000020L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN_MASK                                            0x00000040L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN_MASK                                            0x00000080L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS_MASK                                          0x00000100L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN_MASK                                                0x00000200L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000400L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS_MASK                                              0x00001000L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN_MASK                                                    0x00002000L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON_MASK                                              0x00004000L
+#define RDPCSTX3_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS_MASK                                                0x00010000L
+//RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW__SHIFT                                    0x0
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE__SHIFT                                 0x1
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE__SHIFT                                   0x2
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR__SHIFT                                       0x4
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR__SHIFT                                       0x5
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR__SHIFT                                       0x6
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR__SHIFT                                       0x7
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR__SHIFT                                        0x8
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR__SHIFT                             0x9
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR__SHIFT                               0xa
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR__SHIFT                                         0xc
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK__SHIFT                                  0x10
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK__SHIFT                            0x11
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK__SHIFT                              0x12
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK__SHIFT                                   0x14
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW_MASK                                      0x00000001L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK                                   0x00000002L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK                                     0x00000004L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR_MASK                                         0x00000010L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR_MASK                                         0x00000020L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR_MASK                                         0x00000040L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR_MASK                                         0x00000080L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR_MASK                                          0x00000100L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR_MASK                               0x00000200L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR_MASK                                 0x00000400L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR_MASK                                           0x00001000L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK_MASK                                    0x00010000L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK_MASK                              0x00020000L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK_MASK                                0x00040000L
+#define RDPCSTX3_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK_MASK                                     0x00100000L
+//RDPCSTX3_RDPCSTX_PLL_UPDATE_DATA
+#define RDPCSTX3_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA__SHIFT                                        0x0
+#define RDPCSTX3_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA_MASK                                          0x00000001L
+//RDPCSTX3_RDPCS_TX_CR_ADDR
+#define RDPCSTX3_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                    0x0
+#define RDPCSTX3_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                      0x0000FFFFL
+//RDPCSTX3_RDPCS_TX_CR_DATA
+#define RDPCSTX3_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                    0x0
+#define RDPCSTX3_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                      0x0000FFFFL
+//RDPCSTX3_RDPCS_TX_SRAM_CNTL
+#define RDPCSTX3_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS__SHIFT                                                 0x14
+#define RDPCSTX3_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE__SHIFT                                               0x18
+#define RDPCSTX3_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE__SHIFT                                           0x1c
+#define RDPCSTX3_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS_MASK                                                   0x00100000L
+#define RDPCSTX3_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE_MASK                                                 0x03000000L
+#define RDPCSTX3_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE_MASK                                             0x30000000L
+//RDPCSTX3_RDPCSTX_MEM_POWER_CTRL
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES__SHIFT                                           0x0
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES__SHIFT                                    0xc
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1__SHIFT                                  0x1a
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2__SHIFT                                  0x1b
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1__SHIFT                                   0x1c
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2__SHIFT                                   0x1d
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM__SHIFT                                         0x1e
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES_MASK                                             0x00000FFFL
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES_MASK                                      0x03FFF000L
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1_MASK                                    0x04000000L
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2_MASK                                    0x08000000L
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1_MASK                                     0x10000000L
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2_MASK                                     0x20000000L
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM_MASK                                           0x40000000L
+//RDPCSTX3_RDPCSTX_MEM_POWER_CTRL2
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF__SHIFT                                    0x0
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO__SHIFT                                    0x2
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF_MASK                                      0x00000003L
+#define RDPCSTX3_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO_MASK                                      0x00000004L
+//RDPCSTX3_RDPCSTX_SCRATCH
+#define RDPCSTX3_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH__SHIFT                                                      0x0
+#define RDPCSTX3_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH_MASK                                                        0xFFFFFFFFL
+//RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG__SHIFT                      0x0
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS__SHIFT              0x4
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE__SHIFT                      0x8
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG_MASK                        0x00000001L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS_MASK                0x00000010L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE_MASK                        0x0000FF00L
+//RDPCSTX3_RDPCSTX_DEBUG_CONFIG
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN__SHIFT                                                    0x0
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT__SHIFT                                        0x4
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP__SHIFT                                        0x7
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK__SHIFT                                          0x8
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE__SHIFT                                       0xf
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX__SHIFT                                          0x10
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT__SHIFT                                              0x18
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN_MASK                                                      0x00000001L
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT_MASK                                          0x00000070L
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP_MASK                                          0x00000080L
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK_MASK                                            0x00001F00L
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE_MASK                                         0x00008000L
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX_MASK                                            0x00FF0000L
+#define RDPCSTX3_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MASK                                                0xFF000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT                                                    0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET__SHIFT                                            0x1
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N__SHIFT                                          0x2
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN__SHIFT                                           0x3
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT__SHIFT                                                  0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE__SHIFT                                          0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE__SHIFT                                                0x9
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL__SHIFT                                            0xe
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ__SHIFT                                                0x11
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK__SHIFT                                                0x12
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL__SHIFT                                              0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL__SHIFT                                               0x15
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN__SHIFT                                            0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT__SHIFT                                        0x19
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE__SHIFT                                               0x1c
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE__SHIFT                                             0x1d
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS__SHIFT                                                  0x1f
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK                                                      0x00000001L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET_MASK                                              0x00000002L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N_MASK                                            0x00000004L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN_MASK                                             0x00000008L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT_MASK                                                    0x00000030L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE_MASK                                            0x00000100L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE_MASK                                                  0x00003E00L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL_MASK                                              0x0001C000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ_MASK                                                  0x00020000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK_MASK                                                  0x00040000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL_MASK                                                0x00100000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL_MASK                                                 0x00200000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN_MASK                                              0x01000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT_MASK                                          0x02000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE_MASK                                                 0x10000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE_MASK                                               0x20000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS_MASK                                                    0x80000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL1
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN__SHIFT                                               0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN__SHIFT                                               0x1
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE__SHIFT                                           0x2
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN__SHIFT                                               0x3
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE__SHIFT                                           0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET__SHIFT                                              0x5
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN__SHIFT                                               0x6
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE__SHIFT                                           0x7
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN_MASK                                                 0x00000001L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN_MASK                                                 0x00000002L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE_MASK                                             0x00000004L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN_MASK                                                 0x00000008L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE_MASK                                             0x00000010L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET_MASK                                                0x00000020L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN_MASK                                                 0x00000040L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE_MASK                                             0x00000080L
+//RDPCSTX3_RDPCSTX_PHY_CNTL2
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR__SHIFT                                                  0x3
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN__SHIFT                                 0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN__SHIFT                                 0x5
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN__SHIFT                                 0x6
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN__SHIFT                                 0x7
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN__SHIFT                                 0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN__SHIFT                                 0x9
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN__SHIFT                                 0xa
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN__SHIFT                                 0xb
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR_MASK                                                    0x00000008L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN_MASK                                   0x00000010L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN_MASK                                   0x00000020L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN_MASK                                   0x00000040L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN_MASK                                   0x00000080L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN_MASK                                   0x00000100L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN_MASK                                   0x00000200L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN_MASK                                   0x00000400L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN_MASK                                   0x00000800L
+//RDPCSTX3_RDPCSTX_PHY_CNTL3
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET__SHIFT                                             0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE__SHIFT                                           0x1
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY__SHIFT                                           0x2
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN__SHIFT                                           0x3
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ__SHIFT                                               0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK__SHIFT                                               0x5
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET__SHIFT                                             0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE__SHIFT                                           0x9
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY__SHIFT                                           0xa
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN__SHIFT                                           0xb
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ__SHIFT                                               0xc
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK__SHIFT                                               0xd
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET__SHIFT                                             0x10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE__SHIFT                                           0x11
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY__SHIFT                                           0x12
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN__SHIFT                                           0x13
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ__SHIFT                                               0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK__SHIFT                                               0x15
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET__SHIFT                                             0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE__SHIFT                                           0x19
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY__SHIFT                                           0x1a
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN__SHIFT                                           0x1b
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ__SHIFT                                               0x1c
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK__SHIFT                                               0x1d
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_MASK                                               0x00000001L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_MASK                                             0x00000002L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_MASK                                             0x00000004L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_MASK                                             0x00000008L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_MASK                                                 0x00000010L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_MASK                                                 0x00000020L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_MASK                                               0x00000100L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_MASK                                             0x00000200L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_MASK                                             0x00000400L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_MASK                                             0x00000800L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_MASK                                                 0x00001000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_MASK                                                 0x00002000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_MASK                                               0x00010000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_MASK                                             0x00020000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_MASK                                             0x00040000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_MASK                                             0x00080000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_MASK                                                 0x00100000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_MASK                                                 0x00200000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_MASK                                               0x01000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_MASK                                             0x02000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_MASK                                             0x04000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_MASK                                             0x08000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_MASK                                                 0x10000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_MASK                                                 0x20000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL__SHIFT                                         0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT__SHIFT                                            0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC__SHIFT                                    0x6
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN__SHIFT                                        0x7
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL__SHIFT                                         0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT__SHIFT                                            0xc
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC__SHIFT                                    0xe
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN__SHIFT                                        0xf
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL__SHIFT                                         0x10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT__SHIFT                                            0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC__SHIFT                                    0x16
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN__SHIFT                                        0x17
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL__SHIFT                                         0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT__SHIFT                                            0x1c
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC__SHIFT                                    0x1e
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN__SHIFT                                        0x1f
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL_MASK                                           0x00000007L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT_MASK                                              0x00000010L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC_MASK                                      0x00000040L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN_MASK                                          0x00000080L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL_MASK                                           0x00000700L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT_MASK                                              0x00001000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC_MASK                                      0x00004000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN_MASK                                          0x00008000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL_MASK                                           0x00070000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT_MASK                                              0x00100000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC_MASK                                      0x00400000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN_MASK                                          0x00800000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL_MASK                                           0x07000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT_MASK                                              0x10000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC_MASK                                      0x40000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN_MASK                                          0x80000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL5
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD__SHIFT                                               0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE__SHIFT                                              0x1
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH__SHIFT                                             0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ__SHIFT                                         0x6
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT__SHIFT                                      0x7
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD__SHIFT                                               0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE__SHIFT                                              0x9
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH__SHIFT                                             0xc
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ__SHIFT                                         0xe
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT__SHIFT                                      0xf
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD__SHIFT                                               0x10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE__SHIFT                                              0x11
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH__SHIFT                                             0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ__SHIFT                                         0x16
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT__SHIFT                                      0x17
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD__SHIFT                                               0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE__SHIFT                                              0x19
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH__SHIFT                                             0x1c
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ__SHIFT                                         0x1e
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT__SHIFT                                      0x1f
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD_MASK                                                 0x00000001L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE_MASK                                                0x0000000EL
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH_MASK                                               0x00000030L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ_MASK                                           0x00000040L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT_MASK                                        0x00000080L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD_MASK                                                 0x00000100L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE_MASK                                                0x00000E00L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH_MASK                                               0x00003000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ_MASK                                           0x00004000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT_MASK                                        0x00008000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD_MASK                                                 0x00010000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE_MASK                                                0x000E0000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH_MASK                                               0x00300000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ_MASK                                           0x00400000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT_MASK                                        0x00800000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD_MASK                                                 0x01000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE_MASK                                                0x0E000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH_MASK                                               0x30000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ_MASK                                           0x40000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT_MASK                                        0x80000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL6
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE__SHIFT                                            0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN__SHIFT                                           0x2
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE__SHIFT                                            0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN__SHIFT                                           0x6
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE__SHIFT                                            0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN__SHIFT                                           0xa
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE__SHIFT                                            0xc
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN__SHIFT                                           0xe
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                                0x10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                            0x11
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                        0x12
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN__SHIFT                                            0x13
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ__SHIFT                                           0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_MASK                                              0x00000003L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_MASK                                             0x00000004L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_MASK                                              0x00000030L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_MASK                                             0x00000040L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_MASK                                              0x00000300L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_MASK                                             0x00000400L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_MASK                                              0x00003000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_MASK                                             0x00004000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                                  0x00010000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                              0x00020000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                          0x00040000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_MASK                                              0x00080000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_MASK                                             0x00100000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL7
+#define RDPCSTX3_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN__SHIFT                                       0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT__SHIFT                                      0x10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN_MASK                                         0x0000FFFFL
+#define RDPCSTX3_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT_MASK                                        0xFFFF0000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK__SHIFT                                        0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK_MASK                                          0x000FFFFFL
+//RDPCSTX3_RDPCSTX_PHY_CNTL9
+#define RDPCSTX3_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE__SHIFT                                    0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD__SHIFT                                   0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE_MASK                                      0x001FFFFFL
+#define RDPCSTX3_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD_MASK                                     0x01000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM__SHIFT                                      0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM_MASK                                        0x0000FFFFL
+//RDPCSTX3_RDPCSTX_PHY_CNTL11
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER__SHIFT                                     0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV__SHIFT                                     0x10
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV__SHIFT                                    0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV__SHIFT                           0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER_MASK                                       0x0000FFF0L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV_MASK                                       0x00070000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV_MASK                                      0x00700000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV_MASK                             0x03000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL12
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN__SHIFT                                    0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN__SHIFT                                   0x2
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV__SHIFT                                     0x4
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE__SHIFT                                          0x7
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN__SHIFT                                         0x8
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN_MASK                                      0x00000001L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN_MASK                                     0x00000004L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV_MASK                                       0x00000070L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE_MASK                                            0x00000080L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN_MASK                                           0x00000100L
+//RDPCSTX3_RDPCSTX_PHY_CNTL13
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER__SHIFT                                 0x14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN__SHIFT                                     0x1c
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN__SHIFT                                       0x1d
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE__SHIFT                               0x1e
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER_MASK                                   0x0FF00000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN_MASK                                       0x10000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN_MASK                                         0x20000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE_MASK                                 0x40000000L
+//RDPCSTX3_RDPCSTX_PHY_CNTL14
+#define RDPCSTX3_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE__SHIFT                                      0x0
+#define RDPCSTX3_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN__SHIFT                                       0x18
+#define RDPCSTX3_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN__SHIFT                                        0x1c
+#define RDPCSTX3_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE_MASK                                        0x00000001L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN_MASK                                         0x01000000L
+#define RDPCSTX3_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN_MASK                                          0x10000000L
+//RDPCSTX3_RDPCSTX_PHY_FUSE0
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I__SHIFT                                             0x12
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO__SHIFT                                        0x14
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I_MASK                                               0x000C0000L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO_MASK                                          0x00300000L
+//RDPCSTX3_RDPCSTX_PHY_FUSE1
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT__SHIFT                                          0x12
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP__SHIFT                                         0x19
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT_MASK                                            0x01FC0000L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP_MASK                                           0xFE000000L
+//RDPCSTX3_RDPCSTX_PHY_FUSE2
+#define RDPCSTX3_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX3_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX3_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX3_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX3_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST_MASK                                             0x0003F000L
+//RDPCSTX3_RDPCSTX_PHY_FUSE3
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE__SHIFT                                             0x12
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE__SHIFT                                                0x18
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE_MASK                                               0x00FC0000L
+#define RDPCSTX3_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE_MASK                                                  0x03000000L
+//RDPCSTX3_RDPCSTX_PHY_RX_LD_VAL
+#define RDPCSTX3_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL__SHIFT                                        0x0
+#define RDPCSTX3_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL__SHIFT                                        0x8
+#define RDPCSTX3_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL_MASK                                          0x0000007FL
+#define RDPCSTX3_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL_MASK                                          0x001FFF00L
+//RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED__SHIFT                         0x0
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED__SHIFT                       0x1
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED__SHIFT                       0x2
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED__SHIFT                       0x3
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED__SHIFT                           0x4
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED__SHIFT                           0x5
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED__SHIFT                         0x8
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED__SHIFT                       0x9
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED__SHIFT                       0xa
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED__SHIFT                       0xb
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED__SHIFT                           0xc
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED__SHIFT                           0xd
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED__SHIFT                         0x10
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED__SHIFT                       0x11
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED__SHIFT                       0x12
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED__SHIFT                       0x13
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED__SHIFT                           0x14
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED__SHIFT                           0x15
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED__SHIFT                         0x18
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED__SHIFT                       0x19
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED__SHIFT                       0x1a
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED__SHIFT                       0x1b
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED__SHIFT                           0x1c
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED__SHIFT                           0x1d
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED_MASK                           0x00000001L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED_MASK                         0x00000002L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED_MASK                         0x00000004L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED_MASK                         0x00000008L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED_MASK                             0x00000010L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED_MASK                             0x00000020L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED_MASK                           0x00000100L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED_MASK                         0x00000200L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED_MASK                         0x00000400L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED_MASK                         0x00000800L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED_MASK                             0x00001000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED_MASK                             0x00002000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED_MASK                           0x00010000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED_MASK                         0x00020000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED_MASK                         0x00040000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED_MASK                         0x00080000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED_MASK                             0x00100000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED_MASK                             0x00200000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED_MASK                           0x01000000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED_MASK                         0x02000000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED_MASK                         0x04000000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED_MASK                         0x08000000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED_MASK                             0x10000000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED_MASK                             0x20000000L
+//RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED__SHIFT                        0x0
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED__SHIFT                       0x2
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED__SHIFT                        0x4
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED__SHIFT                       0x6
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED__SHIFT                        0x8
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED__SHIFT                       0xa
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED__SHIFT                        0xc
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED__SHIFT                       0xe
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED__SHIFT                            0x10
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED__SHIFT                        0x11
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED__SHIFT                    0x12
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED__SHIFT                        0x13
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED__SHIFT                       0x14
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED_MASK                          0x00000003L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED_MASK                         0x00000004L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED_MASK                          0x00000030L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED_MASK                         0x00000040L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED_MASK                          0x00000300L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED_MASK                         0x00000400L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED_MASK                          0x00003000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED_MASK                         0x00004000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED_MASK                              0x00010000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED_MASK                          0x00020000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED_MASK                      0x00040000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED_MASK                          0x00080000L
+#define RDPCSTX3_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED_MASK                         0x00100000L
+//RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG
+#define RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS__SHIFT                                  0x0
+#define RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED__SHIFT                                0x4
+#define RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE__SHIFT                                  0x8
+#define RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS_MASK                                    0x00000001L
+#define RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED_MASK                                  0x00000010L
+#define RDPCSTX3_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE_MASK                                    0x0000FF00L
+
+
+// addressBlock: dpcssys_dpcssys_cr3_dispdec
+//DPCSSYS_CR3_DPCSSYS_CR_ADDR
+#define DPCSSYS_CR3_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                  0x0
+#define DPCSSYS_CR3_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                    0x0000FFFFL
+//DPCSSYS_CR3_DPCSSYS_CR_DATA
+#define DPCSSYS_CR3_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                  0x0
+#define DPCSSYS_CR3_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                    0x0000FFFFL
+
+
+// addressBlock: dpcssys_dpcs0_dpcsrx_dispdec
+//DPCSRX_PHY_CNTL
+#define DPCSRX_PHY_CNTL__DPCS_PHY_RESET__SHIFT                                                                0x0
+#define DPCSRX_PHY_CNTL__DPCS_PHY_RESET_MASK                                                                  0x00000001L
+//DPCSRX_RX_CLOCK_CNTL
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_GATE_DIS__SHIFT                                                  0x0
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_EN__SHIFT                                                        0x1
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_SEL__SHIFT                                                       0x2
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_CLOCK_ON__SHIFT                                                  0x4
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX0_GATE_DIS__SHIFT                                                 0x10
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX0_EN__SHIFT                                                       0x11
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX0_CLOCK_ON__SHIFT                                                 0x12
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX1_GATE_DIS__SHIFT                                                 0x14
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX1_EN__SHIFT                                                       0x15
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX1_CLOCK_ON__SHIFT                                                 0x16
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX2_GATE_DIS__SHIFT                                                 0x18
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX2_EN__SHIFT                                                       0x19
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX2_CLOCK_ON__SHIFT                                                 0x1a
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX3_GATE_DIS__SHIFT                                                 0x1c
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX3_EN__SHIFT                                                       0x1d
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX3_CLOCK_ON__SHIFT                                                 0x1e
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_GATE_DIS_MASK                                                    0x00000001L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_EN_MASK                                                          0x00000002L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_SEL_MASK                                                         0x0000000CL
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX_CLOCK_ON_MASK                                                    0x00000010L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX0_GATE_DIS_MASK                                                   0x00010000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX0_EN_MASK                                                         0x00020000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX0_CLOCK_ON_MASK                                                   0x00040000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX1_GATE_DIS_MASK                                                   0x00100000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX1_EN_MASK                                                         0x00200000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX1_CLOCK_ON_MASK                                                   0x00400000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX2_GATE_DIS_MASK                                                   0x01000000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX2_EN_MASK                                                         0x02000000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX2_CLOCK_ON_MASK                                                   0x04000000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX3_GATE_DIS_MASK                                                   0x10000000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX3_EN_MASK                                                         0x20000000L
+#define DPCSRX_RX_CLOCK_CNTL__DPCS_SYMCLK_RX3_CLOCK_ON_MASK                                                   0x40000000L
+//DPCSRX_RX_CNTL
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE0_EN__SHIFT                                                               0x0
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE1_EN__SHIFT                                                               0x1
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE2_EN__SHIFT                                                               0x2
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE3_EN__SHIFT                                                               0x3
+#define DPCSRX_RX_CNTL__DPCS_RX_FIFO_EN__SHIFT                                                                0x4
+#define DPCSRX_RX_CNTL__DPCS_RX_FIFO_START__SHIFT                                                             0x5
+#define DPCSRX_RX_CNTL__DPCS_RX_FIFO_RD_START_DELAY__SHIFT                                                    0x8
+#define DPCSRX_RX_CNTL__DPCS_RX_SOFT_RESET__SHIFT                                                             0x1f
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE0_EN_MASK                                                                 0x00000001L
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE1_EN_MASK                                                                 0x00000002L
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE2_EN_MASK                                                                 0x00000004L
+#define DPCSRX_RX_CNTL__DPCS_RX_LANE3_EN_MASK                                                                 0x00000008L
+#define DPCSRX_RX_CNTL__DPCS_RX_FIFO_EN_MASK                                                                  0x00000010L
+#define DPCSRX_RX_CNTL__DPCS_RX_FIFO_START_MASK                                                               0x00000020L
+#define DPCSRX_RX_CNTL__DPCS_RX_FIFO_RD_START_DELAY_MASK                                                      0x00000F00L
+#define DPCSRX_RX_CNTL__DPCS_RX_SOFT_RESET_MASK                                                               0x80000000L
+//DPCSRX_CBUS_CNTL
+#define DPCSRX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                                       0x0
+#define DPCSRX_CBUS_CNTL__DPCS_PHY_MASTER_REQ_DELAY__SHIFT                                                    0x8
+#define DPCSRX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                         0x1f
+#define DPCSRX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                         0x0000000FL
+#define DPCSRX_CBUS_CNTL__DPCS_PHY_MASTER_REQ_DELAY_MASK                                                      0x0000FF00L
+#define DPCSRX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                           0x80000000L
+//DPCSRX_REG_ERROR_STATUS
+#define DPCSRX_REG_ERROR_STATUS__DPCS_REG_FIFO_OVERFLOW__SHIFT                                                0x0
+#define DPCSRX_REG_ERROR_STATUS__DPCS_REG_ERROR_CLR__SHIFT                                                    0x1
+#define DPCSRX_REG_ERROR_STATUS__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                              0x4
+#define DPCSRX_REG_ERROR_STATUS__DPCS_REG_FIFO_OVERFLOW_MASK                                                  0x00000001L
+#define DPCSRX_REG_ERROR_STATUS__DPCS_REG_ERROR_CLR_MASK                                                      0x00000002L
+#define DPCSRX_REG_ERROR_STATUS__DPCS_REG_FIFO_ERROR_MASK_MASK                                                0x00000010L
+//DPCSRX_RX_ERROR_STATUS
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX0_FIFO_ERROR__SHIFT                                                    0x0
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX1_FIFO_ERROR__SHIFT                                                    0x1
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX2_FIFO_ERROR__SHIFT                                                    0x2
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX3_FIFO_ERROR__SHIFT                                                    0x3
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX_ERROR_CLR__SHIFT                                                      0x8
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX_FIFO_ERROR_MASK__SHIFT                                                0xc
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX0_FIFO_ERROR_MASK                                                      0x00000001L
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX1_FIFO_ERROR_MASK                                                      0x00000002L
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX2_FIFO_ERROR_MASK                                                      0x00000004L
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX3_FIFO_ERROR_MASK                                                      0x00000008L
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX_ERROR_CLR_MASK                                                        0x00000100L
+#define DPCSRX_RX_ERROR_STATUS__DPCS_RX_FIFO_ERROR_MASK_MASK                                                  0x00001000L
+//DPCSRX_INDEX_MODE_ADDR
+#define DPCSRX_INDEX_MODE_ADDR__DPCS_INDEX_MODE_ADDR__SHIFT                                                   0x0
+#define DPCSRX_INDEX_MODE_ADDR__DPCS_INDEX_MODE_ADDR_MASK                                                     0x0003FFFFL
+//DPCSRX_INDEX_MODE_DATA
+#define DPCSRX_INDEX_MODE_DATA__DPCS_INDEX_MODE_DATA__SHIFT                                                   0x0
+#define DPCSRX_INDEX_MODE_DATA__DPCS_INDEX_MODE_DATA_MASK                                                     0xFFFFFFFFL
+//DPCSRX_DEBUG_CONFIG
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                               0x0
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                                       0x1
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_RX_SYMCLK_SEL__SHIFT                                                    0x6
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_BLOCK_SEL__SHIFT                                                        0xb
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                         0xe
+#define DPCSRX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                                  0x10
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                                 0x00000001L
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                         0x0000000EL
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_RX_SYMCLK_SEL_MASK                                                      0x000000C0L
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_BLOCK_SEL_MASK                                                          0x00003800L
+#define DPCSRX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                           0x00004000L
+#define DPCSRX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                                    0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx4_dispdec
+//DPCSTX4_DPCSTX_TX_CLOCK_CNTL
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS__SHIFT                                             0x0
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN__SHIFT                                                   0x1
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON__SHIFT                                             0x2
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0x3
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS_MASK                                               0x00000001L
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN_MASK                                                     0x00000002L
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON_MASK                                               0x00000004L
+#define DPCSTX4_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000008L
+//DPCSTX4_DPCSTX_TX_CNTL
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ__SHIFT                                                 0xc
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING__SHIFT                                             0xd
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP__SHIFT                                                      0xe
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT__SHIFT                                              0xf
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ_MASK                                                   0x00001000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING_MASK                                               0x00002000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP_MASK                                                        0x00004000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT_MASK                                                0x00008000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define DPCSTX4_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//DPCSTX4_DPCSTX_CBUS_CNTL
+#define DPCSTX4_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                               0x0
+#define DPCSTX4_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                 0x1f
+#define DPCSTX4_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                 0x000000FFL
+#define DPCSTX4_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                   0x80000000L
+//DPCSTX4_DPCSTX_INTERRUPT_CNTL
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW__SHIFT                                          0x0
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR__SHIFT                                              0x1
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                        0x4
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR__SHIFT                                             0x8
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR__SHIFT                                             0x9
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR__SHIFT                                             0xa
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR__SHIFT                                             0xb
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR__SHIFT                                               0xc
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK__SHIFT                                         0x10
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK__SHIFT                                             0x14
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW_MASK                                            0x00000001L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR_MASK                                                0x00000002L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK_MASK                                          0x00000010L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR_MASK                                               0x00000100L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR_MASK                                               0x00000200L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR_MASK                                               0x00000400L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR_MASK                                               0x00000800L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR_MASK                                                 0x00001000L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK_MASK                                           0x00010000L
+#define DPCSTX4_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK_MASK                                               0x00100000L
+//DPCSTX4_DPCSTX_PLL_UPDATE_ADDR
+#define DPCSTX4_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR__SHIFT                                           0x0
+#define DPCSTX4_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR_MASK                                             0x0003FFFFL
+//DPCSTX4_DPCSTX_PLL_UPDATE_DATA
+#define DPCSTX4_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA__SHIFT                                           0x0
+#define DPCSTX4_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA_MASK                                             0xFFFFFFFFL
+//DPCSTX4_DPCSTX_DEBUG_CONFIG
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                       0x0
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                               0x1
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL__SHIFT                                            0x4
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL__SHIFT                                       0x8
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                 0xe
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                          0x10
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                         0x00000001L
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                 0x0000000EL
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL_MASK                                              0x00000070L
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL_MASK                                         0x00000700L
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                   0x00004000L
+#define DPCSTX4_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                            0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx4_dispdec
+//RDPCSTX4_RDPCSTX_CNTL
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET__SHIFT                                                   0x0
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET__SHIFT                                                   0x4
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN__SHIFT                                                  0xc
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN__SHIFT                                                  0xd
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN__SHIFT                                                  0xe
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN__SHIFT                                                  0xf
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN__SHIFT                                              0x18
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN__SHIFT                                       0x19
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS__SHIFT                                                0x1a
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET_MASK                                                     0x00000001L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET_MASK                                                     0x00000010L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN_MASK                                                    0x00001000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN_MASK                                                    0x00002000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN_MASK                                                    0x00004000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN_MASK                                                    0x00008000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN_MASK                                                0x01000000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN_MASK                                         0x02000000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS_MASK                                                  0x04000000L
+#define RDPCSTX4_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//RDPCSTX4_RDPCSTX_CLOCK_CNTL
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN__SHIFT                                               0x0
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN__SHIFT                                          0x4
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN__SHIFT                                          0x5
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN__SHIFT                                          0x6
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN__SHIFT                                          0x7
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS__SHIFT                                        0x8
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN__SHIFT                                              0x9
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0xa
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS__SHIFT                                            0xc
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN__SHIFT                                                  0xd
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON__SHIFT                                            0xe
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS__SHIFT                                              0x10
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN_MASK                                                 0x00000001L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN_MASK                                            0x00000010L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN_MASK                                            0x00000020L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN_MASK                                            0x00000040L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN_MASK                                            0x00000080L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS_MASK                                          0x00000100L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN_MASK                                                0x00000200L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000400L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS_MASK                                              0x00001000L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN_MASK                                                    0x00002000L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON_MASK                                              0x00004000L
+#define RDPCSTX4_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS_MASK                                                0x00010000L
+//RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW__SHIFT                                    0x0
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE__SHIFT                                 0x1
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE__SHIFT                                   0x2
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR__SHIFT                                       0x4
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR__SHIFT                                       0x5
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR__SHIFT                                       0x6
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR__SHIFT                                       0x7
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR__SHIFT                                        0x8
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR__SHIFT                             0x9
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR__SHIFT                               0xa
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR__SHIFT                                         0xc
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK__SHIFT                                  0x10
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK__SHIFT                            0x11
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK__SHIFT                              0x12
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK__SHIFT                                   0x14
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW_MASK                                      0x00000001L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK                                   0x00000002L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK                                     0x00000004L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR_MASK                                         0x00000010L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR_MASK                                         0x00000020L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR_MASK                                         0x00000040L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR_MASK                                         0x00000080L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR_MASK                                          0x00000100L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR_MASK                               0x00000200L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR_MASK                                 0x00000400L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR_MASK                                           0x00001000L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK_MASK                                    0x00010000L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK_MASK                              0x00020000L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK_MASK                                0x00040000L
+#define RDPCSTX4_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK_MASK                                     0x00100000L
+//RDPCSTX4_RDPCSTX_PLL_UPDATE_DATA
+#define RDPCSTX4_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA__SHIFT                                        0x0
+#define RDPCSTX4_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA_MASK                                          0x00000001L
+//RDPCSTX4_RDPCS_TX_CR_ADDR
+#define RDPCSTX4_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                    0x0
+#define RDPCSTX4_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                      0x0000FFFFL
+//RDPCSTX4_RDPCS_TX_CR_DATA
+#define RDPCSTX4_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                    0x0
+#define RDPCSTX4_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                      0x0000FFFFL
+//RDPCSTX4_RDPCS_TX_SRAM_CNTL
+#define RDPCSTX4_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS__SHIFT                                                 0x14
+#define RDPCSTX4_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE__SHIFT                                               0x18
+#define RDPCSTX4_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE__SHIFT                                           0x1c
+#define RDPCSTX4_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS_MASK                                                   0x00100000L
+#define RDPCSTX4_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE_MASK                                                 0x03000000L
+#define RDPCSTX4_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE_MASK                                             0x30000000L
+//RDPCSTX4_RDPCSTX_MEM_POWER_CTRL
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES__SHIFT                                           0x0
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES__SHIFT                                    0xc
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1__SHIFT                                  0x1a
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2__SHIFT                                  0x1b
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1__SHIFT                                   0x1c
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2__SHIFT                                   0x1d
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM__SHIFT                                         0x1e
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES_MASK                                             0x00000FFFL
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES_MASK                                      0x03FFF000L
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1_MASK                                    0x04000000L
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2_MASK                                    0x08000000L
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1_MASK                                     0x10000000L
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2_MASK                                     0x20000000L
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM_MASK                                           0x40000000L
+//RDPCSTX4_RDPCSTX_MEM_POWER_CTRL2
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF__SHIFT                                    0x0
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO__SHIFT                                    0x2
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF_MASK                                      0x00000003L
+#define RDPCSTX4_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO_MASK                                      0x00000004L
+//RDPCSTX4_RDPCSTX_SCRATCH
+#define RDPCSTX4_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH__SHIFT                                                      0x0
+#define RDPCSTX4_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH_MASK                                                        0xFFFFFFFFL
+//RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG__SHIFT                      0x0
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS__SHIFT              0x4
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE__SHIFT                      0x8
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG_MASK                        0x00000001L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS_MASK                0x00000010L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE_MASK                        0x0000FF00L
+//RDPCSTX4_RDPCSTX_DEBUG_CONFIG
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN__SHIFT                                                    0x0
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT__SHIFT                                        0x4
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP__SHIFT                                        0x7
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK__SHIFT                                          0x8
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE__SHIFT                                       0xf
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX__SHIFT                                          0x10
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT__SHIFT                                              0x18
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN_MASK                                                      0x00000001L
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT_MASK                                          0x00000070L
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP_MASK                                          0x00000080L
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK_MASK                                            0x00001F00L
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE_MASK                                         0x00008000L
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX_MASK                                            0x00FF0000L
+#define RDPCSTX4_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MASK                                                0xFF000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT                                                    0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET__SHIFT                                            0x1
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N__SHIFT                                          0x2
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN__SHIFT                                           0x3
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT__SHIFT                                                  0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE__SHIFT                                          0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE__SHIFT                                                0x9
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL__SHIFT                                            0xe
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ__SHIFT                                                0x11
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK__SHIFT                                                0x12
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL__SHIFT                                              0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL__SHIFT                                               0x15
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN__SHIFT                                            0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT__SHIFT                                        0x19
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE__SHIFT                                               0x1c
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE__SHIFT                                             0x1d
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS__SHIFT                                                  0x1f
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK                                                      0x00000001L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET_MASK                                              0x00000002L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N_MASK                                            0x00000004L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN_MASK                                             0x00000008L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT_MASK                                                    0x00000030L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE_MASK                                            0x00000100L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE_MASK                                                  0x00003E00L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL_MASK                                              0x0001C000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ_MASK                                                  0x00020000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK_MASK                                                  0x00040000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL_MASK                                                0x00100000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL_MASK                                                 0x00200000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN_MASK                                              0x01000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT_MASK                                          0x02000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE_MASK                                                 0x10000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE_MASK                                               0x20000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS_MASK                                                    0x80000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL1
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN__SHIFT                                               0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN__SHIFT                                               0x1
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE__SHIFT                                           0x2
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN__SHIFT                                               0x3
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE__SHIFT                                           0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET__SHIFT                                              0x5
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN__SHIFT                                               0x6
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE__SHIFT                                           0x7
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN_MASK                                                 0x00000001L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN_MASK                                                 0x00000002L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE_MASK                                             0x00000004L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN_MASK                                                 0x00000008L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE_MASK                                             0x00000010L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET_MASK                                                0x00000020L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN_MASK                                                 0x00000040L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE_MASK                                             0x00000080L
+//RDPCSTX4_RDPCSTX_PHY_CNTL2
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR__SHIFT                                                  0x3
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN__SHIFT                                 0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN__SHIFT                                 0x5
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN__SHIFT                                 0x6
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN__SHIFT                                 0x7
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN__SHIFT                                 0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN__SHIFT                                 0x9
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN__SHIFT                                 0xa
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN__SHIFT                                 0xb
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR_MASK                                                    0x00000008L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN_MASK                                   0x00000010L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN_MASK                                   0x00000020L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN_MASK                                   0x00000040L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN_MASK                                   0x00000080L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN_MASK                                   0x00000100L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN_MASK                                   0x00000200L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN_MASK                                   0x00000400L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN_MASK                                   0x00000800L
+//RDPCSTX4_RDPCSTX_PHY_CNTL3
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET__SHIFT                                             0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE__SHIFT                                           0x1
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY__SHIFT                                           0x2
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN__SHIFT                                           0x3
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ__SHIFT                                               0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK__SHIFT                                               0x5
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET__SHIFT                                             0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE__SHIFT                                           0x9
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY__SHIFT                                           0xa
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN__SHIFT                                           0xb
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ__SHIFT                                               0xc
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK__SHIFT                                               0xd
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET__SHIFT                                             0x10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE__SHIFT                                           0x11
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY__SHIFT                                           0x12
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN__SHIFT                                           0x13
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ__SHIFT                                               0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK__SHIFT                                               0x15
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET__SHIFT                                             0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE__SHIFT                                           0x19
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY__SHIFT                                           0x1a
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN__SHIFT                                           0x1b
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ__SHIFT                                               0x1c
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK__SHIFT                                               0x1d
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_MASK                                               0x00000001L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_MASK                                             0x00000002L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_MASK                                             0x00000004L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_MASK                                             0x00000008L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_MASK                                                 0x00000010L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_MASK                                                 0x00000020L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_MASK                                               0x00000100L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_MASK                                             0x00000200L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_MASK                                             0x00000400L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_MASK                                             0x00000800L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_MASK                                                 0x00001000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_MASK                                                 0x00002000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_MASK                                               0x00010000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_MASK                                             0x00020000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_MASK                                             0x00040000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_MASK                                             0x00080000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_MASK                                                 0x00100000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_MASK                                                 0x00200000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_MASK                                               0x01000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_MASK                                             0x02000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_MASK                                             0x04000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_MASK                                             0x08000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_MASK                                                 0x10000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_MASK                                                 0x20000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL__SHIFT                                         0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT__SHIFT                                            0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC__SHIFT                                    0x6
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN__SHIFT                                        0x7
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL__SHIFT                                         0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT__SHIFT                                            0xc
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC__SHIFT                                    0xe
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN__SHIFT                                        0xf
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL__SHIFT                                         0x10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT__SHIFT                                            0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC__SHIFT                                    0x16
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN__SHIFT                                        0x17
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL__SHIFT                                         0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT__SHIFT                                            0x1c
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC__SHIFT                                    0x1e
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN__SHIFT                                        0x1f
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL_MASK                                           0x00000007L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT_MASK                                              0x00000010L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC_MASK                                      0x00000040L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN_MASK                                          0x00000080L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL_MASK                                           0x00000700L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT_MASK                                              0x00001000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC_MASK                                      0x00004000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN_MASK                                          0x00008000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL_MASK                                           0x00070000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT_MASK                                              0x00100000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC_MASK                                      0x00400000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN_MASK                                          0x00800000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL_MASK                                           0x07000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT_MASK                                              0x10000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC_MASK                                      0x40000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN_MASK                                          0x80000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL5
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD__SHIFT                                               0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE__SHIFT                                              0x1
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH__SHIFT                                             0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ__SHIFT                                         0x6
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT__SHIFT                                      0x7
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD__SHIFT                                               0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE__SHIFT                                              0x9
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH__SHIFT                                             0xc
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ__SHIFT                                         0xe
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT__SHIFT                                      0xf
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD__SHIFT                                               0x10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE__SHIFT                                              0x11
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH__SHIFT                                             0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ__SHIFT                                         0x16
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT__SHIFT                                      0x17
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD__SHIFT                                               0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE__SHIFT                                              0x19
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH__SHIFT                                             0x1c
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ__SHIFT                                         0x1e
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT__SHIFT                                      0x1f
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD_MASK                                                 0x00000001L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE_MASK                                                0x0000000EL
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH_MASK                                               0x00000030L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ_MASK                                           0x00000040L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT_MASK                                        0x00000080L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD_MASK                                                 0x00000100L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE_MASK                                                0x00000E00L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH_MASK                                               0x00003000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ_MASK                                           0x00004000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT_MASK                                        0x00008000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD_MASK                                                 0x00010000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE_MASK                                                0x000E0000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH_MASK                                               0x00300000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ_MASK                                           0x00400000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT_MASK                                        0x00800000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD_MASK                                                 0x01000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE_MASK                                                0x0E000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH_MASK                                               0x30000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ_MASK                                           0x40000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT_MASK                                        0x80000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL6
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE__SHIFT                                            0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN__SHIFT                                           0x2
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE__SHIFT                                            0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN__SHIFT                                           0x6
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE__SHIFT                                            0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN__SHIFT                                           0xa
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE__SHIFT                                            0xc
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN__SHIFT                                           0xe
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                                0x10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                            0x11
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                        0x12
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN__SHIFT                                            0x13
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ__SHIFT                                           0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_MASK                                              0x00000003L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_MASK                                             0x00000004L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_MASK                                              0x00000030L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_MASK                                             0x00000040L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_MASK                                              0x00000300L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_MASK                                             0x00000400L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_MASK                                              0x00003000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_MASK                                             0x00004000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                                  0x00010000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                              0x00020000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                          0x00040000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_MASK                                              0x00080000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_MASK                                             0x00100000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL7
+#define RDPCSTX4_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN__SHIFT                                       0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT__SHIFT                                      0x10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN_MASK                                         0x0000FFFFL
+#define RDPCSTX4_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT_MASK                                        0xFFFF0000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK__SHIFT                                        0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK_MASK                                          0x000FFFFFL
+//RDPCSTX4_RDPCSTX_PHY_CNTL9
+#define RDPCSTX4_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE__SHIFT                                    0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD__SHIFT                                   0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE_MASK                                      0x001FFFFFL
+#define RDPCSTX4_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD_MASK                                     0x01000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM__SHIFT                                      0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM_MASK                                        0x0000FFFFL
+//RDPCSTX4_RDPCSTX_PHY_CNTL11
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER__SHIFT                                     0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV__SHIFT                                     0x10
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV__SHIFT                                    0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV__SHIFT                           0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER_MASK                                       0x0000FFF0L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV_MASK                                       0x00070000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV_MASK                                      0x00700000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV_MASK                             0x03000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL12
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN__SHIFT                                    0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN__SHIFT                                   0x2
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV__SHIFT                                     0x4
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE__SHIFT                                          0x7
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN__SHIFT                                         0x8
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN_MASK                                      0x00000001L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN_MASK                                     0x00000004L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV_MASK                                       0x00000070L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE_MASK                                            0x00000080L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN_MASK                                           0x00000100L
+//RDPCSTX4_RDPCSTX_PHY_CNTL13
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER__SHIFT                                 0x14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN__SHIFT                                     0x1c
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN__SHIFT                                       0x1d
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE__SHIFT                               0x1e
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER_MASK                                   0x0FF00000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN_MASK                                       0x10000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN_MASK                                         0x20000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE_MASK                                 0x40000000L
+//RDPCSTX4_RDPCSTX_PHY_CNTL14
+#define RDPCSTX4_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE__SHIFT                                      0x0
+#define RDPCSTX4_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN__SHIFT                                       0x18
+#define RDPCSTX4_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN__SHIFT                                        0x1c
+#define RDPCSTX4_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE_MASK                                        0x00000001L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN_MASK                                         0x01000000L
+#define RDPCSTX4_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN_MASK                                          0x10000000L
+//RDPCSTX4_RDPCSTX_PHY_FUSE0
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I__SHIFT                                             0x12
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO__SHIFT                                        0x14
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I_MASK                                               0x000C0000L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO_MASK                                          0x00300000L
+//RDPCSTX4_RDPCSTX_PHY_FUSE1
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT__SHIFT                                          0x12
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP__SHIFT                                         0x19
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT_MASK                                            0x01FC0000L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP_MASK                                           0xFE000000L
+//RDPCSTX4_RDPCSTX_PHY_FUSE2
+#define RDPCSTX4_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX4_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX4_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX4_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX4_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST_MASK                                             0x0003F000L
+//RDPCSTX4_RDPCSTX_PHY_FUSE3
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE__SHIFT                                             0x12
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE__SHIFT                                                0x18
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE_MASK                                               0x00FC0000L
+#define RDPCSTX4_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE_MASK                                                  0x03000000L
+//RDPCSTX4_RDPCSTX_PHY_RX_LD_VAL
+#define RDPCSTX4_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL__SHIFT                                        0x0
+#define RDPCSTX4_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL__SHIFT                                        0x8
+#define RDPCSTX4_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL_MASK                                          0x0000007FL
+#define RDPCSTX4_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL_MASK                                          0x001FFF00L
+//RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED__SHIFT                         0x0
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED__SHIFT                       0x1
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED__SHIFT                       0x2
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED__SHIFT                       0x3
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED__SHIFT                           0x4
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED__SHIFT                           0x5
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED__SHIFT                         0x8
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED__SHIFT                       0x9
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED__SHIFT                       0xa
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED__SHIFT                       0xb
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED__SHIFT                           0xc
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED__SHIFT                           0xd
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED__SHIFT                         0x10
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED__SHIFT                       0x11
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED__SHIFT                       0x12
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED__SHIFT                       0x13
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED__SHIFT                           0x14
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED__SHIFT                           0x15
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED__SHIFT                         0x18
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED__SHIFT                       0x19
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED__SHIFT                       0x1a
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED__SHIFT                       0x1b
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED__SHIFT                           0x1c
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED__SHIFT                           0x1d
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED_MASK                           0x00000001L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED_MASK                         0x00000002L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED_MASK                         0x00000004L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED_MASK                         0x00000008L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED_MASK                             0x00000010L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED_MASK                             0x00000020L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED_MASK                           0x00000100L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED_MASK                         0x00000200L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED_MASK                         0x00000400L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED_MASK                         0x00000800L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED_MASK                             0x00001000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED_MASK                             0x00002000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED_MASK                           0x00010000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED_MASK                         0x00020000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED_MASK                         0x00040000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED_MASK                         0x00080000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED_MASK                             0x00100000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED_MASK                             0x00200000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED_MASK                           0x01000000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED_MASK                         0x02000000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED_MASK                         0x04000000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED_MASK                         0x08000000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED_MASK                             0x10000000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED_MASK                             0x20000000L
+//RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED__SHIFT                        0x0
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED__SHIFT                       0x2
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED__SHIFT                        0x4
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED__SHIFT                       0x6
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED__SHIFT                        0x8
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED__SHIFT                       0xa
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED__SHIFT                        0xc
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED__SHIFT                       0xe
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED__SHIFT                            0x10
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED__SHIFT                        0x11
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED__SHIFT                    0x12
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED__SHIFT                        0x13
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED__SHIFT                       0x14
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED_MASK                          0x00000003L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED_MASK                         0x00000004L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED_MASK                          0x00000030L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED_MASK                         0x00000040L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED_MASK                          0x00000300L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED_MASK                         0x00000400L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED_MASK                          0x00003000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED_MASK                         0x00004000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED_MASK                              0x00010000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED_MASK                          0x00020000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED_MASK                      0x00040000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED_MASK                          0x00080000L
+#define RDPCSTX4_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED_MASK                         0x00100000L
+//RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG
+#define RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS__SHIFT                                  0x0
+#define RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED__SHIFT                                0x4
+#define RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE__SHIFT                                  0x8
+#define RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS_MASK                                    0x00000001L
+#define RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED_MASK                                  0x00000010L
+#define RDPCSTX4_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE_MASK                                    0x0000FF00L
+
+
+// addressBlock: dpcssys_dpcssys_cr4_dispdec
+//DPCSSYS_CR4_DPCSSYS_CR_ADDR
+#define DPCSSYS_CR4_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                  0x0
+#define DPCSSYS_CR4_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                    0x0000FFFFL
+//DPCSSYS_CR4_DPCSSYS_CR_DATA
+#define DPCSSYS_CR4_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                  0x0
+#define DPCSSYS_CR4_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                    0x0000FFFFL
+
+
+// addressBlock: dpcssys_dpcs0_dpcstx5_dispdec
+//DPCSTX5_DPCSTX_TX_CLOCK_CNTL
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS__SHIFT                                             0x0
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN__SHIFT                                                   0x1
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON__SHIFT                                             0x2
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0x3
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_GATE_DIS_MASK                                               0x00000001L
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_EN_MASK                                                     0x00000002L
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_CLOCK_ON_MASK                                               0x00000004L
+#define DPCSTX5_DPCSTX_TX_CLOCK_CNTL__DPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000008L
+//DPCSTX5_DPCSTX_TX_CNTL
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ__SHIFT                                                 0xc
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING__SHIFT                                             0xd
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP__SHIFT                                                      0xe
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT__SHIFT                                              0xf
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_REQ_MASK                                                   0x00001000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_PLL_UPDATE_PENDING_MASK                                               0x00002000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_DATA_SWAP_MASK                                                        0x00004000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_DATA_ORDER_INVERT_MASK                                                0x00008000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define DPCSTX5_DPCSTX_TX_CNTL__DPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//DPCSTX5_DPCSTX_CBUS_CNTL
+#define DPCSTX5_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY__SHIFT                                               0x0
+#define DPCSTX5_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET__SHIFT                                                 0x1f
+#define DPCSTX5_DPCSTX_CBUS_CNTL__DPCS_CBUS_WR_CMD_DELAY_MASK                                                 0x000000FFL
+#define DPCSTX5_DPCSTX_CBUS_CNTL__DPCS_CBUS_SOFT_RESET_MASK                                                   0x80000000L
+//DPCSTX5_DPCSTX_INTERRUPT_CNTL
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW__SHIFT                                          0x0
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR__SHIFT                                              0x1
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK__SHIFT                                        0x4
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR__SHIFT                                             0x8
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR__SHIFT                                             0x9
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR__SHIFT                                             0xa
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR__SHIFT                                             0xb
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR__SHIFT                                               0xc
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK__SHIFT                                         0x10
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK__SHIFT                                             0x14
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_OVERFLOW_MASK                                            0x00000001L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_REG_ERROR_CLR_MASK                                                0x00000002L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_REG_FIFO_ERROR_MASK_MASK                                          0x00000010L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX0_FIFO_ERROR_MASK                                               0x00000100L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX1_FIFO_ERROR_MASK                                               0x00000200L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX2_FIFO_ERROR_MASK                                               0x00000400L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX3_FIFO_ERROR_MASK                                               0x00000800L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX_ERROR_CLR_MASK                                                 0x00001000L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_TX_FIFO_ERROR_MASK_MASK                                           0x00010000L
+#define DPCSTX5_DPCSTX_INTERRUPT_CNTL__DPCS_INTERRUPT_MASK_MASK                                               0x00100000L
+//DPCSTX5_DPCSTX_PLL_UPDATE_ADDR
+#define DPCSTX5_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR__SHIFT                                           0x0
+#define DPCSTX5_DPCSTX_PLL_UPDATE_ADDR__DPCS_PLL_UPDATE_ADDR_MASK                                             0x0003FFFFL
+//DPCSTX5_DPCSTX_PLL_UPDATE_DATA
+#define DPCSTX5_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA__SHIFT                                           0x0
+#define DPCSTX5_DPCSTX_PLL_UPDATE_DATA__DPCS_PLL_UPDATE_DATA_MASK                                             0xFFFFFFFFL
+//DPCSTX5_DPCSTX_DEBUG_CONFIG
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN__SHIFT                                                       0x0
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL__SHIFT                                               0x1
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL__SHIFT                                            0x4
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL__SHIFT                                       0x8
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS__SHIFT                                                 0xe
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN__SHIFT                                          0x10
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_EN_MASK                                                         0x00000001L
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CFGCLK_SEL_MASK                                                 0x0000000EL
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_SEL_MASK                                              0x00000070L
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_TX_SYMCLK_DIV2_SEL_MASK                                         0x00000700L
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_DBG_CBUS_DIS_MASK                                                   0x00004000L
+#define DPCSTX5_DPCSTX_DEBUG_CONFIG__DPCS_TEST_DEBUG_WRITE_EN_MASK                                            0x00010000L
+
+
+// addressBlock: dpcssys_dpcs0_rdpcstx5_dispdec
+//RDPCSTX5_RDPCSTX_CNTL
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET__SHIFT                                                   0x0
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET__SHIFT                                                   0x4
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN__SHIFT                                                  0xc
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN__SHIFT                                                  0xd
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN__SHIFT                                                  0xe
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN__SHIFT                                                  0xf
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN__SHIFT                                                        0x10
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_START__SHIFT                                                     0x11
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY__SHIFT                                            0x14
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN__SHIFT                                              0x18
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN__SHIFT                                       0x19
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS__SHIFT                                                0x1a
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET__SHIFT                                                     0x1f
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_CBUS_SOFT_RESET_MASK                                                     0x00000001L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_SRAM_SOFT_RESET_MASK                                                     0x00000010L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE0_EN_MASK                                                    0x00001000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE1_EN_MASK                                                    0x00002000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE2_EN_MASK                                                    0x00004000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_LANE3_EN_MASK                                                    0x00008000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_EN_MASK                                                          0x00010000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_START_MASK                                                       0x00020000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_FIFO_RD_START_DELAY_MASK                                              0x00F00000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_CR_REGISTER_BLOCK_EN_MASK                                                0x01000000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_NON_DPALT_REGISTER_BLOCK_EN_MASK                                         0x02000000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_DPALT_BLOCK_STATUS_MASK                                                  0x04000000L
+#define RDPCSTX5_RDPCSTX_CNTL__RDPCS_TX_SOFT_RESET_MASK                                                       0x80000000L
+//RDPCSTX5_RDPCSTX_CLOCK_CNTL
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN__SHIFT                                               0x0
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN__SHIFT                                          0x4
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN__SHIFT                                          0x5
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN__SHIFT                                          0x6
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN__SHIFT                                          0x7
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS__SHIFT                                        0x8
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN__SHIFT                                              0x9
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON__SHIFT                                        0xa
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS__SHIFT                                            0xc
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN__SHIFT                                                  0xd
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON__SHIFT                                            0xe
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS__SHIFT                                              0x10
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_EXT_REFCLK_EN_MASK                                                 0x00000001L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX0_EN_MASK                                            0x00000010L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX1_EN_MASK                                            0x00000020L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX2_EN_MASK                                            0x00000040L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_TX3_EN_MASK                                            0x00000080L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_GATE_DIS_MASK                                          0x00000100L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_EN_MASK                                                0x00000200L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SYMCLK_DIV2_CLOCK_ON_MASK                                          0x00000400L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_GATE_DIS_MASK                                              0x00001000L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_EN_MASK                                                    0x00002000L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_CLOCK_ON_MASK                                              0x00004000L
+#define RDPCSTX5_RDPCSTX_CLOCK_CNTL__RDPCS_SRAMCLK_BYPASS_MASK                                                0x00010000L
+//RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW__SHIFT                                    0x0
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE__SHIFT                                 0x1
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE__SHIFT                                   0x2
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR__SHIFT                                       0x4
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR__SHIFT                                       0x5
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR__SHIFT                                       0x6
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR__SHIFT                                       0x7
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR__SHIFT                                        0x8
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR__SHIFT                             0x9
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR__SHIFT                               0xa
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR__SHIFT                                         0xc
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK__SHIFT                                  0x10
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK__SHIFT                            0x11
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK__SHIFT                              0x12
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK__SHIFT                                   0x14
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_OVERFLOW_MASK                                      0x00000001L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK                                   0x00000002L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK                                     0x00000004L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX0_FIFO_ERROR_MASK                                         0x00000010L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX1_FIFO_ERROR_MASK                                         0x00000020L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX2_FIFO_ERROR_MASK                                         0x00000040L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX3_FIFO_ERROR_MASK                                         0x00000080L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_ERROR_CLR_MASK                                          0x00000100L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_CLR_MASK                               0x00000200L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_CLR_MASK                                 0x00000400L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_ERROR_CLR_MASK                                           0x00001000L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_REG_FIFO_ERROR_MASK_MASK                                    0x00010000L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_DISABLE_TOGGLE_MASK_MASK                              0x00020000L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_DPALT_4LANE_TOGGLE_MASK_MASK                                0x00040000L
+#define RDPCSTX5_RDPCSTX_INTERRUPT_CONTROL__RDPCS_TX_FIFO_ERROR_MASK_MASK                                     0x00100000L
+//RDPCSTX5_RDPCSTX_PLL_UPDATE_DATA
+#define RDPCSTX5_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA__SHIFT                                        0x0
+#define RDPCSTX5_RDPCSTX_PLL_UPDATE_DATA__RDPCS_PLL_UPDATE_DATA_MASK                                          0x00000001L
+//RDPCSTX5_RDPCS_TX_CR_ADDR
+#define RDPCSTX5_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                    0x0
+#define RDPCSTX5_RDPCS_TX_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                      0x0000FFFFL
+//RDPCSTX5_RDPCS_TX_CR_DATA
+#define RDPCSTX5_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                    0x0
+#define RDPCSTX5_RDPCS_TX_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                      0x0000FFFFL
+//RDPCSTX5_RDPCS_TX_SRAM_CNTL
+#define RDPCSTX5_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS__SHIFT                                                 0x14
+#define RDPCSTX5_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE__SHIFT                                               0x18
+#define RDPCSTX5_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE__SHIFT                                           0x1c
+#define RDPCSTX5_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_DIS_MASK                                                   0x00100000L
+#define RDPCSTX5_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_FORCE_MASK                                                 0x03000000L
+#define RDPCSTX5_RDPCS_TX_SRAM_CNTL__RDPCS_MEM_PWR_PWR_STATE_MASK                                             0x30000000L
+//RDPCSTX5_RDPCSTX_MEM_POWER_CTRL
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES__SHIFT                                           0x0
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES__SHIFT                                    0xc
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1__SHIFT                                  0x1a
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2__SHIFT                                  0x1b
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1__SHIFT                                   0x1c
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2__SHIFT                                   0x1d
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM__SHIFT                                         0x1e
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_RM_FUSES_MASK                                             0x00000FFFL
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_FUSE_CUSTOM_RM_FUSES_MASK                                      0x03FFF000L
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC1_MASK                                    0x04000000L
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_PDP_BC2_MASK                                    0x08000000L
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC1_MASK                                     0x10000000L
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_MEM_POWER_CTRL_HD_BC2_MASK                                     0x20000000L
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL__RDPCS_LIVMIN_DIS_SRAM_MASK                                           0x40000000L
+//RDPCSTX5_RDPCSTX_MEM_POWER_CTRL2
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF__SHIFT                                    0x0
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO__SHIFT                                    0x2
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_POFF_MASK                                      0x00000003L
+#define RDPCSTX5_RDPCSTX_MEM_POWER_CTRL2__RDPCS_MEM_POWER_CTRL_FISO_MASK                                      0x00000004L
+//RDPCSTX5_RDPCSTX_SCRATCH
+#define RDPCSTX5_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH__SHIFT                                                      0x0
+#define RDPCSTX5_RDPCSTX_SCRATCH__RDPCSTX_SCRATCH_MASK                                                        0xFFFFFFFFL
+//RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG__SHIFT                      0x0
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS__SHIFT              0x4
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE__SHIFT                      0x8
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_DIS_BLOCK_REG_MASK                        0x00000001L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_FORCE_SYMCLK_DIV2_DIS_MASK                0x00000010L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_DIS_BLOCK_REG__RDPCS_DMCU_DPALT_CONTROL_SPARE_MASK                        0x0000FF00L
+//RDPCSTX5_RDPCSTX_DEBUG_CONFIG
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN__SHIFT                                                    0x0
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT__SHIFT                                        0x4
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP__SHIFT                                        0x7
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK__SHIFT                                          0x8
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE__SHIFT                                       0xf
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX__SHIFT                                          0x10
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT__SHIFT                                              0x18
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_EN_MASK                                                      0x00000001L
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_8BIT_MASK                                          0x00000070L
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_ASYNC_SWAP_MASK                                          0x00000080L
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_SEL_TEST_CLK_MASK                                            0x00001F00L
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_EXPIRE_MASK                                         0x00008000L
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MAX_MASK                                            0x00FF0000L
+#define RDPCSTX5_RDPCSTX_DEBUG_CONFIG__RDPCS_DBG_CR_COUNT_MASK                                                0xFF000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET__SHIFT                                                    0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET__SHIFT                                            0x1
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N__SHIFT                                          0x2
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN__SHIFT                                           0x3
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT__SHIFT                                                  0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE__SHIFT                                          0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE__SHIFT                                                0x9
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL__SHIFT                                            0xe
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ__SHIFT                                                0x11
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK__SHIFT                                                0x12
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL__SHIFT                                              0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL__SHIFT                                               0x15
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN__SHIFT                                            0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT__SHIFT                                        0x19
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE__SHIFT                                               0x1c
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE__SHIFT                                             0x1d
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS__SHIFT                                                  0x1f
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RESET_MASK                                                      0x00000001L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_PHY_RESET_MASK                                              0x00000002L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TCA_APB_RESET_N_MASK                                            0x00000004L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TEST_POWERDOWN_MASK                                             0x00000008L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_DTB_OUT_MASK                                                    0x00000030L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_HDMIMODE_ENABLE_MASK                                            0x00000100L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_RANGE_MASK                                                  0x00003E00L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_TX_VBOOST_LVL_MASK                                              0x0001C000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_REQ_MASK                                                  0x00020000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_RTUNE_ACK_MASK                                                  0x00040000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_PARA_SEL_MASK                                                0x00100000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_CR_MUX_SEL_MASK                                                 0x00200000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_EN_MASK                                              0x01000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_PHY_REF_CLKDET_RESULT_MASK                                          0x02000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_INIT_DONE_MASK                                                 0x10000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_EXT_LD_DONE_MASK                                               0x20000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL0__RDPCS_SRAM_BYPASS_MASK                                                    0x80000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL1
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN__SHIFT                                               0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN__SHIFT                                               0x1
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE__SHIFT                                           0x2
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN__SHIFT                                               0x3
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE__SHIFT                                           0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET__SHIFT                                              0x5
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN__SHIFT                                               0x6
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE__SHIFT                                           0x7
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PG_MODE_EN_MASK                                                 0x00000001L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_EN_MASK                                                 0x00000002L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PCS_PWR_STABLE_MASK                                             0x00000004L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_EN_MASK                                                 0x00000008L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_PMA_PWR_STABLE_MASK                                             0x00000010L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_DP_PG_RESET_MASK                                                0x00000020L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_EN_MASK                                                 0x00000040L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL1__RDPCS_PHY_ANA_PWR_STABLE_MASK                                             0x00000080L
+//RDPCSTX5_RDPCSTX_PHY_CNTL2
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR__SHIFT                                                  0x3
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN__SHIFT                                 0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN__SHIFT                                 0x5
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN__SHIFT                                 0x6
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN__SHIFT                                 0x7
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN__SHIFT                                 0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN__SHIFT                                 0x9
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN__SHIFT                                 0xa
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN__SHIFT                                 0xb
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP4_POR_MASK                                                    0x00000008L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_RX2TX_PAR_LB_EN_MASK                                   0x00000010L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_RX2TX_PAR_LB_EN_MASK                                   0x00000020L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_RX2TX_PAR_LB_EN_MASK                                   0x00000040L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_RX2TX_PAR_LB_EN_MASK                                   0x00000080L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE0_TX2RX_SER_LB_EN_MASK                                   0x00000100L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE1_TX2RX_SER_LB_EN_MASK                                   0x00000200L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE2_TX2RX_SER_LB_EN_MASK                                   0x00000400L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL2__RDPCS_PHY_DP_LANE3_TX2RX_SER_LB_EN_MASK                                   0x00000800L
+//RDPCSTX5_RDPCSTX_PHY_CNTL3
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET__SHIFT                                             0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE__SHIFT                                           0x1
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY__SHIFT                                           0x2
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN__SHIFT                                           0x3
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ__SHIFT                                               0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK__SHIFT                                               0x5
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET__SHIFT                                             0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE__SHIFT                                           0x9
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY__SHIFT                                           0xa
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN__SHIFT                                           0xb
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ__SHIFT                                               0xc
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK__SHIFT                                               0xd
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET__SHIFT                                             0x10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE__SHIFT                                           0x11
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY__SHIFT                                           0x12
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN__SHIFT                                           0x13
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ__SHIFT                                               0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK__SHIFT                                               0x15
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET__SHIFT                                             0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE__SHIFT                                           0x19
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY__SHIFT                                           0x1a
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN__SHIFT                                           0x1b
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ__SHIFT                                               0x1c
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK__SHIFT                                               0x1d
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_MASK                                               0x00000001L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_MASK                                             0x00000002L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_MASK                                             0x00000004L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_MASK                                             0x00000008L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_MASK                                                 0x00000010L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_MASK                                                 0x00000020L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_MASK                                               0x00000100L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_MASK                                             0x00000200L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_MASK                                             0x00000400L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_MASK                                             0x00000800L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_MASK                                                 0x00001000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_MASK                                                 0x00002000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_MASK                                               0x00010000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_MASK                                             0x00020000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_MASK                                             0x00040000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_MASK                                             0x00080000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_MASK                                                 0x00100000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_MASK                                                 0x00200000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_MASK                                               0x01000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_MASK                                             0x02000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_MASK                                             0x04000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_MASK                                             0x08000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_MASK                                                 0x10000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_MASK                                                 0x20000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL__SHIFT                                         0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT__SHIFT                                            0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC__SHIFT                                    0x6
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN__SHIFT                                        0x7
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL__SHIFT                                         0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT__SHIFT                                            0xc
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC__SHIFT                                    0xe
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN__SHIFT                                        0xf
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL__SHIFT                                         0x10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT__SHIFT                                            0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC__SHIFT                                    0x16
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN__SHIFT                                        0x17
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL__SHIFT                                         0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT__SHIFT                                            0x1c
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC__SHIFT                                    0x1e
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN__SHIFT                                        0x1f
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_TERM_CTRL_MASK                                           0x00000007L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_INVERT_MASK                                              0x00000010L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_BYPASS_EQ_CALC_MASK                                      0x00000040L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX0_HP_PROT_EN_MASK                                          0x00000080L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_TERM_CTRL_MASK                                           0x00000700L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_INVERT_MASK                                              0x00001000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_BYPASS_EQ_CALC_MASK                                      0x00004000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX1_HP_PROT_EN_MASK                                          0x00008000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_TERM_CTRL_MASK                                           0x00070000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_INVERT_MASK                                              0x00100000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_BYPASS_EQ_CALC_MASK                                      0x00400000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX2_HP_PROT_EN_MASK                                          0x00800000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_TERM_CTRL_MASK                                           0x07000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_INVERT_MASK                                              0x10000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_BYPASS_EQ_CALC_MASK                                      0x40000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL4__RDPCS_PHY_DP_TX3_HP_PROT_EN_MASK                                          0x80000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL5
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD__SHIFT                                               0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE__SHIFT                                              0x1
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH__SHIFT                                             0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ__SHIFT                                         0x6
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT__SHIFT                                      0x7
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD__SHIFT                                               0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE__SHIFT                                              0x9
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH__SHIFT                                             0xc
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ__SHIFT                                         0xe
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT__SHIFT                                      0xf
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD__SHIFT                                               0x10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE__SHIFT                                              0x11
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH__SHIFT                                             0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ__SHIFT                                         0x16
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT__SHIFT                                      0x17
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD__SHIFT                                               0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE__SHIFT                                              0x19
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH__SHIFT                                             0x1c
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ__SHIFT                                         0x1e
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT__SHIFT                                      0x1f
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_LPD_MASK                                                 0x00000001L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_RATE_MASK                                                0x0000000EL
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_WIDTH_MASK                                               0x00000030L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_REQ_MASK                                           0x00000040L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX0_DETRX_RESULT_MASK                                        0x00000080L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_LPD_MASK                                                 0x00000100L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_RATE_MASK                                                0x00000E00L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_WIDTH_MASK                                               0x00003000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_REQ_MASK                                           0x00004000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX1_DETRX_RESULT_MASK                                        0x00008000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_LPD_MASK                                                 0x00010000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_RATE_MASK                                                0x000E0000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_WIDTH_MASK                                               0x00300000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_REQ_MASK                                           0x00400000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX2_DETRX_RESULT_MASK                                        0x00800000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_LPD_MASK                                                 0x01000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_RATE_MASK                                                0x0E000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_WIDTH_MASK                                               0x30000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_REQ_MASK                                           0x40000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL5__RDPCS_PHY_DP_TX3_DETRX_RESULT_MASK                                        0x80000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL6
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE__SHIFT                                            0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN__SHIFT                                           0x2
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE__SHIFT                                            0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN__SHIFT                                           0x6
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE__SHIFT                                            0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN__SHIFT                                           0xa
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE__SHIFT                                            0xc
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN__SHIFT                                           0xe
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                                0x10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                            0x11
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                        0x12
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN__SHIFT                                            0x13
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ__SHIFT                                           0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_MASK                                              0x00000003L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_MASK                                             0x00000004L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_MASK                                              0x00000030L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_MASK                                             0x00000040L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_MASK                                              0x00000300L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_MASK                                             0x00000400L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_MASK                                              0x00003000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_MASK                                             0x00004000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                                  0x00010000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                              0x00020000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                          0x00040000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_MASK                                              0x00080000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_MASK                                             0x00100000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL7
+#define RDPCSTX5_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN__SHIFT                                       0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT__SHIFT                                      0x10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_DEN_MASK                                         0x0000FFFFL
+#define RDPCSTX5_RDPCSTX_PHY_CNTL7__RDPCS_PHY_DP_MPLLB_FRACN_QUOT_MASK                                        0xFFFF0000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK__SHIFT                                        0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL8__RDPCS_PHY_DP_MPLLB_SSC_PEAK_MASK                                          0x000FFFFFL
+//RDPCSTX5_RDPCSTX_PHY_CNTL9
+#define RDPCSTX5_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE__SHIFT                                    0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD__SHIFT                                   0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_STEPSIZE_MASK                                      0x001FFFFFL
+#define RDPCSTX5_RDPCSTX_PHY_CNTL9__RDPCS_PHY_DP_MPLLB_SSC_UP_SPREAD_MASK                                     0x01000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM__SHIFT                                      0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL10__RDPCS_PHY_DP_MPLLB_FRACN_REM_MASK                                        0x0000FFFFL
+//RDPCSTX5_RDPCSTX_PHY_CNTL11
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER__SHIFT                                     0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV__SHIFT                                     0x10
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV__SHIFT                                    0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV__SHIFT                           0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_MPLLB_MULTIPLIER_MASK                                       0x0000FFF0L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_DIV_MASK                                       0x00070000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_DP_REF_CLK_MPLLB_DIV_MASK                                      0x00700000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL11__RDPCS_PHY_HDMI_MPLLB_HDMI_PIXEL_CLK_DIV_MASK                             0x03000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL12
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN__SHIFT                                    0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN__SHIFT                                   0x2
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV__SHIFT                                     0x4
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE__SHIFT                                          0x7
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN__SHIFT                                         0x8
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_DIV5_CLK_EN_MASK                                      0x00000001L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_WORD_DIV2_EN_MASK                                     0x00000004L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_TX_CLK_DIV_MASK                                       0x00000070L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_STATE_MASK                                            0x00000080L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL12__RDPCS_PHY_DP_MPLLB_SSC_EN_MASK                                           0x00000100L
+//RDPCSTX5_RDPCSTX_PHY_CNTL13
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER__SHIFT                                 0x14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN__SHIFT                                     0x1c
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN__SHIFT                                       0x1d
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE__SHIFT                               0x1e
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_MULTIPLIER_MASK                                   0x0FF00000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_DIV_CLK_EN_MASK                                       0x10000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_FORCE_EN_MASK                                         0x20000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL13__RDPCS_PHY_DP_MPLLB_INIT_CAL_DISABLE_MASK                                 0x40000000L
+//RDPCSTX5_RDPCSTX_PHY_CNTL14
+#define RDPCSTX5_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE__SHIFT                                      0x0
+#define RDPCSTX5_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN__SHIFT                                       0x18
+#define RDPCSTX5_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN__SHIFT                                        0x1c
+#define RDPCSTX5_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_CAL_FORCE_MASK                                        0x00000001L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_FRACN_EN_MASK                                         0x01000000L
+#define RDPCSTX5_RDPCSTX_PHY_CNTL14__RDPCS_PHY_DP_MPLLB_PMIX_EN_MASK                                          0x10000000L
+//RDPCSTX5_RDPCSTX_PHY_FUSE0
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I__SHIFT                                             0x12
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO__SHIFT                                        0x14
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_TX0_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_V2I_MASK                                               0x000C0000L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE0__RDPCS_PHY_DP_MPLLB_FREQ_VCO_MASK                                          0x00300000L
+//RDPCSTX5_RDPCSTX_PHY_FUSE1
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT__SHIFT                                          0x12
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP__SHIFT                                         0x19
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_TX1_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_INT_MASK                                            0x01FC0000L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE1__RDPCS_PHY_DP_MPLLB_CP_PROP_MASK                                           0xFE000000L
+//RDPCSTX5_RDPCSTX_PHY_FUSE2
+#define RDPCSTX5_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX5_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX5_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX5_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX5_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE2__RDPCS_PHY_DP_TX2_EQ_POST_MASK                                             0x0003F000L
+//RDPCSTX5_RDPCSTX_PHY_FUSE3
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN__SHIFT                                           0x0
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE__SHIFT                                            0x6
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST__SHIFT                                           0xc
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE__SHIFT                                             0x12
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE__SHIFT                                                0x18
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_MAIN_MASK                                             0x0000003FL
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_PRE_MASK                                              0x00000FC0L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DP_TX3_EQ_POST_MASK                                             0x0003F000L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_FINETUNE_MASK                                               0x00FC0000L
+#define RDPCSTX5_RDPCSTX_PHY_FUSE3__RDPCS_PHY_DCO_RANGE_MASK                                                  0x03000000L
+//RDPCSTX5_RDPCSTX_PHY_RX_LD_VAL
+#define RDPCSTX5_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL__SHIFT                                        0x0
+#define RDPCSTX5_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL__SHIFT                                        0x8
+#define RDPCSTX5_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_REF_LD_VAL_MASK                                          0x0000007FL
+#define RDPCSTX5_RDPCSTX_PHY_RX_LD_VAL__RDPCS_PHY_RX_VCO_LD_VAL_MASK                                          0x001FFF00L
+//RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED__SHIFT                         0x0
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED__SHIFT                       0x1
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED__SHIFT                       0x2
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED__SHIFT                       0x3
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED__SHIFT                           0x4
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED__SHIFT                           0x5
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED__SHIFT                         0x8
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED__SHIFT                       0x9
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED__SHIFT                       0xa
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED__SHIFT                       0xb
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED__SHIFT                           0xc
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED__SHIFT                           0xd
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED__SHIFT                         0x10
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED__SHIFT                       0x11
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED__SHIFT                       0x12
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED__SHIFT                       0x13
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED__SHIFT                           0x14
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED__SHIFT                           0x15
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED__SHIFT                         0x18
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED__SHIFT                       0x19
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED__SHIFT                       0x1a
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED__SHIFT                       0x1b
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED__SHIFT                           0x1c
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED__SHIFT                           0x1d
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_RESET_RESERVED_MASK                           0x00000001L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DISABLE_RESERVED_MASK                         0x00000002L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_CLK_RDY_RESERVED_MASK                         0x00000004L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_DATA_EN_RESERVED_MASK                         0x00000008L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_REQ_RESERVED_MASK                             0x00000010L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX0_ACK_RESERVED_MASK                             0x00000020L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_RESET_RESERVED_MASK                           0x00000100L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DISABLE_RESERVED_MASK                         0x00000200L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_CLK_RDY_RESERVED_MASK                         0x00000400L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_DATA_EN_RESERVED_MASK                         0x00000800L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_REQ_RESERVED_MASK                             0x00001000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX1_ACK_RESERVED_MASK                             0x00002000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_RESET_RESERVED_MASK                           0x00010000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DISABLE_RESERVED_MASK                         0x00020000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_CLK_RDY_RESERVED_MASK                         0x00040000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_DATA_EN_RESERVED_MASK                         0x00080000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_REQ_RESERVED_MASK                             0x00100000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX2_ACK_RESERVED_MASK                             0x00200000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_RESET_RESERVED_MASK                           0x01000000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DISABLE_RESERVED_MASK                         0x02000000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_CLK_RDY_RESERVED_MASK                         0x04000000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_DATA_EN_RESERVED_MASK                         0x08000000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_REQ_RESERVED_MASK                             0x10000000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL3__RDPCS_PHY_DP_TX3_ACK_RESERVED_MASK                             0x20000000L
+//RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED__SHIFT                        0x0
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED__SHIFT                       0x2
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED__SHIFT                        0x4
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED__SHIFT                       0x6
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED__SHIFT                        0x8
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED__SHIFT                       0xa
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED__SHIFT                        0xc
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED__SHIFT                       0xe
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED__SHIFT                            0x10
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED__SHIFT                        0x11
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED__SHIFT                    0x12
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED__SHIFT                        0x13
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED__SHIFT                       0x14
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_PSTATE_RESERVED_MASK                          0x00000003L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX0_MPLL_EN_RESERVED_MASK                         0x00000004L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_PSTATE_RESERVED_MASK                          0x00000030L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX1_MPLL_EN_RESERVED_MASK                         0x00000040L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_PSTATE_RESERVED_MASK                          0x00000300L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX2_MPLL_EN_RESERVED_MASK                         0x00000400L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_PSTATE_RESERVED_MASK                          0x00003000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_TX3_MPLL_EN_RESERVED_MASK                         0x00004000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_RESERVED_MASK                              0x00010000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_RESERVED_MASK                          0x00020000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_RESERVED_MASK                      0x00040000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_EN_RESERVED_MASK                          0x00080000L
+#define RDPCSTX5_RDPCSTX_DMCU_DPALT_PHY_CNTL6__RDPCS_PHY_DP_REF_CLK_REQ_RESERVED_MASK                         0x00100000L
+//RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG
+#define RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS__SHIFT                                  0x0
+#define RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED__SHIFT                                0x4
+#define RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE__SHIFT                                  0x8
+#define RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG__RDPCS_ALLOW_DRIVER_ACCESS_MASK                                    0x00000001L
+#define RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DRIVER_ACCESS_BLOCKED_MASK                                  0x00000010L
+#define RDPCSTX5_RDPCSTX_DPALT_CONTROL_REG__RDPCS_DPALT_CONTROL_SPARE_MASK                                    0x0000FF00L
+
+
+// addressBlock: dpcssys_dpcssys_cr5_dispdec
+//DPCSSYS_CR5_DPCSSYS_CR_ADDR
+#define DPCSSYS_CR5_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR__SHIFT                                                  0x0
+#define DPCSSYS_CR5_DPCSSYS_CR_ADDR__RDPCS_TX_CR_ADDR_MASK                                                    0x0000FFFFL
+//DPCSSYS_CR5_DPCSSYS_CR_DATA
+#define DPCSSYS_CR5_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA__SHIFT                                                  0x0
+#define DPCSSYS_CR5_DPCSSYS_CR_DATA__RDPCS_TX_CR_DATA_MASK                                                    0x0000FFFFL
+
+#endif
index 2bfaaa8..d984c91 100644
 #define mmCOMPUTE_STATIC_THREAD_MGMT_SE2_BASE_IDX                                                      0
 #define mmCOMPUTE_STATIC_THREAD_MGMT_SE3                                                               0x0e1a
 #define mmCOMPUTE_STATIC_THREAD_MGMT_SE3_BASE_IDX                                                      0
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE4                                                               0x0e25
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE4_BASE_IDX                                                      0
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE5                                                               0x0e26
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE5_BASE_IDX                                                      0
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE6                                                               0x0e27
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE6_BASE_IDX                                                      0
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE7                                                               0x0e28
+#define mmCOMPUTE_STATIC_THREAD_MGMT_SE7_BASE_IDX                                                      0
 #define mmCOMPUTE_RESTART_X                                                                            0x0e1b
 #define mmCOMPUTE_RESTART_X_BASE_IDX                                                                   0
 #define mmCOMPUTE_RESTART_Y                                                                            0x0e1c
diff --git a/drivers/gpu/drm/amd/include/asic_reg/umc/umc_6_1_2_offset.h b/drivers/gpu/drm/amd/include/asic_reg/umc/umc_6_1_2_offset.h
new file mode 100644
index 0000000..03be415
--- /dev/null
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2019  Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef _umc_6_1_2_OFFSET_HEADER
+#define _umc_6_1_2_OFFSET_HEADER
+
+#define mmUMCCH0_0_EccErrCntSel_ARCT                                     0x0360
+#define mmUMCCH0_0_EccErrCntSel_ARCT_BASE_IDX                            1
+#define mmUMCCH0_0_EccErrCnt_ARCT                                        0x0361
+#define mmUMCCH0_0_EccErrCnt_ARCT_BASE_IDX                               1
+#define mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT                               0x03c2
+#define mmMCA_UMC_UMC0_MCUMC_STATUST0_ARCT_BASE_IDX                      1
+
+#endif
index dd7cbc0..7014651 100644
@@ -672,20 +672,6 @@ struct vram_usagebyfirmware_v2_1
   uint16_t  used_by_driver_in_kb; 
 };
 
-/* This is part of vram_usagebyfirmware_v2_1 */
-struct vram_reserve_block
-{
-       uint32_t start_address_in_kb;
-       uint16_t used_by_firmware_in_kb;
-       uint16_t used_by_driver_in_kb;
-};
-
-/* Definitions for constance */
-enum atomfirmware_internal_constants
-{
-       ONE_KiB = 0x400,
-       ONE_MiB = 0x100000,
-};
 
 /* 
   ***************************************************************************
index 5087d6b..c195575 100644
@@ -275,9 +275,6 @@ static int pp_dpm_load_fw(void *handle)
 {
        struct pp_hwmgr *hwmgr = handle;
 
-       if (!hwmgr->not_vf)
-               return 0;
-
        if (!hwmgr || !hwmgr->smumgr_funcs || !hwmgr->smumgr_funcs->start_smu)
                return -EINVAL;
 
@@ -930,9 +927,12 @@ static int pp_dpm_set_mp1_state(void *handle, enum pp_mp1_state mp1_state)
 {
        struct pp_hwmgr *hwmgr = handle;
 
-       if (!hwmgr || !hwmgr->pm_en)
+       if (!hwmgr)
                return -EINVAL;
 
+       if (!hwmgr->pm_en)
+               return 0;
+
        if (hwmgr->hwmgr_func->set_mp1_state)
                return hwmgr->hwmgr_func->set_mp1_state(hwmgr, mp1_state);
 
index 6dddd78..9946947 100644
@@ -356,6 +356,35 @@ int smu_get_dpm_level_count(struct smu_context *smu, enum smu_clk_type clk_type,
        return smu_get_dpm_freq_by_index(smu, clk_type, 0xff, value);
 }
 
+int smu_get_dpm_level_range(struct smu_context *smu, enum smu_clk_type clk_type,
+                           uint32_t *min_value, uint32_t *max_value)
+{
+       int ret = 0;
+       uint32_t level_count = 0;
+
+       if (!min_value && !max_value)
+               return -EINVAL;
+
+       if (min_value) {
+               /* by default, level 0 clock value as min value */
+               ret = smu_get_dpm_freq_by_index(smu, clk_type, 0, min_value);
+               if (ret)
+                       return ret;
+       }
+
+       if (max_value) {
+               ret = smu_get_dpm_level_count(smu, clk_type, &level_count);
+               if (ret)
+                       return ret;
+
+               ret = smu_get_dpm_freq_by_index(smu, clk_type, level_count - 1, max_value);
+               if (ret)
+                       return ret;
+       }
+
+       return ret;
+}
+
 bool smu_clk_dpm_is_enabled(struct smu_context *smu, enum smu_clk_type clk_type)
 {
        enum smu_feature_mask feature_id = 0;
@@ -404,10 +433,10 @@ int smu_dpm_set_power_gate(struct smu_context *smu, uint32_t block_type,
 
        switch (block_type) {
        case AMD_IP_BLOCK_TYPE_UVD:
-               ret = smu_dpm_set_uvd_enable(smu, gate);
+               ret = smu_dpm_set_uvd_enable(smu, !gate);
                break;
        case AMD_IP_BLOCK_TYPE_VCE:
-               ret = smu_dpm_set_vce_enable(smu, gate);
+               ret = smu_dpm_set_vce_enable(smu, !gate);
                break;
        case AMD_IP_BLOCK_TYPE_GFX:
                ret = smu_gfx_off_control(smu, gate);
@@ -416,7 +445,7 @@ int smu_dpm_set_power_gate(struct smu_context *smu, uint32_t block_type,
                ret = smu_powergate_sdma(smu, gate);
                break;
        case AMD_IP_BLOCK_TYPE_JPEG:
-               ret = smu_dpm_set_jpeg_enable(smu, gate);
+               ret = smu_dpm_set_jpeg_enable(smu, !gate);
                break;
        default:
                break;
@@ -490,26 +519,25 @@ int smu_update_table(struct smu_context *smu, enum smu_table_id table_index, int
 {
        struct smu_table_context *smu_table = &smu->smu_table;
        struct amdgpu_device *adev = smu->adev;
-       struct smu_table *table = NULL;
-       int ret = 0;
+       struct smu_table *table = &smu_table->driver_table;
        int table_id = smu_table_get_index(smu, table_index);
+       uint32_t table_size;
+       int ret = 0;
 
        if (!table_data || table_id >= SMU_TABLE_COUNT || table_id < 0)
                return -EINVAL;
 
-       table = &smu_table->tables[table_index];
+       table_size = smu_table->tables[table_index].size;
 
-       if (drv2smu)
-               memcpy(table->cpu_addr, table_data, table->size);
+       if (drv2smu) {
+               memcpy(table->cpu_addr, table_data, table_size);
+               /*
+                * Flush the hdp cache: to guarantee the content seen
+                * by the GPU is consistent with the CPU's.
+                */
+               amdgpu_asic_flush_hdp(adev, NULL);
+       }
 
-       ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetDriverDramAddrHigh,
-                                         upper_32_bits(table->mc_address));
-       if (ret)
-               return ret;
-       ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetDriverDramAddrLow,
-                                         lower_32_bits(table->mc_address));
-       if (ret)
-               return ret;
        ret = smu_send_smc_msg_with_param(smu, drv2smu ?
                                          SMU_MSG_TransferTableDram2Smu :
                                          SMU_MSG_TransferTableSmu2Dram,
@@ -517,11 +545,10 @@ int smu_update_table(struct smu_context *smu, enum smu_table_id table_index, int
        if (ret)
                return ret;
 
-       /* flush hdp cache */
-       adev->nbio.funcs->hdp_flush(adev, NULL);
-
-       if (!drv2smu)
-               memcpy(table_data, table->cpu_addr, table->size);
+       if (!drv2smu) {
+               amdgpu_asic_flush_hdp(adev, NULL);
+               memcpy(table_data, table->cpu_addr, table_size);
+       }
 
        return ret;
 }
@@ -531,7 +558,7 @@ bool is_support_sw_smu(struct amdgpu_device *adev)
        if (adev->asic_type == CHIP_VEGA20)
                return (amdgpu_dpm == 2) ? true : false;
        else if (adev->asic_type >= CHIP_ARCTURUS) {
-               if (amdgpu_sriov_vf(adev))
+               if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
                        return false;
                else
                        return true;
@@ -643,12 +670,11 @@ int smu_feature_init_dpm(struct smu_context *smu)
 
 int smu_feature_is_enabled(struct smu_context *smu, enum smu_feature_mask mask)
 {
-       struct amdgpu_device *adev = smu->adev;
        struct smu_feature *feature = &smu->smu_feature;
        int feature_id;
        int ret = 0;
 
-       if (adev->flags & AMD_IS_APU)
+       if (smu->is_apu)
                return 1;
 
        feature_id = smu_feature_get_index(smu, mask);
@@ -872,6 +898,7 @@ static int smu_sw_init(void *handle)
        smu->smu_baco.platform_support = false;
 
        mutex_init(&smu->sensor_lock);
+       mutex_init(&smu->metrics_lock);
 
        smu->watermarks_bitmap = 0;
        smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
@@ -947,32 +974,56 @@ static int smu_init_fb_allocations(struct smu_context *smu)
        struct amdgpu_device *adev = smu->adev;
        struct smu_table_context *smu_table = &smu->smu_table;
        struct smu_table *tables = smu_table->tables;
+       struct smu_table *driver_table = &(smu_table->driver_table);
+       uint32_t max_table_size = 0;
        int ret, i;
 
-       for (i = 0; i < SMU_TABLE_COUNT; i++) {
-               if (tables[i].size == 0)
-                       continue;
+       /* VRAM allocation for tool table */
+       if (tables[SMU_TABLE_PMSTATUSLOG].size) {
                ret = amdgpu_bo_create_kernel(adev,
-                                             tables[i].size,
-                                             tables[i].align,
-                                             tables[i].domain,
-                                             &tables[i].bo,
-                                             &tables[i].mc_address,
-                                             &tables[i].cpu_addr);
-               if (ret)
-                       goto failed;
+                                             tables[SMU_TABLE_PMSTATUSLOG].size,
+                                             tables[SMU_TABLE_PMSTATUSLOG].align,
+                                             tables[SMU_TABLE_PMSTATUSLOG].domain,
+                                             &tables[SMU_TABLE_PMSTATUSLOG].bo,
+                                             &tables[SMU_TABLE_PMSTATUSLOG].mc_address,
+                                             &tables[SMU_TABLE_PMSTATUSLOG].cpu_addr);
+               if (ret) {
+                       pr_err("VRAM allocation for tool table failed!\n");
+                       return ret;
+               }
        }
 
-       return 0;
-failed:
-       while (--i >= 0) {
+       /* VRAM allocation for driver table */
+       for (i = 0; i < SMU_TABLE_COUNT; i++) {
                if (tables[i].size == 0)
                        continue;
-               amdgpu_bo_free_kernel(&tables[i].bo,
-                                     &tables[i].mc_address,
-                                     &tables[i].cpu_addr);
 
+               if (i == SMU_TABLE_PMSTATUSLOG)
+                       continue;
+
+               if (max_table_size < tables[i].size)
+                       max_table_size = tables[i].size;
+       }
+
+       driver_table->size = max_table_size;
+       driver_table->align = PAGE_SIZE;
+       driver_table->domain = AMDGPU_GEM_DOMAIN_VRAM;
+
+       ret = amdgpu_bo_create_kernel(adev,
+                                     driver_table->size,
+                                     driver_table->align,
+                                     driver_table->domain,
+                                     &driver_table->bo,
+                                     &driver_table->mc_address,
+                                     &driver_table->cpu_addr);
+       if (ret) {
+               pr_err("VRAM allocation for driver table failed!\n");
+               if (tables[SMU_TABLE_PMSTATUSLOG].mc_address)
+                       amdgpu_bo_free_kernel(&tables[SMU_TABLE_PMSTATUSLOG].bo,
+                                             &tables[SMU_TABLE_PMSTATUSLOG].mc_address,
+                                             &tables[SMU_TABLE_PMSTATUSLOG].cpu_addr);
        }
+
        return ret;
 }
 
@@ -980,18 +1031,19 @@ static int smu_fini_fb_allocations(struct smu_context *smu)
 {
        struct smu_table_context *smu_table = &smu->smu_table;
        struct smu_table *tables = smu_table->tables;
-       uint32_t i = 0;
+       struct smu_table *driver_table = &(smu_table->driver_table);
 
        if (!tables)
                return 0;
 
-       for (i = 0; i < SMU_TABLE_COUNT; i++) {
-               if (tables[i].size == 0)
-                       continue;
-               amdgpu_bo_free_kernel(&tables[i].bo,
-                                     &tables[i].mc_address,
-                                     &tables[i].cpu_addr);
-       }
+       if (tables[SMU_TABLE_PMSTATUSLOG].mc_address)
+               amdgpu_bo_free_kernel(&tables[SMU_TABLE_PMSTATUSLOG].bo,
+                                     &tables[SMU_TABLE_PMSTATUSLOG].mc_address,
+                                     &tables[SMU_TABLE_PMSTATUSLOG].cpu_addr);
+
+       amdgpu_bo_free_kernel(&driver_table->bo,
+                             &driver_table->mc_address,
+                             &driver_table->cpu_addr);
 
        return 0;
 }
@@ -1061,28 +1113,31 @@ static int smu_smc_table_hw_init(struct smu_context *smu,
        }
 
        /* smu_dump_pptable(smu); */
+       if (!amdgpu_sriov_vf(adev)) {
+               ret = smu_set_driver_table_location(smu);
+               if (ret)
+                       return ret;
 
-       /*
-        * Copy pptable bo in the vram to smc with SMU MSGs such as
-        * SetDriverDramAddr and TransferTableDram2Smu.
-        */
-       ret = smu_write_pptable(smu);
-       if (ret)
-               return ret;
-
-       /* issue Run*Btc msg */
-       ret = smu_run_btc(smu);
-       if (ret)
-               return ret;
-
-       ret = smu_feature_set_allowed_mask(smu);
-       if (ret)
-               return ret;
+               /*
+                * Copy pptable bo in the vram to smc with SMU MSGs such as
+                * SetDriverDramAddr and TransferTableDram2Smu.
+                */
+               ret = smu_write_pptable(smu);
+               if (ret)
+                       return ret;
 
-       ret = smu_system_features_control(smu, true);
-       if (ret)
-               return ret;
+               /* issue Run*Btc msg */
+               ret = smu_run_btc(smu);
+               if (ret)
+                       return ret;
+               ret = smu_feature_set_allowed_mask(smu);
+               if (ret)
+                       return ret;
 
+               ret = smu_system_features_control(smu, true);
+               if (ret)
+                       return ret;
+       }
        if (adev->asic_type != CHIP_ARCTURUS) {
                ret = smu_notify_display_change(smu);
                if (ret)
@@ -1135,8 +1190,9 @@ static int smu_smc_table_hw_init(struct smu_context *smu,
        /*
         * Set PMSTATUSLOG table bo address with SetToolsDramAddr MSG for tools.
         */
-       ret = smu_set_tool_table_location(smu);
-
+       if (!amdgpu_sriov_vf(adev)) {
+               ret = smu_set_tool_table_location(smu);
+       }
        if (!smu_is_dpm_running(smu))
                pr_info("dpm has been disabled\n");
 
@@ -1241,13 +1297,16 @@ static int smu_hw_init(void *handle)
                return ret;
        }
 
-       if (adev->flags & AMD_IS_APU) {
+       if (smu->is_apu) {
                smu_powergate_sdma(&adev->smu, false);
                smu_powergate_vcn(&adev->smu, false);
                smu_powergate_jpeg(&adev->smu, false);
                smu_set_gfx_cgpg(&adev->smu, true);
        }
 
+       if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
+               return 0;
+
        if (!smu->pm_enabled)
                return 0;
 
@@ -1290,7 +1349,7 @@ failed:
 
 static int smu_stop_dpms(struct smu_context *smu)
 {
-       return smu_send_smc_msg(smu, SMU_MSG_DisableAllSmuFeatures);
+       return smu_system_features_control(smu, false);
 }
 
 static int smu_hw_fini(void *handle)
@@ -1300,37 +1359,45 @@ static int smu_hw_fini(void *handle)
        struct smu_table_context *table_context = &smu->smu_table;
        int ret = 0;
 
-       if (adev->flags & AMD_IS_APU) {
+       if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
+               return 0;
+
+       if (smu->is_apu) {
                smu_powergate_sdma(&adev->smu, true);
                smu_powergate_vcn(&adev->smu, true);
                smu_powergate_jpeg(&adev->smu, true);
        }
 
-       ret = smu_stop_thermal_control(smu);
-       if (ret) {
-               pr_warn("Fail to stop thermal control!\n");
-               return ret;
-       }
+       if (!smu->pm_enabled)
+               return 0;
 
-       /*
-        * For custom pptable uploading, skip the DPM features
-        * disable process on Navi1x ASICs.
-        *   - As the gfx related features are under control of
-        *     RLC on those ASICs. RLC reinitialization will be
-        *     needed to reenable them. That will cost much more
-        *     efforts.
-        *
-        *   - SMU firmware can handle the DPM reenablement
-        *     properly.
-        */
-       if (!smu->uploading_custom_pp_table ||
-           !((adev->asic_type >= CHIP_NAVI10) &&
-             (adev->asic_type <= CHIP_NAVI12))) {
-               ret = smu_stop_dpms(smu);
+       if (!amdgpu_sriov_vf(adev)) {
+               ret = smu_stop_thermal_control(smu);
                if (ret) {
-                       pr_warn("Fail to stop Dpms!\n");
+                       pr_warn("Fail to stop thermal control!\n");
                        return ret;
                }
+
+               /*
+                * For custom pptable uploading, skip the DPM features
+                * disable process on Navi1x ASICs.
+                *   - As the gfx related features are under control of
+                *     RLC on those ASICs. RLC reinitialization will be
+                *     needed to reenable them. That will cost much more
+                *     efforts.
+                *
+                *   - SMU firmware can handle the DPM reenablement
+                *     properly.
+                */
+               if (!smu->uploading_custom_pp_table ||
+                               !((adev->asic_type >= CHIP_NAVI10) &&
+                                       (adev->asic_type <= CHIP_NAVI12))) {
+                       ret = smu_stop_dpms(smu);
+                       if (ret) {
+                               pr_warn("Fail to stop Dpms!\n");
+                               return ret;
+                       }
+               }
        }
 
        kfree(table_context->driver_pptable);
@@ -1376,7 +1443,10 @@ static int smu_suspend(void *handle)
        struct smu_context *smu = &adev->smu;
        bool baco_feature_is_enabled = false;
 
-       if(!(adev->flags & AMD_IS_APU))
+       if (!smu->pm_enabled)
+               return 0;
+
+       if (!smu->is_apu)
                baco_feature_is_enabled = smu_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT);
 
        ret = smu_system_features_control(smu, false);
@@ -1408,6 +1478,12 @@ static int smu_resume(void *handle)
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
        struct smu_context *smu = &adev->smu;
 
+       if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
+               return 0;
+
+       if (!smu->pm_enabled)
+               return 0;
+
        pr_info("SMU is resuming...\n");
 
        ret = smu_start_smc_engine(smu);
@@ -1606,43 +1682,6 @@ static int smu_enable_umd_pstate(void *handle,
        return 0;
 }
 
-static int smu_default_set_performance_level(struct smu_context *smu, enum amd_dpm_forced_level level)
-{
-       int ret = 0;
-       uint32_t sclk_mask, mclk_mask, soc_mask;
-
-       switch (level) {
-       case AMD_DPM_FORCED_LEVEL_HIGH:
-               ret = smu_force_dpm_limit_value(smu, true);
-               break;
-       case AMD_DPM_FORCED_LEVEL_LOW:
-               ret = smu_force_dpm_limit_value(smu, false);
-               break;
-       case AMD_DPM_FORCED_LEVEL_AUTO:
-       case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
-               ret = smu_unforce_dpm_levels(smu);
-               break;
-       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
-       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
-       case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
-               ret = smu_get_profiling_clk_mask(smu, level,
-                                                &sclk_mask,
-                                                &mclk_mask,
-                                                &soc_mask);
-               if (ret)
-                       return ret;
-               smu_force_clk_levels(smu, SMU_SCLK, 1 << sclk_mask, false);
-               smu_force_clk_levels(smu, SMU_MCLK, 1 << mclk_mask, false);
-               smu_force_clk_levels(smu, SMU_SOCCLK, 1 << soc_mask, false);
-               break;
-       case AMD_DPM_FORCED_LEVEL_MANUAL:
-       case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
-       default:
-               break;
-       }
-       return ret;
-}
-
 int smu_adjust_power_state_dynamic(struct smu_context *smu,
                                   enum amd_dpm_forced_level level,
                                   bool skip_display_settings)
@@ -1670,7 +1709,7 @@ int smu_adjust_power_state_dynamic(struct smu_context *smu,
        }
 
        if (!skip_display_settings) {
-               ret = smu_notify_smc_dispaly_config(smu);
+               ret = smu_notify_smc_display_config(smu);
                if (ret) {
                        pr_err("Failed to notify smc display config!");
                        return ret;
@@ -1680,11 +1719,8 @@ int smu_adjust_power_state_dynamic(struct smu_context *smu,
        if (smu_dpm_ctx->dpm_level != level) {
                ret = smu_asic_set_performance_level(smu, level);
                if (ret) {
-                       ret = smu_default_set_performance_level(smu, level);
-                       if (ret) {
-                               pr_err("Failed to set performance level!");
-                               return ret;
-                       }
+                       pr_err("Failed to set performance level!");
+                       return ret;
                }
 
                /* update the saved copy */
@@ -1926,26 +1962,25 @@ int smu_set_df_cstate(struct smu_context *smu,
 
 int smu_write_watermarks_table(struct smu_context *smu)
 {
-       int ret = 0;
-       struct smu_table_context *smu_table = &smu->smu_table;
-       struct smu_table *table = NULL;
-
-       table = &smu_table->tables[SMU_TABLE_WATERMARKS];
+       void *watermarks_table = smu->smu_table.watermarks_table;
 
-       if (!table->cpu_addr)
+       if (!watermarks_table)
                return -EINVAL;
 
-       ret = smu_update_table(smu, SMU_TABLE_WATERMARKS, 0, table->cpu_addr,
+       return smu_update_table(smu,
+                               SMU_TABLE_WATERMARKS,
+                               0,
+                               watermarks_table,
                                true);
-
-       return ret;
 }
 
 int smu_set_watermarks_for_clock_ranges(struct smu_context *smu,
                struct dm_pp_wm_sets_with_clock_ranges_soc15 *clock_ranges)
 {
-       struct smu_table *watermarks = &smu->smu_table.tables[SMU_TABLE_WATERMARKS];
-       void *table = watermarks->cpu_addr;
+       void *table = smu->smu_table.watermarks_table;
+
+       if (!table)
+               return -EINVAL;
 
        mutex_lock(&smu->mutex);
 
@@ -2284,13 +2319,9 @@ int smu_set_active_display_count(struct smu_context *smu, uint32_t count)
 {
        int ret = 0;
 
-       mutex_lock(&smu->mutex);
-
        if (smu->ppt_funcs->set_active_display_count)
                ret = smu->ppt_funcs->set_active_display_count(smu, count);
 
-       mutex_unlock(&smu->mutex);
-
        return ret;
 }
 
@@ -2437,7 +2468,7 @@ bool smu_baco_is_support(struct smu_context *smu)
 
        mutex_lock(&smu->mutex);
 
-       if (smu->ppt_funcs->baco_is_support)
+       if (smu->ppt_funcs && smu->ppt_funcs->baco_is_support)
                ret = smu->ppt_funcs->baco_is_support(smu);
 
        mutex_unlock(&smu->mutex);
index 17eeb54..1c15c6f 100644
@@ -179,6 +179,7 @@ static struct smu_11_0_cmn2aisc_mapping arcturus_table_map[SMU_TABLE_COUNT] = {
        TAB_MAP(DRIVER_SMU_CONFIG),
        TAB_MAP(OVERDRIVE),
        TAB_MAP(I2C_COMMANDS),
+       TAB_MAP(ACTIVITY_MONITOR_COEFF),
 };
 
 static struct smu_11_0_cmn2aisc_mapping arcturus_pwr_src_map[SMU_POWER_SOURCE_COUNT] = {
@@ -302,6 +303,10 @@ static int arcturus_tables_init(struct smu_context *smu, struct smu_table *table
        SMU_TABLE_INIT(tables, SMU_TABLE_I2C_COMMANDS, sizeof(SwI2cRequest_t),
                               PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
 
+       SMU_TABLE_INIT(tables, SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+                      sizeof(DpmActivityMonitorCoeffInt_t), PAGE_SIZE,
+                      AMDGPU_GEM_DOMAIN_VRAM);
+
        smu_table->metrics_table = kzalloc(sizeof(SmuMetrics_t), GFP_KERNEL);
        if (!smu_table->metrics_table)
                return -ENOMEM;
@@ -867,18 +872,21 @@ static int arcturus_get_metrics_table(struct smu_context *smu,
        struct smu_table_context *smu_table= &smu->smu_table;
        int ret = 0;
 
+       mutex_lock(&smu->metrics_lock);
        if (!smu_table->metrics_time ||
             time_after(jiffies, smu_table->metrics_time + HZ / 1000)) {
                ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0,
                                (void *)smu_table->metrics_table, false);
                if (ret) {
                        pr_info("Failed to export SMU metrics table!\n");
+                       mutex_unlock(&smu->metrics_lock);
                        return ret;
                }
                smu_table->metrics_time = jiffies;
        }
 
        memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t));
+       mutex_unlock(&smu->metrics_lock);
 
        return ret;
 }
@@ -1310,6 +1318,7 @@ static int arcturus_get_power_limit(struct smu_context *smu,
 static int arcturus_get_power_profile_mode(struct smu_context *smu,
                                           char *buf)
 {
+       DpmActivityMonitorCoeffInt_t activity_monitor;
        static const char *profile_name[] = {
                                        "BOOTUP_DEFAULT",
                                        "3D_FULL_SCREEN",
@@ -1319,14 +1328,35 @@ static int arcturus_get_power_profile_mode(struct smu_context *smu,
                                        "COMPUTE",
                                        "CUSTOM"};
        static const char *title[] = {
-                       "PROFILE_INDEX(NAME)"};
+                       "PROFILE_INDEX(NAME)",
+                       "CLOCK_TYPE(NAME)",
+                       "FPS",
+                       "UseRlcBusy",
+                       "MinActiveFreqType",
+                       "MinActiveFreq",
+                       "BoosterFreqType",
+                       "BoosterFreq",
+                       "PD_Data_limit_c",
+                       "PD_Data_error_coeff",
+                       "PD_Data_error_rate_coeff"};
        uint32_t i, size = 0;
        int16_t workload_type = 0;
+       int result = 0;
+       uint32_t smu_version;
 
-       if (!smu->pm_enabled || !buf)
+       if (!buf)
                return -EINVAL;
 
-       size += sprintf(buf + size, "%16s\n",
+       result = smu_get_smc_version(smu, NULL, &smu_version);
+       if (result)
+               return result;
+
+       if (smu_version >= 0x360d00)
+               size += sprintf(buf + size, "%16s %s %s %s %s %s %s %s %s %s %s\n",
+                       title[0], title[1], title[2], title[3], title[4], title[5],
+                       title[6], title[7], title[8], title[9], title[10]);
+       else
+               size += sprintf(buf + size, "%16s\n",
                        title[0]);
 
        for (i = 0; i <= PP_SMC_POWER_PROFILE_CUSTOM; i++) {
@@ -1338,8 +1368,50 @@ static int arcturus_get_power_profile_mode(struct smu_context *smu,
                if (workload_type < 0)
                        continue;
 
+               if (smu_version >= 0x360d00) {
+                       result = smu_update_table(smu,
+                                                 SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+                                                 workload_type,
+                                                 (void *)(&activity_monitor),
+                                                 false);
+                       if (result) {
+                               pr_err("[%s] Failed to get activity monitor!", __func__);
+                               return result;
+                       }
+               }
+
                size += sprintf(buf + size, "%2d %14s%s\n",
                        i, profile_name[i], (i == smu->power_profile_mode) ? "*" : " ");
+
+               if (smu_version >= 0x360d00) {
+                       size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
+                               " ",
+                               0,
+                               "GFXCLK",
+                               activity_monitor.Gfx_FPS,
+                               activity_monitor.Gfx_UseRlcBusy,
+                               activity_monitor.Gfx_MinActiveFreqType,
+                               activity_monitor.Gfx_MinActiveFreq,
+                               activity_monitor.Gfx_BoosterFreqType,
+                               activity_monitor.Gfx_BoosterFreq,
+                               activity_monitor.Gfx_PD_Data_limit_c,
+                               activity_monitor.Gfx_PD_Data_error_coeff,
+                               activity_monitor.Gfx_PD_Data_error_rate_coeff);
+
+                       size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
+                               " ",
+                               1,
+                               "UCLK",
+                               activity_monitor.Mem_FPS,
+                               activity_monitor.Mem_UseRlcBusy,
+                               activity_monitor.Mem_MinActiveFreqType,
+                               activity_monitor.Mem_MinActiveFreq,
+                               activity_monitor.Mem_BoosterFreqType,
+                               activity_monitor.Mem_BoosterFreq,
+                               activity_monitor.Mem_PD_Data_limit_c,
+                               activity_monitor.Mem_PD_Data_error_coeff,
+                               activity_monitor.Mem_PD_Data_error_rate_coeff);
+               }
        }
 
        return size;
@@ -1349,18 +1421,69 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
                                           long *input,
                                           uint32_t size)
 {
+       DpmActivityMonitorCoeffInt_t activity_monitor;
        int workload_type = 0;
        uint32_t profile_mode = input[size];
        int ret = 0;
-
-       if (!smu->pm_enabled)
-               return -EINVAL;
+       uint32_t smu_version;
 
        if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
                pr_err("Invalid power profile mode %d\n", profile_mode);
                return -EINVAL;
        }
 
+       ret = smu_get_smc_version(smu, NULL, &smu_version);
+       if (ret)
+               return ret;
+
+       if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
+            (smu_version >= 0x360d00)) {
+               ret = smu_update_table(smu,
+                                      SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+                                      WORKLOAD_PPLIB_CUSTOM_BIT,
+                                      (void *)(&activity_monitor),
+                                      false);
+               if (ret) {
+                       pr_err("[%s] Failed to get activity monitor!", __func__);
+                       return ret;
+               }
+
+               switch (input[0]) {
+               case 0: /* Gfxclk */
+                       activity_monitor.Gfx_FPS = input[1];
+                       activity_monitor.Gfx_UseRlcBusy = input[2];
+                       activity_monitor.Gfx_MinActiveFreqType = input[3];
+                       activity_monitor.Gfx_MinActiveFreq = input[4];
+                       activity_monitor.Gfx_BoosterFreqType = input[5];
+                       activity_monitor.Gfx_BoosterFreq = input[6];
+                       activity_monitor.Gfx_PD_Data_limit_c = input[7];
+                       activity_monitor.Gfx_PD_Data_error_coeff = input[8];
+                       activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
+                       break;
+               case 1: /* Uclk */
+                       activity_monitor.Mem_FPS = input[1];
+                       activity_monitor.Mem_UseRlcBusy = input[2];
+                       activity_monitor.Mem_MinActiveFreqType = input[3];
+                       activity_monitor.Mem_MinActiveFreq = input[4];
+                       activity_monitor.Mem_BoosterFreqType = input[5];
+                       activity_monitor.Mem_BoosterFreq = input[6];
+                       activity_monitor.Mem_PD_Data_limit_c = input[7];
+                       activity_monitor.Mem_PD_Data_error_coeff = input[8];
+                       activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+                       break;
+               }
+
+               ret = smu_update_table(smu,
+                                      SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+                                      WORKLOAD_PPLIB_CUSTOM_BIT,
+                                      (void *)(&activity_monitor),
+                                      true);
+               if (ret) {
+                       pr_err("[%s] Failed to set activity monitor!", __func__);
+                       return ret;
+               }
+       }
+
        /*
         * Conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT
         * Not all profile modes are supported on arcturus.
@@ -1899,7 +2022,7 @@ static int arcturus_i2c_eeprom_read_data(struct i2c_adapter *control,
        SwI2cRequest_t req;
        struct amdgpu_device *adev = to_amdgpu_device(control);
        struct smu_table_context *smu_table = &adev->smu.smu_table;
-       struct smu_table *table = &smu_table->tables[SMU_TABLE_I2C_COMMANDS];
+       struct smu_table *table = &smu_table->driver_table;
 
        memset(&req, 0, sizeof(req));
        arcturus_fill_eeprom_i2c_req(&req, false, address, numbytes, data);
@@ -2053,8 +2176,12 @@ static const struct i2c_algorithm arcturus_i2c_eeprom_i2c_algo = {
 static int arcturus_i2c_eeprom_control_init(struct i2c_adapter *control)
 {
        struct amdgpu_device *adev = to_amdgpu_device(control);
+       struct smu_context *smu = &adev->smu;
        int res;
 
+       if (!smu->pm_enabled)
+               return -EOPNOTSUPP;
+
        control->owner = THIS_MODULE;
        control->class = I2C_CLASS_SPD;
        control->dev.parent = &adev->pdev->dev;
@@ -2070,6 +2197,12 @@ static int arcturus_i2c_eeprom_control_init(struct i2c_adapter *control)
 
 static void arcturus_i2c_eeprom_control_fini(struct i2c_adapter *control)
 {
+       struct amdgpu_device *adev = to_amdgpu_device(control);
+       struct smu_context *smu = &adev->smu;
+
+       if (!smu->pm_enabled)
+               return;
+
        i2c_del_adapter(control);
 }
 
@@ -2114,6 +2247,7 @@ static const struct pptable_funcs arcturus_ppt_funcs = {
        .get_profiling_clk_mask = arcturus_get_profiling_clk_mask,
        .get_power_profile_mode = arcturus_get_power_profile_mode,
        .set_power_profile_mode = arcturus_set_power_profile_mode,
+       .set_performance_level = smu_v11_0_set_performance_level,
        /* debug (internal used) */
        .dump_pptable = arcturus_dump_pptable,
        .get_power_limit = arcturus_get_power_limit,
@@ -2137,6 +2271,7 @@ static const struct pptable_funcs arcturus_ppt_funcs = {
        .check_fw_version = smu_v11_0_check_fw_version,
        .write_pptable = smu_v11_0_write_pptable,
        .set_min_dcef_deep_sleep = smu_v11_0_set_min_dcef_deep_sleep,
+       .set_driver_table_location = smu_v11_0_set_driver_table_location,
        .set_tool_table_location = smu_v11_0_set_tool_table_location,
        .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,
        .system_features_control = smu_v11_0_system_features_control,
index 253860d..9454ab5 100644 (file)
@@ -99,6 +99,9 @@ int phm_disable_dynamic_state_management(struct pp_hwmgr *hwmgr)
 
        PHM_FUNC_CHECK(hwmgr);
 
+       if (!hwmgr->not_vf)
+               return 0;
+
        if (!smum_is_dpm_running(hwmgr)) {
                pr_info("dpm has been disabled\n");
                return 0;
index e2b82c9..f48fdc7 100644 (file)
@@ -282,10 +282,7 @@ err:
 
 int hwmgr_hw_fini(struct pp_hwmgr *hwmgr)
 {
-       if (!hwmgr->not_vf)
-               return 0;
-
-       if (!hwmgr || !hwmgr->pm_en)
+       if (!hwmgr || !hwmgr->pm_en || !hwmgr->not_vf)
                return 0;
 
        phm_stop_thermal_controller(hwmgr);
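The consolidated guards above also fix an ordering bug: the pre-patch `hwmgr_hw_fini` read `hwmgr->not_vf` before the NULL check on `hwmgr`. Because `||` evaluates left to right and short-circuits, putting the NULL test first makes the combined condition safe. A minimal stand-alone sketch (the `hwmgr_like` type and `hw_fini_like` helper are invented for illustration, not driver code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: a cut-down hwmgr with just the two flags the
 * consolidated guard reads. */
struct hwmgr_like {
	bool pm_en;
	bool not_vf;
};

/* || short-circuits, so the NULL test must come first; the pre-patch
 * code dereferenced hwmgr->not_vf before checking hwmgr itself. */
static int hw_fini_like(struct hwmgr_like *hwmgr)
{
	if (!hwmgr || !hwmgr->pm_en || !hwmgr->not_vf)
		return 0;	/* nothing to do: no hwmgr, PM off, or VF */
	return 1;	/* would proceed with teardown */
}
```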
@@ -305,10 +302,7 @@ int hwmgr_suspend(struct pp_hwmgr *hwmgr)
 {
        int ret = 0;
 
-       if (!hwmgr->not_vf)
-               return 0;
-
-       if (!hwmgr || !hwmgr->pm_en)
+       if (!hwmgr || !hwmgr->pm_en || !hwmgr->not_vf)
                return 0;
 
        phm_disable_smc_firmware_ctf(hwmgr);
@@ -327,13 +321,10 @@ int hwmgr_resume(struct pp_hwmgr *hwmgr)
 {
        int ret = 0;
 
-       if (!hwmgr->not_vf)
-               return 0;
-
        if (!hwmgr)
                return -EINVAL;
 
-       if (!hwmgr->pm_en)
+       if (!hwmgr->not_vf || !hwmgr->pm_en)
                return 0;
 
        ret = phm_setup_asic(hwmgr);
index 1484465..92a65e3 100644 (file)
@@ -3538,7 +3538,8 @@ static int vega10_upload_dpm_bootup_level(struct pp_hwmgr *hwmgr)
        if (!data->registry_data.mclk_dpm_key_disabled) {
                if (data->smc_state_table.mem_boot_level !=
                                data->dpm_table.mem_table.dpm_state.soft_min_level) {
-                       if (data->smc_state_table.mem_boot_level == NUM_UCLK_DPM_LEVELS - 1) {
+                       if ((data->smc_state_table.mem_boot_level == NUM_UCLK_DPM_LEVELS - 1)
+                           && hwmgr->not_vf) {
                                socclk_idx = vega10_get_soc_index_for_max_uclk(hwmgr);
                                smum_send_msg_to_smc_with_parameter(hwmgr,
                                                PPSMC_MSG_SetSoftMinSocclkByIndex,
index 5bcf0d6..3b3ec56 100644 (file)
@@ -872,7 +872,7 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
                "[OverridePcieParameters] Attempt to override pcie params failed!",
                return ret);
 
-       data->pcie_parameters_override = 1;
+       data->pcie_parameters_override = true;
        data->pcie_gen_level1 = pcie_gen;
        data->pcie_width_level1 = pcie_width;
 
index ca3fdc6..b0591a8 100644 (file)
@@ -254,11 +254,21 @@ struct smu_table_context
        unsigned long                   metrics_time;
        void                            *metrics_table;
        void                            *clocks_table;
+       void                            *watermarks_table;
 
        void                            *max_sustainable_clocks;
        struct smu_bios_boot_up_values  boot_values;
        void                            *driver_pptable;
        struct smu_table                *tables;
+       /*
+        * The driver table is just a staging buffer for
+        * uploading/downloading content to/from the SMU.
+        *
+        * The table_id passed to SMU_MSG_TransferTableSmu2Dram/
+        * SMU_MSG_TransferTableDram2Smu tells the SMU
+        * which table's content the driver is interested in.
+        */
+       struct smu_table                driver_table;
        struct smu_table                memory_pool;
        uint8_t                         thermal_controller_type;
 
@@ -350,6 +360,7 @@ struct smu_context
        const struct pptable_funcs      *ppt_funcs;
        struct mutex                    mutex;
        struct mutex                    sensor_lock;
+       struct mutex                    metrics_lock;
        uint64_t pool_size;
 
        struct smu_table_context        smu_table;
@@ -443,7 +454,7 @@ struct pptable_funcs {
        int (*pre_display_config_changed)(struct smu_context *smu);
        int (*display_config_changed)(struct smu_context *smu);
        int (*apply_clocks_adjust_rules)(struct smu_context *smu);
-       int (*notify_smc_dispaly_config)(struct smu_context *smu);
+       int (*notify_smc_display_config)(struct smu_context *smu);
        int (*force_dpm_limit_value)(struct smu_context *smu, bool highest);
        int (*unforce_dpm_levels)(struct smu_context *smu);
        int (*get_profiling_clk_mask)(struct smu_context *smu,
@@ -496,6 +507,7 @@ struct pptable_funcs {
        int (*set_gfx_cgpg)(struct smu_context *smu, bool enable);
        int (*write_pptable)(struct smu_context *smu);
        int (*set_min_dcef_deep_sleep)(struct smu_context *smu);
+       int (*set_driver_table_location)(struct smu_context *smu);
        int (*set_tool_table_location)(struct smu_context *smu);
        int (*notify_memory_pool_location)(struct smu_context *smu);
        int (*set_last_dcef_min_deep_sleep_clk)(struct smu_context *smu);
@@ -696,6 +708,8 @@ int smu_set_soft_freq_range(struct smu_context *smu, enum smu_clk_type clk_type,
                            uint32_t min, uint32_t max);
 int smu_set_hard_freq_range(struct smu_context *smu, enum smu_clk_type clk_type,
                            uint32_t min, uint32_t max);
+int smu_get_dpm_level_range(struct smu_context *smu, enum smu_clk_type clk_type,
+                           uint32_t *min_value, uint32_t *max_value);
 enum amd_dpm_forced_level smu_get_performance_level(struct smu_context *smu);
 int smu_force_performance_level(struct smu_context *smu, enum amd_dpm_forced_level level);
 int smu_set_display_count(struct smu_context *smu, uint32_t count);
index a886f06..ce5b501 100644 (file)
@@ -622,8 +622,14 @@ typedef struct {
   uint16_t     PccThresholdHigh;
   uint32_t     PaddingAPCC[6];  //FIXME pending SPEC
 
+  // OOB Settings
+  uint16_t BasePerformanceCardPower;
+  uint16_t MaxPerformanceCardPower;
+  uint16_t BasePerformanceFrequencyCap;   // In MHz
+  uint16_t MaxPerformanceFrequencyCap;    // In MHz
+
   // SECTION: Reserved
-  uint32_t     Reserved[11];
+  uint32_t     Reserved[9];
 
   // SECTION: BOARD PARAMETERS
 
@@ -823,7 +829,6 @@ typedef struct {
   uint32_t MmHubPadding[8]; // SMU internal use
 } AvfsFuseOverride_t;
 
-/* NOT CURRENTLY USED
 typedef struct {
   uint8_t   Gfx_ActiveHystLimit;
   uint8_t   Gfx_IdleHystLimit;
@@ -866,7 +871,6 @@ typedef struct {
 
   uint32_t  MmHubPadding[8]; // SMU internal use
 } DpmActivityMonitorCoeffInt_t;
-*/
 
 // These defines are used with the following messages:
 // SMC_MSG_TransferTableDram2Smu
@@ -878,11 +882,11 @@ typedef struct {
 #define TABLE_PMSTATUSLOG             4
 #define TABLE_SMU_METRICS             5
 #define TABLE_DRIVER_SMU_CONFIG       6
-//#define TABLE_ACTIVITY_MONITOR_COEFF  7
 #define TABLE_OVERDRIVE               7
 #define TABLE_WAFL_XGMI_TOPOLOGY      8
 #define TABLE_I2C_COMMANDS            9
-#define TABLE_COUNT                   10
+#define TABLE_ACTIVITY_MONITOR_COEFF  10
+#define TABLE_COUNT                   11
 
 // These defines are used with the SMC_MSG_SetUclkFastSwitch message.
 typedef enum {
index 786de77..d5314d1 100644 (file)
@@ -27,7 +27,7 @@
 
 #define SMU11_DRIVER_IF_VERSION_INV 0xFFFFFFFF
 #define SMU11_DRIVER_IF_VERSION_VG20 0x13
-#define SMU11_DRIVER_IF_VERSION_ARCT 0x10
+#define SMU11_DRIVER_IF_VERSION_ARCT 0x12
 #define SMU11_DRIVER_IF_VERSION_NV10 0x33
 #define SMU11_DRIVER_IF_VERSION_NV14 0x34
 
@@ -170,6 +170,8 @@ int smu_v11_0_write_pptable(struct smu_context *smu);
 
 int smu_v11_0_set_min_dcef_deep_sleep(struct smu_context *smu);
 
+int smu_v11_0_set_driver_table_location(struct smu_context *smu);
+
 int smu_v11_0_set_tool_table_location(struct smu_context *smu);
 
 int smu_v11_0_notify_memory_pool_location(struct smu_context *smu);
@@ -262,4 +264,7 @@ int smu_v11_0_set_default_od_settings(struct smu_context *smu, bool initialize,
 
 uint32_t smu_v11_0_get_max_power_limit(struct smu_context *smu);
 
+int smu_v11_0_set_performance_level(struct smu_context *smu,
+                                   enum amd_dpm_forced_level level);
+
 #endif
index 3f1cd06..d79e54b 100644 (file)
@@ -90,4 +90,6 @@ int smu_v12_0_mode2_reset(struct smu_context *smu);
 int smu_v12_0_set_soft_freq_limited_range(struct smu_context *smu, enum smu_clk_type clk_type,
                            uint32_t min, uint32_t max);
 
+int smu_v12_0_set_driver_table_location(struct smu_context *smu);
+
 #endif
index 15403b7..93c66c6 100644 (file)
@@ -555,6 +555,10 @@ static int navi10_tables_init(struct smu_context *smu, struct smu_table *tables)
                return -ENOMEM;
        smu_table->metrics_time = 0;
 
+       smu_table->watermarks_table = kzalloc(sizeof(Watermarks_t), GFP_KERNEL);
+       if (!smu_table->watermarks_table)
+               return -ENOMEM;
+
        return 0;
 }
 
@@ -564,17 +568,20 @@ static int navi10_get_metrics_table(struct smu_context *smu,
        struct smu_table_context *smu_table= &smu->smu_table;
        int ret = 0;
 
+       mutex_lock(&smu->metrics_lock);
        if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + msecs_to_jiffies(100))) {
                ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0,
                                (void *)smu_table->metrics_table, false);
                if (ret) {
                        pr_info("Failed to export SMU metrics table!\n");
+                       mutex_unlock(&smu->metrics_lock);
                        return ret;
                }
                smu_table->metrics_time = jiffies;
        }
 
        memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t));
+       mutex_unlock(&smu->metrics_lock);
 
        return ret;
 }
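The `metrics_lock` additions above serialize a time-based cache: the snapshot is refreshed at most every 100 ms, and both the staleness check and the copy-out happen under one lock so concurrent readers cannot race a refresh; note the unlock added on the error path before `return ret`. A stand-alone sketch of the same pattern (the `metrics_cache` type, tick-based clock, and `fetch_hw_metrics()` stand-in are illustrative; the driver uses a real mutex and jiffies):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative cache: 'now' ticks stand in for jiffies and 'locked'
 * for smu->metrics_lock. */
struct metrics_cache {
	bool locked;
	unsigned long stamp;	/* 0 means never fetched */
	int data;
	int fetches;		/* counts expensive SMU round trips */
};

static int fetch_hw_metrics(struct metrics_cache *c)
{
	c->fetches++;
	return 42;	/* pretend firmware value */
}

/* Refresh at most once per 100 ticks; staleness check, refresh and
 * copy-out all happen with the lock held, and every path releases it. */
static int get_metrics(struct metrics_cache *c, unsigned long now, int *out)
{
	assert(!c->locked);	/* lock acquire */
	c->locked = true;
	if (c->stamp == 0 || now - c->stamp > 100) {
		c->data = fetch_hw_metrics(c);
		c->stamp = now;
	}
	*out = c->data;		/* copy out under the lock */
	c->locked = false;	/* lock release */
	return 0;
}
```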
@@ -1374,7 +1381,7 @@ static int navi10_get_profiling_clk_mask(struct smu_context *smu,
        return ret;
 }
 
-static int navi10_notify_smc_dispaly_config(struct smu_context *smu)
+static int navi10_notify_smc_display_config(struct smu_context *smu)
 {
        struct smu_clocks min_clocks = {0};
        struct pp_display_clock_request clock_req;
@@ -1579,12 +1586,44 @@ static int navi10_get_uclk_dpm_states(struct smu_context *smu, uint32_t *clocks_
        return 0;
 }
 
-static int navi10_set_peak_clock_by_device(struct smu_context *smu)
+static int navi10_set_performance_level(struct smu_context *smu,
+                                       enum amd_dpm_forced_level level);
+
+static int navi10_set_standard_performance_level(struct smu_context *smu)
+{
+       struct amdgpu_device *adev = smu->adev;
+       int ret = 0;
+       uint32_t sclk_freq = 0, uclk_freq = 0;
+
+       switch (adev->asic_type) {
+       case CHIP_NAVI10:
+               sclk_freq = NAVI10_UMD_PSTATE_PROFILING_GFXCLK;
+               uclk_freq = NAVI10_UMD_PSTATE_PROFILING_MEMCLK;
+               break;
+       case CHIP_NAVI14:
+               sclk_freq = NAVI14_UMD_PSTATE_PROFILING_GFXCLK;
+               uclk_freq = NAVI14_UMD_PSTATE_PROFILING_MEMCLK;
+               break;
+       default:
+               /* by default, this is the same as the auto performance level */
+               return navi10_set_performance_level(smu, AMD_DPM_FORCED_LEVEL_AUTO);
+       }
+
+       ret = smu_set_soft_freq_range(smu, SMU_SCLK, sclk_freq, sclk_freq);
+       if (ret)
+               return ret;
+       ret = smu_set_soft_freq_range(smu, SMU_UCLK, uclk_freq, uclk_freq);
+       if (ret)
+               return ret;
+
+       return ret;
+}
+
+static int navi10_set_peak_performance_level(struct smu_context *smu)
 {
        struct amdgpu_device *adev = smu->adev;
        int ret = 0;
        uint32_t sclk_freq = 0, uclk_freq = 0;
-       uint32_t uclk_level = 0;
 
        switch (adev->asic_type) {
        case CHIP_NAVI10:
@@ -1625,14 +1664,16 @@ static int navi10_set_peak_clock_by_device(struct smu_context *smu)
                        break;
                }
                break;
+       case CHIP_NAVI12:
+               sclk_freq = NAVI12_UMD_PSTATE_PEAK_GFXCLK;
+               break;
        default:
-               return -EINVAL;
+               ret = smu_get_dpm_level_range(smu, SMU_SCLK, NULL, &sclk_freq);
+               if (ret)
+                       return ret;
        }
 
-       ret = smu_get_dpm_level_count(smu, SMU_UCLK, &uclk_level);
-       if (ret)
-               return ret;
-       ret = smu_get_dpm_freq_by_index(smu, SMU_UCLK, uclk_level - 1, &uclk_freq);
+       ret = smu_get_dpm_level_range(smu, SMU_UCLK, NULL, &uclk_freq);
        if (ret)
                return ret;
 
@@ -1646,19 +1687,45 @@ static int navi10_set_peak_clock_by_device(struct smu_context *smu)
        return ret;
 }
 
-static int navi10_set_performance_level(struct smu_context *smu, enum amd_dpm_forced_level level)
+static int navi10_set_performance_level(struct smu_context *smu,
+                                       enum amd_dpm_forced_level level)
 {
        int ret = 0;
+       uint32_t sclk_mask, mclk_mask, soc_mask;
 
        switch (level) {
+       case AMD_DPM_FORCED_LEVEL_HIGH:
+               ret = smu_force_dpm_limit_value(smu, true);
+               break;
+       case AMD_DPM_FORCED_LEVEL_LOW:
+               ret = smu_force_dpm_limit_value(smu, false);
+               break;
+       case AMD_DPM_FORCED_LEVEL_AUTO:
+               ret = smu_unforce_dpm_levels(smu);
+               break;
+       case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
+               ret = navi10_set_standard_performance_level(smu);
+               break;
+       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
+               ret = smu_get_profiling_clk_mask(smu, level,
+                                                &sclk_mask,
+                                                &mclk_mask,
+                                                &soc_mask);
+               if (ret)
+                       return ret;
+               smu_force_clk_levels(smu, SMU_SCLK, 1 << sclk_mask, false);
+               smu_force_clk_levels(smu, SMU_MCLK, 1 << mclk_mask, false);
+               smu_force_clk_levels(smu, SMU_SOCCLK, 1 << soc_mask, false);
+               break;
        case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
-               ret = navi10_set_peak_clock_by_device(smu);
+               ret = navi10_set_peak_performance_level(smu);
                break;
+       case AMD_DPM_FORCED_LEVEL_MANUAL:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
        default:
-               ret = -EINVAL;
                break;
        }
-
        return ret;
 }
 
@@ -2047,7 +2114,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
        .get_clock_by_type_with_latency = navi10_get_clock_by_type_with_latency,
        .pre_display_config_changed = navi10_pre_display_config_changed,
        .display_config_changed = navi10_display_config_changed,
-       .notify_smc_dispaly_config = navi10_notify_smc_dispaly_config,
+       .notify_smc_display_config = navi10_notify_smc_display_config,
        .force_dpm_limit_value = navi10_force_dpm_limit_value,
        .unforce_dpm_levels = navi10_unforce_dpm_levels,
        .is_dpm_running = navi10_is_dpm_running,
@@ -2080,6 +2147,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
        .check_fw_version = smu_v11_0_check_fw_version,
        .write_pptable = smu_v11_0_write_pptable,
        .set_min_dcef_deep_sleep = smu_v11_0_set_min_dcef_deep_sleep,
+       .set_driver_table_location = smu_v11_0_set_driver_table_location,
        .set_tool_table_location = smu_v11_0_set_tool_table_location,
        .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,
        .system_features_control = smu_v11_0_system_features_control,
index ec03c79..2abb4ba 100644 (file)
 #define NAVI10_PEAK_SCLK_XT            (1755)
 #define NAVI10_PEAK_SCLK_XL            (1625)
 
+#define NAVI10_UMD_PSTATE_PROFILING_GFXCLK    (1300)
+#define NAVI10_UMD_PSTATE_PROFILING_SOCCLK    (980)
+#define NAVI10_UMD_PSTATE_PROFILING_MEMCLK    (625)
+#define NAVI10_UMD_PSTATE_PROFILING_VCLK      (980)
+#define NAVI10_UMD_PSTATE_PROFILING_DCLK      (850)
+
 #define NAVI14_UMD_PSTATE_PEAK_XT_GFXCLK      (1670)
 #define NAVI14_UMD_PSTATE_PEAK_XTM_GFXCLK     (1448)
 #define NAVI14_UMD_PSTATE_PEAK_XLM_GFXCLK     (1181)
 #define NAVI14_UMD_PSTATE_PEAK_XTX_GFXCLK     (1717)
 #define NAVI14_UMD_PSTATE_PEAK_XL_GFXCLK      (1448)
 
+#define NAVI14_UMD_PSTATE_PROFILING_GFXCLK    (1200)
+#define NAVI14_UMD_PSTATE_PROFILING_SOCCLK    (900)
+#define NAVI14_UMD_PSTATE_PROFILING_MEMCLK    (600)
+#define NAVI14_UMD_PSTATE_PROFILING_VCLK      (900)
+#define NAVI14_UMD_PSTATE_PROFILING_DCLK      (800)
+
+#define NAVI12_UMD_PSTATE_PEAK_GFXCLK     (1100)
+
 #define NAVI10_VOLTAGE_SCALE (4)
 
 #define smnPCIE_LC_SPEED_CNTL                  0x11140290
index 89a54f8..861e641 100644 (file)
@@ -171,17 +171,20 @@ static int renoir_get_metrics_table(struct smu_context *smu,
        struct smu_table_context *smu_table= &smu->smu_table;
        int ret = 0;
 
+       mutex_lock(&smu->metrics_lock);
        if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + msecs_to_jiffies(100))) {
                ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0,
                                (void *)smu_table->metrics_table, false);
                if (ret) {
                        pr_info("Failed to export SMU metrics table!\n");
+                       mutex_unlock(&smu->metrics_lock);
                        return ret;
                }
                smu_table->metrics_time = jiffies;
        }
 
        memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t));
+       mutex_unlock(&smu->metrics_lock);
 
        return ret;
 }
@@ -206,6 +209,10 @@ static int renoir_tables_init(struct smu_context *smu, struct smu_table *tables)
                return -ENOMEM;
        smu_table->metrics_time = 0;
 
+       smu_table->watermarks_table = kzalloc(sizeof(Watermarks_t), GFP_KERNEL);
+       if (!smu_table->watermarks_table)
+               return -ENOMEM;
+
        return 0;
 }
 
@@ -239,8 +246,7 @@ static int renoir_print_clk_levels(struct smu_context *smu,
 
        memset(&metrics, 0, sizeof(metrics));
 
-       ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0,
-                              (void *)&metrics, false);
+       ret = renoir_get_metrics_table(smu, &metrics);
        if (ret)
                return ret;
 
@@ -706,19 +712,43 @@ static int renoir_set_peak_clock_by_device(struct smu_context *smu)
        return ret;
 }
 
-static int renoir_set_performance_level(struct smu_context *smu, enum amd_dpm_forced_level level)
+static int renoir_set_performance_level(struct smu_context *smu,
+                                       enum amd_dpm_forced_level level)
 {
        int ret = 0;
+       uint32_t sclk_mask, mclk_mask, soc_mask;
 
        switch (level) {
+       case AMD_DPM_FORCED_LEVEL_HIGH:
+               ret = smu_force_dpm_limit_value(smu, true);
+               break;
+       case AMD_DPM_FORCED_LEVEL_LOW:
+               ret = smu_force_dpm_limit_value(smu, false);
+               break;
+       case AMD_DPM_FORCED_LEVEL_AUTO:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
+               ret = smu_unforce_dpm_levels(smu);
+               break;
+       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
+               ret = smu_get_profiling_clk_mask(smu, level,
+                                                &sclk_mask,
+                                                &mclk_mask,
+                                                &soc_mask);
+               if (ret)
+                       return ret;
+               smu_force_clk_levels(smu, SMU_SCLK, 1 << sclk_mask, false);
+               smu_force_clk_levels(smu, SMU_MCLK, 1 << mclk_mask, false);
+               smu_force_clk_levels(smu, SMU_SOCCLK, 1 << soc_mask, false);
+               break;
        case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
                ret = renoir_set_peak_clock_by_device(smu);
                break;
+       case AMD_DPM_FORCED_LEVEL_MANUAL:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
        default:
-               ret = -EINVAL;
                break;
        }
-
        return ret;
 }
 
@@ -777,9 +807,17 @@ static int renoir_set_watermarks_table(
        }
 
        /* pass data to smu controller */
-       ret = smu_write_watermarks_table(smu);
+       if ((smu->watermarks_bitmap & WATERMARKS_EXIST) &&
+                       !(smu->watermarks_bitmap & WATERMARKS_LOADED)) {
+               ret = smu_write_watermarks_table(smu);
+               if (ret) {
+                       pr_err("Failed to update WMTABLE!\n");
+                       return ret;
+               }
+               smu->watermarks_bitmap |= WATERMARKS_LOADED;
+       }
 
-       return ret;
+       return 0;
 }
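The reworked `renoir_set_watermarks_table` gates the expensive upload on two flags: the table must exist and must not already be loaded, and LOADED is latched after a successful write so repeat calls become no-ops. A minimal sketch of that latch (bit positions and the helper name are illustrative, not the driver's definitions):

```c
#include <assert.h>

#define WM_EXIST  (1u << 0)	/* illustrative stand-ins for the   */
#define WM_LOADED (1u << 1)	/* driver's WATERMARKS_* flags      */

/* Upload once: only when a table exists and has not been pushed yet,
 * then latch LOADED so later calls skip the write. */
static int maybe_write_watermarks(unsigned *bitmap, int *writes)
{
	if ((*bitmap & WM_EXIST) && !(*bitmap & WM_LOADED)) {
		(*writes)++;	/* stands in for smu_write_watermarks_table() */
		*bitmap |= WM_LOADED;
	}
	return 0;
}
```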
 
 static int renoir_get_power_profile_mode(struct smu_context *smu,
@@ -882,6 +920,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
        .get_dpm_ultimate_freq = smu_v12_0_get_dpm_ultimate_freq,
        .mode2_reset = smu_v12_0_mode2_reset,
        .set_soft_freq_limited_range = smu_v12_0_set_soft_freq_limited_range,
+       .set_driver_table_location = smu_v12_0_set_driver_table_location,
 };
 
 void renoir_set_ppt_funcs(struct smu_context *smu)
index 60ce1fc..783319e 100644 (file)
@@ -61,6 +61,8 @@
        ((smu)->ppt_funcs->write_pptable ? (smu)->ppt_funcs->write_pptable((smu)) : 0)
 #define smu_set_min_dcef_deep_sleep(smu) \
        ((smu)->ppt_funcs->set_min_dcef_deep_sleep ? (smu)->ppt_funcs->set_min_dcef_deep_sleep((smu)) : 0)
+#define smu_set_driver_table_location(smu) \
+       ((smu)->ppt_funcs->set_driver_table_location ? (smu)->ppt_funcs->set_driver_table_location((smu)) : 0)
 #define smu_set_tool_table_location(smu) \
        ((smu)->ppt_funcs->set_tool_table_location ? (smu)->ppt_funcs->set_tool_table_location((smu)) : 0)
 #define smu_notify_memory_pool_location(smu) \
@@ -129,8 +131,8 @@ int smu_send_smc_msg(struct smu_context *smu, enum smu_message_type msg);
        ((smu)->ppt_funcs->display_config_changed ? (smu)->ppt_funcs->display_config_changed((smu)) : 0)
 #define smu_apply_clocks_adjust_rules(smu) \
        ((smu)->ppt_funcs->apply_clocks_adjust_rules ? (smu)->ppt_funcs->apply_clocks_adjust_rules((smu)) : 0)
-#define smu_notify_smc_dispaly_config(smu) \
-       ((smu)->ppt_funcs->notify_smc_dispaly_config ? (smu)->ppt_funcs->notify_smc_dispaly_config((smu)) : 0)
+#define smu_notify_smc_display_config(smu) \
+       ((smu)->ppt_funcs->notify_smc_display_config ? (smu)->ppt_funcs->notify_smc_display_config((smu)) : 0)
 #define smu_force_dpm_limit_value(smu, highest) \
        ((smu)->ppt_funcs->force_dpm_limit_value ? (smu)->ppt_funcs->force_dpm_limit_value((smu), (highest)) : 0)
 #define smu_unforce_dpm_levels(smu) \
index 7781d24..e804f98 100644 (file)
@@ -450,8 +450,10 @@ int smu_v11_0_fini_smc_tables(struct smu_context *smu)
 
        kfree(smu_table->tables);
        kfree(smu_table->metrics_table);
+       kfree(smu_table->watermarks_table);
        smu_table->tables = NULL;
        smu_table->metrics_table = NULL;
+       smu_table->watermarks_table = NULL;
        smu_table->metrics_time = 0;
 
        ret = smu_v11_0_fini_dpm_context(smu);
@@ -774,6 +776,24 @@ int smu_v11_0_set_min_dcef_deep_sleep(struct smu_context *smu)
        return smu_v11_0_set_deep_sleep_dcefclk(smu, table_context->boot_values.dcefclk / 100);
 }
 
+int smu_v11_0_set_driver_table_location(struct smu_context *smu)
+{
+       struct smu_table *driver_table = &smu->smu_table.driver_table;
+       int ret = 0;
+
+       if (driver_table->mc_address) {
+               ret = smu_send_smc_msg_with_param(smu,
+                               SMU_MSG_SetDriverDramAddrHigh,
+                               upper_32_bits(driver_table->mc_address));
+               if (!ret)
+                       ret = smu_send_smc_msg_with_param(smu,
+                               SMU_MSG_SetDriverDramAddrLow,
+                               lower_32_bits(driver_table->mc_address));
+       }
+
+       return ret;
+}
+
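`smu_v11_0_set_driver_table_location` hands the 64-bit DMA address of the driver table to the firmware as two 32-bit message parameters. A small sketch of the split and reassembly (`upper32`/`lower32` mirror the kernel's `upper_32_bits()`/`lower_32_bits()` helpers; `join64` plays the firmware side):

```c
#include <assert.h>
#include <stdint.h>

/* Equivalents of the kernel's upper_32_bits()/lower_32_bits(). */
static uint32_t upper32(uint64_t v) { return (uint32_t)(v >> 32); }
static uint32_t lower32(uint64_t v) { return (uint32_t)v; }

/* The receiving side reassembles the two message parameters. */
static uint64_t join64(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```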
 int smu_v11_0_set_tool_table_location(struct smu_context *smu)
 {
        int ret = 0;
@@ -835,27 +855,33 @@ int smu_v11_0_get_enabled_mask(struct smu_context *smu,
                                      uint32_t *feature_mask, uint32_t num)
 {
        uint32_t feature_mask_high = 0, feature_mask_low = 0;
+       struct smu_feature *feature = &smu->smu_feature;
        int ret = 0;
 
        if (!feature_mask || num < 2)
                return -EINVAL;
 
-       ret = smu_send_smc_msg(smu, SMU_MSG_GetEnabledSmuFeaturesHigh);
-       if (ret)
-               return ret;
-       ret = smu_read_smc_arg(smu, &feature_mask_high);
-       if (ret)
-               return ret;
+       if (bitmap_empty(feature->enabled, feature->feature_num)) {
+               ret = smu_send_smc_msg(smu, SMU_MSG_GetEnabledSmuFeaturesHigh);
+               if (ret)
+                       return ret;
+               ret = smu_read_smc_arg(smu, &feature_mask_high);
+               if (ret)
+                       return ret;
 
-       ret = smu_send_smc_msg(smu, SMU_MSG_GetEnabledSmuFeaturesLow);
-       if (ret)
-               return ret;
-       ret = smu_read_smc_arg(smu, &feature_mask_low);
-       if (ret)
-               return ret;
+               ret = smu_send_smc_msg(smu, SMU_MSG_GetEnabledSmuFeaturesLow);
+               if (ret)
+                       return ret;
+               ret = smu_read_smc_arg(smu, &feature_mask_low);
+               if (ret)
+                       return ret;
 
-       feature_mask[0] = feature_mask_low;
-       feature_mask[1] = feature_mask_high;
+               feature_mask[0] = feature_mask_low;
+               feature_mask[1] = feature_mask_high;
+       } else {
+               bitmap_copy((unsigned long *)feature_mask, feature->enabled,
+                            feature->feature_num);
+       }
 
        return ret;
 }
@@ -867,21 +893,24 @@ int smu_v11_0_system_features_control(struct smu_context *smu,
        uint32_t feature_mask[2];
        int ret = 0;
 
-       if (smu->pm_enabled) {
-               ret = smu_send_smc_msg(smu, (en ? SMU_MSG_EnableAllSmuFeatures :
-                                            SMU_MSG_DisableAllSmuFeatures));
-               if (ret)
-                       return ret;
-       }
-
-       ret = smu_feature_get_enabled_mask(smu, feature_mask, 2);
+       ret = smu_send_smc_msg(smu, (en ? SMU_MSG_EnableAllSmuFeatures :
+                                    SMU_MSG_DisableAllSmuFeatures));
        if (ret)
                return ret;
 
-       bitmap_copy(feature->enabled, (unsigned long *)&feature_mask,
-                   feature->feature_num);
-       bitmap_copy(feature->supported, (unsigned long *)&feature_mask,
-                   feature->feature_num);
+       if (en) {
+               ret = smu_feature_get_enabled_mask(smu, feature_mask, 2);
+               if (ret)
+                       return ret;
+
+               bitmap_copy(feature->enabled, (unsigned long *)&feature_mask,
+                           feature->feature_num);
+               bitmap_copy(feature->supported, (unsigned long *)&feature_mask,
+                           feature->feature_num);
+       } else {
+               bitmap_zero(feature->enabled, feature->feature_num);
+               bitmap_zero(feature->supported, feature->feature_num);
+       }
 
        return ret;
 }
@@ -1860,3 +1889,42 @@ int smu_v11_0_set_default_od_settings(struct smu_context *smu, bool initialize,
        }
        return ret;
 }
+
+int smu_v11_0_set_performance_level(struct smu_context *smu,
+                                   enum amd_dpm_forced_level level)
+{
+       int ret = 0;
+       uint32_t sclk_mask, mclk_mask, soc_mask;
+
+       switch (level) {
+       case AMD_DPM_FORCED_LEVEL_HIGH:
+               ret = smu_force_dpm_limit_value(smu, true);
+               break;
+       case AMD_DPM_FORCED_LEVEL_LOW:
+               ret = smu_force_dpm_limit_value(smu, false);
+               break;
+       case AMD_DPM_FORCED_LEVEL_AUTO:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
+               ret = smu_unforce_dpm_levels(smu);
+               break;
+       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
+               ret = smu_get_profiling_clk_mask(smu, level,
+                                                &sclk_mask,
+                                                &mclk_mask,
+                                                &soc_mask);
+               if (ret)
+                       return ret;
+               smu_force_clk_levels(smu, SMU_SCLK, 1 << sclk_mask, false);
+               smu_force_clk_levels(smu, SMU_MCLK, 1 << mclk_mask, false);
+               smu_force_clk_levels(smu, SMU_SOCCLK, 1 << soc_mask, false);
+               break;
+       case AMD_DPM_FORCED_LEVEL_MANUAL:
+       case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
+       default:
+               break;
+       }
+       return ret;
+}
+
index 2ac7f2f..870e6db 100644 (file)
@@ -159,7 +159,7 @@ int smu_v12_0_check_fw_version(struct smu_context *smu)
 
 int smu_v12_0_powergate_sdma(struct smu_context *smu, bool gate)
 {
-       if (!(smu->adev->flags & AMD_IS_APU))
+       if (!smu->is_apu)
                return 0;
 
        if (gate)
@@ -170,7 +170,7 @@ int smu_v12_0_powergate_sdma(struct smu_context *smu, bool gate)
 
 int smu_v12_0_powergate_vcn(struct smu_context *smu, bool gate)
 {
-       if (!(smu->adev->flags & AMD_IS_APU))
+       if (!smu->is_apu)
                return 0;
 
        if (gate)
@@ -181,7 +181,7 @@ int smu_v12_0_powergate_vcn(struct smu_context *smu, bool gate)
 
 int smu_v12_0_powergate_jpeg(struct smu_context *smu, bool gate)
 {
-       if (!(smu->adev->flags & AMD_IS_APU))
+       if (!smu->is_apu)
                return 0;
 
        if (gate)
@@ -318,14 +318,6 @@ int smu_v12_0_fini_smc_tables(struct smu_context *smu)
 int smu_v12_0_populate_smc_tables(struct smu_context *smu)
 {
        struct smu_table_context *smu_table = &smu->smu_table;
-       struct smu_table *table = NULL;
-
-       table = &smu_table->tables[SMU_TABLE_DPMCLOCKS];
-       if (!table)
-               return -EINVAL;
-
-       if (!table->cpu_addr)
-               return -EINVAL;
 
        return smu_update_table(smu, SMU_TABLE_DPMCLOCKS, 0, smu_table->clocks_table, false);
 }
@@ -514,3 +506,21 @@ int smu_v12_0_set_soft_freq_limited_range(struct smu_context *smu, enum smu_clk_
 
        return ret;
 }
+
+int smu_v12_0_set_driver_table_location(struct smu_context *smu)
+{
+       struct smu_table *driver_table = &smu->smu_table.driver_table;
+       int ret = 0;
+
+       if (driver_table->mc_address) {
+               ret = smu_send_smc_msg_with_param(smu,
+                               SMU_MSG_SetDriverDramAddrHigh,
+                               upper_32_bits(driver_table->mc_address));
+               if (!ret)
+                       ret = smu_send_smc_msg_with_param(smu,
+                               SMU_MSG_SetDriverDramAddrLow,
+                               lower_32_bits(driver_table->mc_address));
+       }
+
+       return ret;
+}
index aa0ee2b..2319400 100644
@@ -137,7 +137,7 @@ static int smu10_copy_table_from_smc(struct pp_hwmgr *hwmgr,
                        priv->smu_tables.entry[table_id].table_id);
 
        /* flush hdp cache */
-       adev->nbio.funcs->hdp_flush(adev, NULL);
+       amdgpu_asic_flush_hdp(adev, NULL);
 
        memcpy(table, (uint8_t *)priv->smu_tables.entry[table_id].table,
                        priv->smu_tables.entry[table_id].size);
@@ -150,6 +150,7 @@ static int smu10_copy_table_to_smc(struct pp_hwmgr *hwmgr,
 {
        struct smu10_smumgr *priv =
                        (struct smu10_smumgr *)(hwmgr->smu_backend);
+       struct amdgpu_device *adev = hwmgr->adev;
 
        PP_ASSERT_WITH_CODE(table_id < MAX_SMU_TABLE,
                        "Invalid SMU Table ID!", return -EINVAL;);
@@ -161,6 +162,8 @@ static int smu10_copy_table_to_smc(struct pp_hwmgr *hwmgr,
        memcpy(priv->smu_tables.entry[table_id].table, table,
                        priv->smu_tables.entry[table_id].size);
 
+       amdgpu_asic_flush_hdp(adev, NULL);
+
        smu10_send_msg_to_smc_with_parameter(hwmgr,
                        PPSMC_MSG_SetDriverDramAddrHigh,
                        upper_32_bits(priv->smu_tables.entry[table_id].mc_addr));
index 39427ca..7155640 100644
@@ -58,7 +58,7 @@ static int vega10_copy_table_from_smc(struct pp_hwmgr *hwmgr,
                        priv->smu_tables.entry[table_id].table_id);
 
        /* flush hdp cache */
-       adev->nbio.funcs->hdp_flush(adev, NULL);
+       amdgpu_asic_flush_hdp(adev, NULL);
 
        memcpy(table, priv->smu_tables.entry[table_id].table,
                        priv->smu_tables.entry[table_id].size);
@@ -70,6 +70,7 @@ static int vega10_copy_table_to_smc(struct pp_hwmgr *hwmgr,
                uint8_t *table, int16_t table_id)
 {
        struct vega10_smumgr *priv = hwmgr->smu_backend;
+       struct amdgpu_device *adev = hwmgr->adev;
 
        /* under sriov, vbios or hypervisor driver
         * has already copy table to smc so here only skip it
@@ -87,6 +88,8 @@ static int vega10_copy_table_to_smc(struct pp_hwmgr *hwmgr,
        memcpy(priv->smu_tables.entry[table_id].table, table,
                        priv->smu_tables.entry[table_id].size);
 
+       amdgpu_asic_flush_hdp(adev, NULL);
+
        smu9_send_msg_to_smc_with_parameter(hwmgr,
                        PPSMC_MSG_SetDriverDramAddrHigh,
                        upper_32_bits(priv->smu_tables.entry[table_id].mc_addr));
index 90c782c..a3915bf 100644
@@ -66,7 +66,7 @@ static int vega12_copy_table_from_smc(struct pp_hwmgr *hwmgr,
                        return -EINVAL);
 
        /* flush hdp cache */
-       adev->nbio.funcs->hdp_flush(adev, NULL);
+       amdgpu_asic_flush_hdp(adev, NULL);
 
        memcpy(table, priv->smu_tables.entry[table_id].table,
                        priv->smu_tables.entry[table_id].size);
@@ -84,6 +84,7 @@ static int vega12_copy_table_to_smc(struct pp_hwmgr *hwmgr,
 {
        struct vega12_smumgr *priv =
                        (struct vega12_smumgr *)(hwmgr->smu_backend);
+       struct amdgpu_device *adev = hwmgr->adev;
 
        PP_ASSERT_WITH_CODE(table_id < TABLE_COUNT,
                        "Invalid SMU Table ID!", return -EINVAL);
@@ -95,6 +96,8 @@ static int vega12_copy_table_to_smc(struct pp_hwmgr *hwmgr,
        memcpy(priv->smu_tables.entry[table_id].table, table,
                        priv->smu_tables.entry[table_id].size);
 
+       amdgpu_asic_flush_hdp(adev, NULL);
+
        PP_ASSERT_WITH_CODE(smu9_send_msg_to_smc_with_parameter(hwmgr,
                        PPSMC_MSG_SetDriverDramAddrHigh,
                        upper_32_bits(priv->smu_tables.entry[table_id].mc_addr)) == 0,
index f604612..0db57fb 100644
@@ -189,7 +189,7 @@ static int vega20_copy_table_from_smc(struct pp_hwmgr *hwmgr,
                        return ret);
 
        /* flush hdp cache */
-       adev->nbio.funcs->hdp_flush(adev, NULL);
+       amdgpu_asic_flush_hdp(adev, NULL);
 
        memcpy(table, priv->smu_tables.entry[table_id].table,
                        priv->smu_tables.entry[table_id].size);
@@ -207,6 +207,7 @@ static int vega20_copy_table_to_smc(struct pp_hwmgr *hwmgr,
 {
        struct vega20_smumgr *priv =
                        (struct vega20_smumgr *)(hwmgr->smu_backend);
+       struct amdgpu_device *adev = hwmgr->adev;
        int ret = 0;
 
        PP_ASSERT_WITH_CODE(table_id < TABLE_COUNT,
@@ -219,6 +220,8 @@ static int vega20_copy_table_to_smc(struct pp_hwmgr *hwmgr,
        memcpy(priv->smu_tables.entry[table_id].table, table,
                        priv->smu_tables.entry[table_id].size);
 
+       amdgpu_asic_flush_hdp(adev, NULL);
+
        PP_ASSERT_WITH_CODE((ret = vega20_send_msg_to_smc_with_parameter(hwmgr,
                        PPSMC_MSG_SetDriverDramAddrHigh,
                        upper_32_bits(priv->smu_tables.entry[table_id].mc_addr))) == 0,
@@ -242,11 +245,14 @@ int vega20_set_activity_monitor_coeff(struct pp_hwmgr *hwmgr,
 {
        struct vega20_smumgr *priv =
                        (struct vega20_smumgr *)(hwmgr->smu_backend);
+       struct amdgpu_device *adev = hwmgr->adev;
        int ret = 0;
 
        memcpy(priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].table, table,
                        priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].size);
 
+       amdgpu_asic_flush_hdp(adev, NULL);
+
        PP_ASSERT_WITH_CODE((ret = vega20_send_msg_to_smc_with_parameter(hwmgr,
                        PPSMC_MSG_SetDriverDramAddrHigh,
                        upper_32_bits(priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].mc_addr))) == 0,
@@ -290,7 +296,7 @@ int vega20_get_activity_monitor_coeff(struct pp_hwmgr *hwmgr,
                        return ret);
 
        /* flush hdp cache */
-       adev->nbio.funcs->hdp_flush(adev, NULL);
+       amdgpu_asic_flush_hdp(adev, NULL);
 
        memcpy(table, priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].table,
                        priv->smu_tables.entry[TABLE_ACTIVITY_MONITOR_COEFF].size);
index 12bcc3e..38febd5 100644
@@ -338,6 +338,10 @@ static int vega20_tables_init(struct smu_context *smu, struct smu_table *tables)
                return -ENOMEM;
        smu_table->metrics_time = 0;
 
+       smu_table->watermarks_table = kzalloc(sizeof(Watermarks_t), GFP_KERNEL);
+       if (!smu_table->watermarks_table)
+               return -ENOMEM;
+
        return 0;
 }
 
@@ -1678,17 +1682,20 @@ static int vega20_get_metrics_table(struct smu_context *smu,
        struct smu_table_context *smu_table= &smu->smu_table;
        int ret = 0;
 
+       mutex_lock(&smu->metrics_lock);
        if (!smu_table->metrics_time || time_after(jiffies, smu_table->metrics_time + HZ / 1000)) {
                ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, 0,
                                (void *)smu_table->metrics_table, false);
                if (ret) {
                        pr_info("Failed to export SMU metrics table!\n");
+                       mutex_unlock(&smu->metrics_lock);
                        return ret;
                }
                smu_table->metrics_time = jiffies;
        }
 
        memcpy(metrics_table, smu_table->metrics_table, sizeof(SmuMetrics_t));
+       mutex_unlock(&smu->metrics_lock);
 
        return ret;
 }
@@ -2232,7 +2239,7 @@ static int vega20_apply_clocks_adjust_rules(struct smu_context *smu)
 }
 
 static int
-vega20_notify_smc_dispaly_config(struct smu_context *smu)
+vega20_notify_smc_display_config(struct smu_context *smu)
 {
        struct vega20_dpm_table *dpm_table = smu->smu_dpm.dpm_context;
        struct vega20_single_dpm_table *memtable = &dpm_table->mem_table;
@@ -3191,6 +3198,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
        .get_od_percentage = vega20_get_od_percentage,
        .get_power_profile_mode = vega20_get_power_profile_mode,
        .set_power_profile_mode = vega20_set_power_profile_mode,
+       .set_performance_level = smu_v11_0_set_performance_level,
        .set_od_percentage = vega20_set_od_percentage,
        .set_default_od_settings = vega20_set_default_od_settings,
        .od_edit_dpm_table = vega20_odn_edit_dpm_table,
@@ -3200,7 +3208,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
        .pre_display_config_changed = vega20_pre_display_config_changed,
        .display_config_changed = vega20_display_config_changed,
        .apply_clocks_adjust_rules = vega20_apply_clocks_adjust_rules,
-       .notify_smc_dispaly_config = vega20_notify_smc_dispaly_config,
+       .notify_smc_display_config = vega20_notify_smc_display_config,
        .force_dpm_limit_value = vega20_force_dpm_limit_value,
        .unforce_dpm_levels = vega20_unforce_dpm_levels,
        .get_profiling_clk_mask = vega20_get_profiling_clk_mask,
@@ -3228,6 +3236,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
        .check_fw_version = smu_v11_0_check_fw_version,
        .write_pptable = smu_v11_0_write_pptable,
        .set_min_dcef_deep_sleep = smu_v11_0_set_min_dcef_deep_sleep,
+       .set_driver_table_location = smu_v11_0_set_driver_table_location,
        .set_tool_table_location = smu_v11_0_set_tool_table_location,
        .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,
        .system_features_control = smu_v11_0_system_features_control,
index 6fab719..6effe53 100644
@@ -1289,21 +1289,19 @@ struct drm_crtc *analogix_dp_get_new_crtc(struct analogix_dp_device *dp,
        return conn_state->crtc;
 }
 
-static void
-analogix_dp_bridge_atomic_pre_enable(struct drm_bridge *bridge,
-                                    struct drm_bridge_state *old_bridge_state)
+static void analogix_dp_bridge_atomic_pre_enable(struct drm_bridge *bridge,
+                                                struct drm_atomic_state *state)
 {
-       struct drm_atomic_state *old_state = old_bridge_state->base.state;
        struct analogix_dp_device *dp = bridge->driver_private;
        struct drm_crtc *crtc;
        struct drm_crtc_state *old_crtc_state;
        int ret;
 
-       crtc = analogix_dp_get_new_crtc(dp, old_state);
+       crtc = analogix_dp_get_new_crtc(dp, state);
        if (!crtc)
                return;
 
-       old_crtc_state = drm_atomic_get_old_crtc_state(old_state, crtc);
+       old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
        /* Don't touch the panel if we're coming back from PSR */
        if (old_crtc_state && old_crtc_state->self_refresh_active)
                return;
@@ -1368,22 +1366,20 @@ out_dp_clk_pre:
        return ret;
 }
 
-static void
-analogix_dp_bridge_atomic_enable(struct drm_bridge *bridge,
-                                struct drm_bridge_state *old_bridge_state)
+static void analogix_dp_bridge_atomic_enable(struct drm_bridge *bridge,
+                                            struct drm_atomic_state *state)
 {
-       struct drm_atomic_state *old_state = old_bridge_state->base.state;
        struct analogix_dp_device *dp = bridge->driver_private;
        struct drm_crtc *crtc;
        struct drm_crtc_state *old_crtc_state;
        int timeout_loop = 0;
        int ret;
 
-       crtc = analogix_dp_get_new_crtc(dp, old_state);
+       crtc = analogix_dp_get_new_crtc(dp, state);
        if (!crtc)
                return;
 
-       old_crtc_state = drm_atomic_get_old_crtc_state(old_state, crtc);
+       old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
        /* Not a full enable, just disable PSR and continue */
        if (old_crtc_state && old_crtc_state->self_refresh_active) {
                ret = analogix_dp_disable_psr(dp);
@@ -1444,20 +1440,18 @@ static void analogix_dp_bridge_disable(struct drm_bridge *bridge)
        dp->dpms_mode = DRM_MODE_DPMS_OFF;
 }
 
-static void
-analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
-                                 struct drm_bridge_state *old_bridge_state)
+static void analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
+                                             struct drm_atomic_state *state)
 {
-       struct drm_atomic_state *old_state = old_bridge_state->base.state;
        struct analogix_dp_device *dp = bridge->driver_private;
        struct drm_crtc *crtc;
        struct drm_crtc_state *new_crtc_state = NULL;
 
-       crtc = analogix_dp_get_new_crtc(dp, old_state);
+       crtc = analogix_dp_get_new_crtc(dp, state);
        if (!crtc)
                goto out;
 
-       new_crtc_state = drm_atomic_get_new_crtc_state(old_state, crtc);
+       new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
        if (!new_crtc_state)
                goto out;
 
@@ -1469,21 +1463,20 @@ out:
        analogix_dp_bridge_disable(bridge);
 }
 
-static void
-analogix_dp_bridge_atomic_post_disable(struct drm_bridge *bridge,
-                               struct drm_bridge_state *old_bridge_state)
+static
+void analogix_dp_bridge_atomic_post_disable(struct drm_bridge *bridge,
+                                           struct drm_atomic_state *state)
 {
-       struct drm_atomic_state *old_state = old_bridge_state->base.state;
        struct analogix_dp_device *dp = bridge->driver_private;
        struct drm_crtc *crtc;
        struct drm_crtc_state *new_crtc_state;
        int ret;
 
-       crtc = analogix_dp_get_new_crtc(dp, old_state);
+       crtc = analogix_dp_get_new_crtc(dp, state);
        if (!crtc)
                return;
 
-       new_crtc_state = drm_atomic_get_new_crtc_state(old_state, crtc);
+       new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
        if (!new_crtc_state || !new_crtc_state->self_refresh_active)
                return;
 
index bf1b9c3..d336915 100644
@@ -30,7 +30,6 @@
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_uapi.h>
-#include <drm/drm_bridge.h>
 #include <drm/drm_debugfs.h>
 #include <drm/drm_device.h>
 #include <drm/drm_drv.h>
@@ -1019,44 +1018,6 @@ static void drm_atomic_connector_print_state(struct drm_printer *p,
 }
 
 /**
- * drm_atomic_add_encoder_bridges - add bridges attached to an encoder
- * @state: atomic state
- * @encoder: DRM encoder
- *
- * This function adds all bridges attached to @encoder. This is needed to add
- * bridge states to @state and make them available when
- * &bridge_funcs.atomic_{check,pre_enable,enable,disable_post_disable}() are
- * called
- *
- * Returns:
- * 0 on success or can fail with -EDEADLK or -ENOMEM. When the error is EDEADLK
- * then the w/w mutex code has detected a deadlock and the entire atomic
- * sequence must be restarted. All other errors are fatal.
- */
-int
-drm_atomic_add_encoder_bridges(struct drm_atomic_state *state,
-                              struct drm_encoder *encoder)
-{
-       struct drm_bridge_state *bridge_state;
-       struct drm_bridge *bridge;
-
-       if (!encoder)
-               return 0;
-
-       DRM_DEBUG_ATOMIC("Adding all bridges for [encoder:%d:%s] to %p\n",
-                        encoder->base.id, encoder->name, state);
-
-       drm_for_each_bridge_in_chain(encoder, bridge) {
-               bridge_state = drm_atomic_get_bridge_state(state, bridge);
-               if (IS_ERR(bridge_state))
-                       return PTR_ERR(bridge_state);
-       }
-
-       return 0;
-}
-EXPORT_SYMBOL(drm_atomic_add_encoder_bridges);
-
-/**
  * drm_atomic_add_affected_connectors - add connectors for CRTC
  * @state: atomic state
  * @crtc: DRM CRTC
index afe14f7..4511c2e 100644
@@ -437,12 +437,12 @@ mode_fixup(struct drm_atomic_state *state)
                funcs = encoder->helper_private;
 
                bridge = drm_bridge_chain_get_first_bridge(encoder);
-               ret = drm_atomic_bridge_chain_check(bridge,
-                                                   new_crtc_state,
-                                                   new_conn_state);
-               if (ret) {
-                       DRM_DEBUG_ATOMIC("Bridge atomic check failed\n");
-                       return ret;
+               ret = drm_bridge_chain_mode_fixup(bridge,
+                                       &new_crtc_state->mode,
+                                       &new_crtc_state->adjusted_mode);
+               if (!ret) {
+                       DRM_DEBUG_ATOMIC("Bridge fixup failed\n");
+                       return -EINVAL;
                }
 
                if (funcs && funcs->atomic_check) {
@@ -730,26 +730,6 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
                        return ret;
        }
 
-       /*
-        * Iterate over all connectors again, and add all affected bridges to
-        * the state.
-        */
-       for_each_oldnew_connector_in_state(state, connector,
-                                          old_connector_state,
-                                          new_connector_state, i) {
-               struct drm_encoder *encoder;
-
-               encoder = old_connector_state->best_encoder;
-               ret = drm_atomic_add_encoder_bridges(state, encoder);
-               if (ret)
-                       return ret;
-
-               encoder = new_connector_state->best_encoder;
-               ret = drm_atomic_add_encoder_bridges(state, encoder);
-               if (ret)
-                       return ret;
-       }
-
        ret = mode_valid(state);
        if (ret)
                return ret;
index 3740060..c2cf0c9 100644
@@ -25,7 +25,6 @@
 #include <linux/module.h>
 #include <linux/mutex.h>
 
-#include <drm/drm_atomic_state_helper.h>
 #include <drm/drm_bridge.h>
 #include <drm/drm_encoder.h>
 
@@ -90,74 +89,6 @@ void drm_bridge_remove(struct drm_bridge *bridge)
 }
 EXPORT_SYMBOL(drm_bridge_remove);
 
-static struct drm_bridge_state *
-drm_atomic_default_bridge_duplicate_state(struct drm_bridge *bridge)
-{
-       struct drm_bridge_state *new;
-
-       if (WARN_ON(!bridge->base.state))
-               return NULL;
-
-       new = kzalloc(sizeof(*new), GFP_KERNEL);
-       if (new)
-               __drm_atomic_helper_bridge_duplicate_state(bridge, new);
-
-       return new;
-}
-
-static struct drm_private_state *
-drm_bridge_atomic_duplicate_priv_state(struct drm_private_obj *obj)
-{
-       struct drm_bridge *bridge = drm_priv_to_bridge(obj);
-       struct drm_bridge_state *state;
-
-       if (bridge->funcs->atomic_duplicate_state)
-               state = bridge->funcs->atomic_duplicate_state(bridge);
-       else
-               state = drm_atomic_default_bridge_duplicate_state(bridge);
-
-       return state ? &state->base : NULL;
-}
-
-static void
-drm_atomic_default_bridge_destroy_state(struct drm_bridge *bridge,
-                                       struct drm_bridge_state *state)
-{
-       /* Just a simple kfree() for now */
-       kfree(state);
-}
-
-static void
-drm_bridge_atomic_destroy_priv_state(struct drm_private_obj *obj,
-                                    struct drm_private_state *s)
-{
-       struct drm_bridge_state *state = drm_priv_to_bridge_state(s);
-       struct drm_bridge *bridge = drm_priv_to_bridge(obj);
-
-       if (bridge->funcs->atomic_destroy_state)
-               bridge->funcs->atomic_destroy_state(bridge, state);
-       else
-               drm_atomic_default_bridge_destroy_state(bridge, state);
-}
-
-static const struct drm_private_state_funcs drm_bridge_priv_state_funcs = {
-       .atomic_duplicate_state = drm_bridge_atomic_duplicate_priv_state,
-       .atomic_destroy_state = drm_bridge_atomic_destroy_priv_state,
-};
-
-static struct drm_bridge_state *
-drm_atomic_default_bridge_reset(struct drm_bridge *bridge)
-{
-       struct drm_bridge_state *bridge_state;
-
-       bridge_state = kzalloc(sizeof(*bridge_state), GFP_KERNEL);
-       if (!bridge_state)
-               return ERR_PTR(-ENOMEM);
-
-       __drm_atomic_helper_bridge_reset(bridge, bridge_state);
-       return bridge_state;
-}
-
 /**
  * drm_bridge_attach - attach the bridge to an encoder's chain
  *
@@ -183,7 +114,6 @@ drm_atomic_default_bridge_reset(struct drm_bridge *bridge)
 int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
                      struct drm_bridge *previous)
 {
-       struct drm_bridge_state *state;
        int ret;
 
        if (!encoder || !bridge)
@@ -205,35 +135,15 @@ int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
 
        if (bridge->funcs->attach) {
                ret = bridge->funcs->attach(bridge);
-               if (ret < 0)
-                       goto err_reset_bridge;
-       }
-
-       if (bridge->funcs->atomic_reset)
-               state = bridge->funcs->atomic_reset(bridge);
-       else
-               state = drm_atomic_default_bridge_reset(bridge);
-
-       if (IS_ERR(state)) {
-               ret = PTR_ERR(state);
-               goto err_detach_bridge;
+               if (ret < 0) {
+                       list_del(&bridge->chain_node);
+                       bridge->dev = NULL;
+                       bridge->encoder = NULL;
+                       return ret;
+               }
        }
 
-       drm_atomic_private_obj_init(bridge->dev, &bridge->base,
-                                   &state->base,
-                                   &drm_bridge_priv_state_funcs);
-
        return 0;
-
-err_detach_bridge:
-       if (bridge->funcs->detach)
-               bridge->funcs->detach(bridge);
-
-err_reset_bridge:
-       bridge->dev = NULL;
-       bridge->encoder = NULL;
-       list_del(&bridge->chain_node);
-       return ret;
 }
 EXPORT_SYMBOL(drm_bridge_attach);
 
@@ -245,8 +155,6 @@ void drm_bridge_detach(struct drm_bridge *bridge)
        if (WARN_ON(!bridge->dev))
                return;
 
-       drm_atomic_private_obj_fini(&bridge->base);
-
        if (bridge->funcs->detach)
                bridge->funcs->detach(bridge);
 
@@ -501,19 +409,10 @@ void drm_atomic_bridge_chain_disable(struct drm_bridge *bridge,
 
        encoder = bridge->encoder;
        list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
-               if (iter->funcs->atomic_disable) {
-                       struct drm_bridge_state *old_bridge_state;
-
-                       old_bridge_state =
-                               drm_atomic_get_old_bridge_state(old_state,
-                                                               iter);
-                       if (WARN_ON(!old_bridge_state))
-                               return;
-
-                       iter->funcs->atomic_disable(iter, old_bridge_state);
-               } else if (iter->funcs->disable) {
+               if (iter->funcs->atomic_disable)
+                       iter->funcs->atomic_disable(iter, old_state);
+               else if (iter->funcs->disable)
                        iter->funcs->disable(iter);
-               }
 
                if (iter == bridge)
                        break;
@@ -544,20 +443,10 @@ void drm_atomic_bridge_chain_post_disable(struct drm_bridge *bridge,
 
        encoder = bridge->encoder;
        list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
-               if (bridge->funcs->atomic_post_disable) {
-                       struct drm_bridge_state *old_bridge_state;
-
-                       old_bridge_state =
-                               drm_atomic_get_old_bridge_state(old_state,
-                                                               bridge);
-                       if (WARN_ON(!old_bridge_state))
-                               return;
-
-                       bridge->funcs->atomic_post_disable(bridge,
-                                                          old_bridge_state);
-               } else if (bridge->funcs->post_disable) {
+               if (bridge->funcs->atomic_post_disable)
+                       bridge->funcs->atomic_post_disable(bridge, old_state);
+               else if (bridge->funcs->post_disable)
                        bridge->funcs->post_disable(bridge);
-               }
        }
 }
 EXPORT_SYMBOL(drm_atomic_bridge_chain_post_disable);
@@ -586,19 +475,10 @@ void drm_atomic_bridge_chain_pre_enable(struct drm_bridge *bridge,
 
        encoder = bridge->encoder;
        list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
-               if (iter->funcs->atomic_pre_enable) {
-                       struct drm_bridge_state *old_bridge_state;
-
-                       old_bridge_state =
-                               drm_atomic_get_old_bridge_state(old_state,
-                                                               iter);
-                       if (WARN_ON(!old_bridge_state))
-                               return;
-
-                       iter->funcs->atomic_pre_enable(iter, old_bridge_state);
-               } else if (iter->funcs->pre_enable) {
+               if (iter->funcs->atomic_pre_enable)
+                       iter->funcs->atomic_pre_enable(iter, old_state);
+               else if (iter->funcs->pre_enable)
                        iter->funcs->pre_enable(iter);
-               }
 
                if (iter == bridge)
                        break;
@@ -628,385 +508,14 @@ void drm_atomic_bridge_chain_enable(struct drm_bridge *bridge,
 
        encoder = bridge->encoder;
        list_for_each_entry_from(bridge, &encoder->bridge_chain, chain_node) {
-               if (bridge->funcs->atomic_enable) {
-                       struct drm_bridge_state *old_bridge_state;
-
-                       old_bridge_state =
-                               drm_atomic_get_old_bridge_state(old_state,
-                                                               bridge);
-                       if (WARN_ON(!old_bridge_state))
-                               return;
-
-                       bridge->funcs->atomic_enable(bridge, old_bridge_state);
-               } else if (bridge->funcs->enable) {
+               if (bridge->funcs->atomic_enable)
+                       bridge->funcs->atomic_enable(bridge, old_state);
+               else if (bridge->funcs->enable)
                        bridge->funcs->enable(bridge);
-               }
        }
 }
 EXPORT_SYMBOL(drm_atomic_bridge_chain_enable);
 
-static int drm_atomic_bridge_check(struct drm_bridge *bridge,
-                                  struct drm_crtc_state *crtc_state,
-                                  struct drm_connector_state *conn_state)
-{
-       if (bridge->funcs->atomic_check) {
-               struct drm_bridge_state *bridge_state;
-               int ret;
-
-               bridge_state = drm_atomic_get_new_bridge_state(crtc_state->state,
-                                                              bridge);
-               if (WARN_ON(!bridge_state))
-                       return -EINVAL;
-
-               ret = bridge->funcs->atomic_check(bridge, bridge_state,
-                                                 crtc_state, conn_state);
-               if (ret)
-                       return ret;
-       } else if (bridge->funcs->mode_fixup) {
-               if (!bridge->funcs->mode_fixup(bridge, &crtc_state->mode,
-                                              &crtc_state->adjusted_mode))
-                       return -EINVAL;
-       }
-
-       return 0;
-}
-
-/**
- * drm_atomic_helper_bridge_propagate_bus_fmt() - Propagate output format to
- *                                               the input end of a bridge
- * @bridge: bridge control structure
- * @bridge_state: new bridge state
- * @crtc_state: new CRTC state
- * @conn_state: new connector state
- * @output_fmt: tested output bus format
- * @num_input_fmts: will contain the size of the returned array
- *
- * This helper is a pluggable implementation of the
- * &drm_bridge_funcs.atomic_get_input_bus_fmts operation for bridges that don't
- * modify the bus configuration between their input and their output. It
- * returns an array of input formats with a single element set to @output_fmt.
- *
- * RETURNS:
- * a valid format array of size @num_input_fmts, or NULL if the allocation
- * failed
- */
-u32 *
-drm_atomic_helper_bridge_propagate_bus_fmt(struct drm_bridge *bridge,
-                                       struct drm_bridge_state *bridge_state,
-                                       struct drm_crtc_state *crtc_state,
-                                       struct drm_connector_state *conn_state,
-                                       u32 output_fmt,
-                                       unsigned int *num_input_fmts)
-{
-       u32 *input_fmts;
-
-       input_fmts = kzalloc(sizeof(*input_fmts), GFP_KERNEL);
-       if (!input_fmts) {
-               *num_input_fmts = 0;
-               return NULL;
-       }
-
-       *num_input_fmts = 1;
-       input_fmts[0] = output_fmt;
-       return input_fmts;
-}
-EXPORT_SYMBOL(drm_atomic_helper_bridge_propagate_bus_fmt);
-
-static int select_bus_fmt_recursive(struct drm_bridge *first_bridge,
-                                   struct drm_bridge *cur_bridge,
-                                   struct drm_crtc_state *crtc_state,
-                                   struct drm_connector_state *conn_state,
-                                   u32 out_bus_fmt)
-{
-       struct drm_bridge_state *cur_state;
-       unsigned int num_in_bus_fmts, i;
-       struct drm_bridge *prev_bridge;
-       u32 *in_bus_fmts;
-       int ret;
-
-       prev_bridge = drm_bridge_get_prev_bridge(cur_bridge);
-       cur_state = drm_atomic_get_new_bridge_state(crtc_state->state,
-                                                   cur_bridge);
-       if (WARN_ON(!cur_state))
-               return -EINVAL;
-
-       /*
-        * If bus format negotiation is not supported by this bridge, let's
-        * pass MEDIA_BUS_FMT_FIXED to the previous bridge in the chain and
-        * hope that it can handle this situation gracefully (by providing
-        * appropriate default values).
-        */
-       if (!cur_bridge->funcs->atomic_get_input_bus_fmts) {
-               if (cur_bridge != first_bridge) {
-                       ret = select_bus_fmt_recursive(first_bridge,
-                                                      prev_bridge, crtc_state,
-                                                      conn_state,
-                                                      MEDIA_BUS_FMT_FIXED);
-                       if (ret)
-                               return ret;
-               }
-
-               cur_state->input_bus_cfg.format = MEDIA_BUS_FMT_FIXED;
-               cur_state->output_bus_cfg.format = out_bus_fmt;
-               return 0;
-       }
-
-       in_bus_fmts = cur_bridge->funcs->atomic_get_input_bus_fmts(cur_bridge,
-                                                       cur_state,
-                                                       crtc_state,
-                                                       conn_state,
-                                                       out_bus_fmt,
-                                                       &num_in_bus_fmts);
-       if (!num_in_bus_fmts)
-               return -ENOTSUPP;
-       else if (!in_bus_fmts)
-               return -ENOMEM;
-
-       if (first_bridge == cur_bridge) {
-               cur_state->input_bus_cfg.format = in_bus_fmts[0];
-               cur_state->output_bus_cfg.format = out_bus_fmt;
-               kfree(in_bus_fmts);
-               return 0;
-       }
-
-       for (i = 0; i < num_in_bus_fmts; i++) {
-               ret = select_bus_fmt_recursive(first_bridge, prev_bridge,
-                                              crtc_state, conn_state,
-                                              in_bus_fmts[i]);
-               if (ret != -ENOTSUPP)
-                       break;
-       }
-
-       if (!ret) {
-               cur_state->input_bus_cfg.format = in_bus_fmts[i];
-               cur_state->output_bus_cfg.format = out_bus_fmt;
-       }
-
-       kfree(in_bus_fmts);
-       return ret;
-}
-
-/*
- * This function is called by &drm_atomic_bridge_chain_check() just before
- * calling &drm_bridge_funcs.atomic_check() on all elements of the chain.
- * It performs bus format negotiation between bridge elements. The negotiation
- * happens in reverse order, starting from the last element in the chain up to
- * @bridge.
- *
- * Negotiation starts by retrieving supported output bus formats on the last
- * bridge element and testing them one by one. The test is recursive, meaning
- * that for each tested output format, the whole chain will be walked backward,
- * and each element will have to choose an input bus format that can be
- * transcoded to the requested output format. When a bridge element does not
- * support transcoding into a specific output format -ENOTSUPP is returned and
- * the next bridge element will have to try a different format. If none of the
- * combinations worked, -ENOTSUPP is returned and the atomic modeset will fail.
- *
- * This implementation is relying on
- * &drm_bridge_funcs.atomic_get_output_bus_fmts() and
- * &drm_bridge_funcs.atomic_get_input_bus_fmts() to gather supported
- * input/output formats.
- *
- * When &drm_bridge_funcs.atomic_get_output_bus_fmts() is not implemented by
- * the last element of the chain, &drm_atomic_bridge_chain_select_bus_fmts()
- * tries a single format: &drm_connector.display_info.bus_formats[0] if
- * available, MEDIA_BUS_FMT_FIXED otherwise.
- *
- * When &drm_bridge_funcs.atomic_get_input_bus_fmts() is not implemented,
- * &drm_atomic_bridge_chain_select_bus_fmts() skips the negotiation on the
- * bridge element that lacks this hook and asks the previous element in the
- * chain to try MEDIA_BUS_FMT_FIXED. It's up to bridge drivers to decide what
- * to do in that case (fail if they want to enforce bus format negotiation, or
- * provide a reasonable default if they need to support pipelines where not
- * all elements support bus format negotiation).
- */
-static int
-drm_atomic_bridge_chain_select_bus_fmts(struct drm_bridge *bridge,
-                                       struct drm_crtc_state *crtc_state,
-                                       struct drm_connector_state *conn_state)
-{
-       struct drm_connector *conn = conn_state->connector;
-       struct drm_encoder *encoder = bridge->encoder;
-       struct drm_bridge_state *last_bridge_state;
-       unsigned int i, num_out_bus_fmts;
-       struct drm_bridge *last_bridge;
-       u32 *out_bus_fmts;
-       int ret = 0;
-
-       last_bridge = list_last_entry(&encoder->bridge_chain,
-                                     struct drm_bridge, chain_node);
-       last_bridge_state = drm_atomic_get_new_bridge_state(crtc_state->state,
-                                                           last_bridge);
-       if (WARN_ON(!last_bridge_state))
-               return -EINVAL;
-
-       if (last_bridge->funcs->atomic_get_output_bus_fmts) {
-               const struct drm_bridge_funcs *funcs = last_bridge->funcs;
-
-               out_bus_fmts = funcs->atomic_get_output_bus_fmts(last_bridge,
-                                                       last_bridge_state,
-                                                       crtc_state,
-                                                       conn_state,
-                                                       &num_out_bus_fmts);
-               if (!num_out_bus_fmts)
-                       return -ENOTSUPP;
-               else if (!out_bus_fmts)
-                       return -ENOMEM;
-       } else {
-               num_out_bus_fmts = 1;
-               out_bus_fmts = kmalloc(sizeof(*out_bus_fmts), GFP_KERNEL);
-               if (!out_bus_fmts)
-                       return -ENOMEM;
-
-               if (conn->display_info.num_bus_formats &&
-                   conn->display_info.bus_formats)
-                       out_bus_fmts[0] = conn->display_info.bus_formats[0];
-               else
-                       out_bus_fmts[0] = MEDIA_BUS_FMT_FIXED;
-       }
-
-       for (i = 0; i < num_out_bus_fmts; i++) {
-               ret = select_bus_fmt_recursive(bridge, last_bridge, crtc_state,
-                                              conn_state, out_bus_fmts[i]);
-               if (ret != -ENOTSUPP)
-                       break;
-       }
-
-       kfree(out_bus_fmts);
-
-       return ret;
-}
-
-static void
-drm_atomic_bridge_propagate_bus_flags(struct drm_bridge *bridge,
-                                     struct drm_connector *conn,
-                                     struct drm_atomic_state *state)
-{
-       struct drm_bridge_state *bridge_state, *next_bridge_state;
-       struct drm_bridge *next_bridge;
-       u32 output_flags;
-
-       bridge_state = drm_atomic_get_new_bridge_state(state, bridge);
-       next_bridge = drm_bridge_get_next_bridge(bridge);
-
-       /*
-        * Let's try to apply the most common case here, that is, propagate
-        * display_info flags for the last bridge, and propagate the input
-        * flags of the next bridge element to the output end of the current
-        * bridge when the bridge is not the last one.
-        * There are exceptions to this rule, like when signal inversion is
-        * happening at the board level, but that's something drivers can deal
-        * with from their &drm_bridge_funcs.atomic_check() implementation by
-        * simply overriding the flags value we've set here.
-        */
-       if (!next_bridge) {
-               output_flags = conn->display_info.bus_flags;
-       } else {
-               next_bridge_state = drm_atomic_get_new_bridge_state(state,
-                                                               next_bridge);
-               output_flags = next_bridge_state->input_bus_cfg.flags;
-       }
-
-       bridge_state->output_bus_cfg.flags = output_flags;
-
-       /*
-        * Propagate the output flags to the input end of the bridge. Again, it's
-        * not necessarily what all bridges want, but that's what most of them
-        * do, and by doing that by default we avoid forcing drivers to
-        * duplicate the "dummy propagation" logic.
-        */
-       bridge_state->input_bus_cfg.flags = output_flags;
-}
-
-/**
- * drm_atomic_bridge_chain_check() - Do an atomic check on the bridge chain
- * @bridge: bridge control structure
- * @crtc_state: new CRTC state
- * @conn_state: new connector state
- *
- * First trigger a bus format negotiation before calling
- * &drm_bridge_funcs.atomic_check() (falls back on
- * &drm_bridge_funcs.mode_fixup()) op for all the bridges in the encoder chain,
- * starting from the last bridge to the first. These are called before calling
- * &drm_encoder_helper_funcs.atomic_check()
- *
- * RETURNS:
- * 0 on success, a negative error code on failure
- */
-int drm_atomic_bridge_chain_check(struct drm_bridge *bridge,
-                                 struct drm_crtc_state *crtc_state,
-                                 struct drm_connector_state *conn_state)
-{
-       struct drm_connector *conn = conn_state->connector;
-       struct drm_encoder *encoder = bridge->encoder;
-       struct drm_bridge *iter;
-       int ret;
-
-       ret = drm_atomic_bridge_chain_select_bus_fmts(bridge, crtc_state,
-                                                     conn_state);
-       if (ret)
-               return ret;
-
-       list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
-               int ret;
-
-               /*
-                * Bus flags are propagated by default. If a bridge needs to
-                * tweak the input bus flags for any reason, it should happen
-                * in its &drm_bridge_funcs.atomic_check() implementation such
-                * that preceding bridges in the chain can propagate the new
-                * bus flags.
-                */
-               drm_atomic_bridge_propagate_bus_flags(iter, conn,
-                                                     crtc_state->state);
-
-               ret = drm_atomic_bridge_check(iter, crtc_state, conn_state);
-               if (ret)
-                       return ret;
-
-               if (iter == bridge)
-                       break;
-       }
-
-       return 0;
-}
-EXPORT_SYMBOL(drm_atomic_bridge_chain_check);
-
-/**
- * __drm_atomic_helper_bridge_reset() - Initialize a bridge state to its
- *                                     default
- * @bridge: the bridge this state refers to
- * @state: bridge state to initialize
- *
- * Initialize the bridge state to default values. This is meant to be called
- * by the bridge &drm_bridge_funcs.atomic_reset hook for bridges that subclass the
- * bridge state.
- */
-void __drm_atomic_helper_bridge_reset(struct drm_bridge *bridge,
-                                     struct drm_bridge_state *state)
-{
-       memset(state, 0, sizeof(*state));
-       state->bridge = bridge;
-}
-EXPORT_SYMBOL(__drm_atomic_helper_bridge_reset);
-
-/**
- * __drm_atomic_helper_bridge_duplicate_state() - Copy atomic bridge state
- * @bridge: bridge object
- * @state: atomic bridge state
- *
- * Copies atomic state from a bridge's current state and resets inferred values.
- * This is useful for drivers that subclass the bridge state.
- */
-void __drm_atomic_helper_bridge_duplicate_state(struct drm_bridge *bridge,
-                                               struct drm_bridge_state *state)
-{
-       __drm_atomic_helper_private_obj_duplicate_state(&bridge->base,
-                                                       &state->base);
-       state->bridge = bridge;
-}
-EXPORT_SYMBOL(__drm_atomic_helper_bridge_duplicate_state);
-
 #ifdef CONFIG_OF
 /**
  * of_drm_find_bridge - find the bridge corresponding to the device node in
index ca3c55c..e22b812 100644 (file)
@@ -140,8 +140,8 @@ static ssize_t crc_control_write(struct file *file, const char __user *ubuf,
        if (IS_ERR(source))
                return PTR_ERR(source);
 
-       if (source[len] == '\n')
-               source[len] = '\0';
+       if (source[len - 1] == '\n')
+               source[len - 1] = '\0';
 
        ret = crtc->funcs->verify_crc_source(crtc, source, &values_cnt);
        if (ret)
@@ -258,6 +258,11 @@ static int crtc_crc_release(struct inode *inode, struct file *filep)
        struct drm_crtc *crtc = filep->f_inode->i_private;
        struct drm_crtc_crc *crc = &crtc->crc;
 
+       /* terminate the infinite while loop in 'drm_dp_aux_crc_work' if it is running */
+       spin_lock_irq(&crc->lock);
+       crc->opened = false;
+       spin_unlock_irq(&crc->lock);
+
        crtc->funcs->set_crc_source(crtc, NULL);
 
        spin_lock_irq(&crc->lock);
index 0cfb386..2510717 100644 (file)
@@ -163,11 +163,7 @@ static ssize_t auxdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
                        break;
                }
 
-               if (aux_dev->aux->is_remote)
-                       res = drm_dp_mst_dpcd_read(aux_dev->aux, pos, buf,
-                                                  todo);
-               else
-                       res = drm_dp_dpcd_read(aux_dev->aux, pos, buf, todo);
+               res = drm_dp_dpcd_read(aux_dev->aux, pos, buf, todo);
 
                if (res <= 0)
                        break;
@@ -215,11 +211,7 @@ static ssize_t auxdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
                        break;
                }
 
-               if (aux_dev->aux->is_remote)
-                       res = drm_dp_mst_dpcd_write(aux_dev->aux, pos, buf,
-                                                   todo);
-               else
-                       res = drm_dp_dpcd_write(aux_dev->aux, pos, buf, todo);
+               res = drm_dp_dpcd_write(aux_dev->aux, pos, buf, todo);
 
                if (res <= 0)
                        break;
index 2c7870a..a5364b5 100644 (file)
@@ -32,6 +32,7 @@
 #include <drm/drm_dp_helper.h>
 #include <drm/drm_print.h>
 #include <drm/drm_vblank.h>
+#include <drm/drm_dp_mst_helper.h>
 
 #include "drm_crtc_helper_internal.h"
 
@@ -266,7 +267,7 @@ unlock:
 
 /**
  * drm_dp_dpcd_read() - read a series of bytes from the DPCD
- * @aux: DisplayPort AUX channel
+ * @aux: DisplayPort AUX channel (SST or MST)
  * @offset: address of the (first) register to read
  * @buffer: buffer to store the register values
  * @size: number of bytes in @buffer
@@ -295,13 +296,18 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
         * We just have to do it before any DPCD access and hope that the
         * monitor doesn't power down exactly after the throw away read.
         */
-       ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_READ, DP_DPCD_REV, buffer,
-                                1);
-       if (ret != 1)
-               goto out;
+       if (!aux->is_remote) {
+               ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_READ, DP_DPCD_REV,
+                                        buffer, 1);
+               if (ret != 1)
+                       goto out;
+       }
 
-       ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_READ, offset, buffer,
-                                size);
+       if (aux->is_remote)
+               ret = drm_dp_mst_dpcd_read(aux, offset, buffer, size);
+       else
+               ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_READ, offset,
+                                        buffer, size);
 
 out:
        drm_dp_dump_access(aux, DP_AUX_NATIVE_READ, offset, buffer, ret);
@@ -311,7 +317,7 @@ EXPORT_SYMBOL(drm_dp_dpcd_read);
 
 /**
  * drm_dp_dpcd_write() - write a series of bytes to the DPCD
- * @aux: DisplayPort AUX channel
+ * @aux: DisplayPort AUX channel (SST or MST)
  * @offset: address of the (first) register to write
  * @buffer: buffer containing the values to write
  * @size: number of bytes in @buffer
@@ -328,8 +334,12 @@ ssize_t drm_dp_dpcd_write(struct drm_dp_aux *aux, unsigned int offset,
 {
        int ret;
 
-       ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_WRITE, offset, buffer,
-                                size);
+       if (aux->is_remote)
+               ret = drm_dp_mst_dpcd_write(aux, offset, buffer, size);
+       else
+               ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_WRITE, offset,
+                                        buffer, size);
+
        drm_dp_dump_access(aux, DP_AUX_NATIVE_WRITE, offset, buffer, ret);
        return ret;
 }
@@ -969,6 +979,19 @@ static void drm_dp_aux_crc_work(struct work_struct *work)
 }
 
 /**
+ * drm_dp_remote_aux_init() - minimally initialise a remote aux channel
+ * @aux: DisplayPort AUX channel
+ *
+ * Used for remote aux channels in general. Merely initializes the CRC
+ * work struct.
+ */
+void drm_dp_remote_aux_init(struct drm_dp_aux *aux)
+{
+       INIT_WORK(&aux->crc_work, drm_dp_aux_crc_work);
+}
+EXPORT_SYMBOL(drm_dp_remote_aux_init);
+
+/**
  * drm_dp_aux_init() - minimally initialise an aux channel
  * @aux: DisplayPort AUX channel
  *
@@ -1155,6 +1178,8 @@ static const struct dpcd_quirk dpcd_quirk_list[] = {
        { OUI(0x00, 0x10, 0xfa), DEVICE_ID_ANY, false, BIT(DP_DPCD_QUIRK_NO_PSR) },
        /* CH7511 seems to leave SINK_COUNT zeroed */
        { OUI(0x00, 0x00, 0x00), DEVICE_ID('C', 'H', '7', '5', '1', '1'), false, BIT(DP_DPCD_QUIRK_NO_SINK_COUNT) },
+       /* Synaptics DP1.4 MST hubs can support DSC without virtual DPCD */
+       { OUI(0x90, 0xCC, 0x24), DEVICE_ID_ANY, true, BIT(DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) },
 };
 
 #undef OUI
index e68d230..5d3c1d3 100644 (file)
@@ -853,6 +853,7 @@ static bool drm_dp_sideband_parse_enum_path_resources_ack(struct drm_dp_sideband
 {
        int idx = 1;
        repmsg->u.path_resources.port_number = (raw->msg[idx] >> 4) & 0xf;
+       repmsg->u.path_resources.fec_capable = raw->msg[idx] & 0x1;
        idx++;
        if (idx > raw->curlen)
                goto fail_len;
@@ -2174,6 +2175,7 @@ drm_dp_mst_topology_unlink_port(struct drm_dp_mst_topology_mgr *mgr,
                                struct drm_dp_mst_port *port)
 {
        mutex_lock(&mgr->lock);
+       port->parent->num_ports--;
        list_del(&port->next);
        mutex_unlock(&mgr->lock);
        drm_dp_mst_topology_put_port(port);
@@ -2198,6 +2200,9 @@ drm_dp_mst_add_port(struct drm_device *dev,
        port->aux.dev = dev->dev;
        port->aux.is_remote = true;
 
+       /* initialize the MST downstream port's AUX crc work queue */
+       drm_dp_remote_aux_init(&port->aux);
+
        /*
         * Make sure the memory allocation for our parent branch stays
         * around until our own memory allocation is released
@@ -2273,6 +2278,7 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
                mutex_lock(&mgr->lock);
                drm_dp_mst_topology_get_port(port);
                list_add(&port->next, &mstb->ports);
+               mstb->num_ports++;
                mutex_unlock(&mgr->lock);
        }
 
@@ -2951,6 +2957,7 @@ drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
                                      path_res->avail_payload_bw_number);
                        port->available_pbn =
                                path_res->avail_payload_bw_number;
+                       port->fec_capable = path_res->fec_capable;
                }
        }
 
@@ -4089,6 +4096,7 @@ static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr,
  * @mgr: MST topology manager for the port
  * @port: port to find vcpi slots for
  * @pbn: bandwidth required for the mode in PBN
+ * @pbn_div: divider for DSC mode that takes FEC into account
  *
  * Allocates VCPI slots to @port, replacing any previous VCPI allocations it
  * may have had. Any atomic drivers which support MST must call this function
@@ -4115,11 +4123,12 @@ static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr,
  */
 int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
                                  struct drm_dp_mst_topology_mgr *mgr,
-                                 struct drm_dp_mst_port *port, int pbn)
+                                 struct drm_dp_mst_port *port, int pbn,
+                                 int pbn_div)
 {
        struct drm_dp_mst_topology_state *topology_state;
        struct drm_dp_vcpi_allocation *pos, *vcpi = NULL;
-       int prev_slots, req_slots;
+       int prev_slots, prev_bw, req_slots;
 
        topology_state = drm_atomic_get_mst_topology_state(state, mgr);
        if (IS_ERR(topology_state))
@@ -4130,6 +4139,7 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
                if (pos->port == port) {
                        vcpi = pos;
                        prev_slots = vcpi->vcpi;
+                       prev_bw = vcpi->pbn;
 
                        /*
                         * This should never happen, unless the driver tries
@@ -4145,14 +4155,22 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
                        break;
                }
        }
-       if (!vcpi)
+       if (!vcpi) {
                prev_slots = 0;
+               prev_bw = 0;
+       }
 
-       req_slots = DIV_ROUND_UP(pbn, mgr->pbn_div);
+       if (pbn_div <= 0)
+               pbn_div = mgr->pbn_div;
+
+       req_slots = DIV_ROUND_UP(pbn, pbn_div);
 
        DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] [MST PORT:%p] VCPI %d -> %d\n",
                         port->connector->base.id, port->connector->name,
                         port, prev_slots, req_slots);
+       DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] [MST PORT:%p] PBN %d -> %d\n",
+                        port->connector->base.id, port->connector->name,
+                        port, prev_bw, pbn);
 
        /* Add the new allocation to the state */
        if (!vcpi) {
@@ -4165,6 +4183,7 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
                list_add(&vcpi->next, &topology_state->vcpis);
        }
        vcpi->vcpi = req_slots;
+       vcpi->pbn = pbn;
 
        return req_slots;
 }
@@ -4415,10 +4434,11 @@ EXPORT_SYMBOL(drm_dp_check_act_status);
  * drm_dp_calc_pbn_mode() - Calculate the PBN for a mode.
  * @clock: dot clock for the mode
  * @bpp: bpp for the mode.
+ * @dsc: DSC mode. If true, bpp has units of 1/16 of a bit per pixel
  *
  * This uses the formula in the spec to calculate the PBN value for a mode.
  */
-int drm_dp_calc_pbn_mode(int clock, int bpp)
+int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc)
 {
        /*
         * margin 5300ppm + 300ppm ~ 0.6% as per spec, factor is 1.006
@@ -4429,7 +4449,16 @@ int drm_dp_calc_pbn_mode(int clock, int bpp)
         * peak_kbps *= (1006/1000)
         * peak_kbps *= (64/54)
         * peak_kbps *= 8    convert to bytes
+        *
+        * If the bpp is in units of 1/16, further divide by 16. Put this
+        * factor in the numerator rather than the denominator to avoid
+        * integer overflow
         */
+
+       if (dsc)
+               return DIV_ROUND_UP_ULL(mul_u32_u32(clock * (bpp / 16), 64 * 1006),
+                                       8 * 54 * 1000 * 1000);
+
        return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006),
                                8 * 54 * 1000 * 1000);
 }
@@ -4731,9 +4760,61 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj,
        kfree(mst_state);
 }
 
+static bool drm_dp_mst_port_downstream_of_branch(struct drm_dp_mst_port *port,
+                                                struct drm_dp_mst_branch *branch)
+{
+       while (port->parent) {
+               if (port->parent == branch)
+                       return true;
+
+               if (port->parent->port_parent)
+                       port = port->parent->port_parent;
+               else
+                       break;
+       }
+       return false;
+}
+
+static inline
+int drm_dp_mst_atomic_check_bw_limit(struct drm_dp_mst_branch *branch,
+                                    struct drm_dp_mst_topology_state *mst_state)
+{
+       struct drm_dp_mst_port *port;
+       struct drm_dp_vcpi_allocation *vcpi;
+       int pbn_limit = 0, pbn_used = 0;
+
+       list_for_each_entry(port, &branch->ports, next) {
+               if (port->mstb)
+                       if (drm_dp_mst_atomic_check_bw_limit(port->mstb, mst_state))
+                               return -ENOSPC;
+
+               if (port->available_pbn > 0)
+                       pbn_limit = port->available_pbn;
+       }
+       DRM_DEBUG_ATOMIC("[MST BRANCH:%p] branch has %d PBN available\n",
+                        branch, pbn_limit);
+
+       list_for_each_entry(vcpi, &mst_state->vcpis, next) {
+               if (!vcpi->pbn)
+                       continue;
+
+               if (drm_dp_mst_port_downstream_of_branch(vcpi->port, branch))
+                       pbn_used += vcpi->pbn;
+       }
+       DRM_DEBUG_ATOMIC("[MST BRANCH:%p] branch used %d PBN\n",
+                        branch, pbn_used);
+
+       if (pbn_used > pbn_limit) {
+               DRM_DEBUG_ATOMIC("[MST BRANCH:%p] No available bandwidth\n",
+                                branch);
+               return -ENOSPC;
+       }
+       return 0;
+}
+
 static inline int
-drm_dp_mst_atomic_check_topology_state(struct drm_dp_mst_topology_mgr *mgr,
-                                      struct drm_dp_mst_topology_state *mst_state)
+drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr,
+                                        struct drm_dp_mst_topology_state *mst_state)
 {
        struct drm_dp_vcpi_allocation *vcpi;
        int avail_slots = 63, payload_count = 0;
@@ -4771,6 +4852,128 @@ drm_dp_mst_atomic_check_topology_state(struct drm_dp_mst_topology_mgr *mgr,
 }
 
 /**
+ * drm_dp_mst_add_affected_dsc_crtcs() - Add affected DSC CRTCs to the atomic state
+ * @state: Pointer to the new struct drm_dp_mst_topology_state
+ * @mgr: MST topology manager
+ *
+ * Whenever there is a change in the MST topology, the DSC configuration
+ * has to be recalculated, so we need to trigger a modeset on all
+ * affected CRTCs in that topology.
+ *
+ * See also:
+ * drm_dp_mst_atomic_enable_dsc()
+ */
+int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr)
+{
+       struct drm_dp_mst_topology_state *mst_state;
+       struct drm_dp_vcpi_allocation *pos;
+       struct drm_connector *connector;
+       struct drm_connector_state *conn_state;
+       struct drm_crtc *crtc;
+       struct drm_crtc_state *crtc_state;
+
+       mst_state = drm_atomic_get_mst_topology_state(state, mgr);
+
+       if (IS_ERR(mst_state))
+               return -EINVAL;
+
+       list_for_each_entry(pos, &mst_state->vcpis, next) {
+
+               connector = pos->port->connector;
+
+               if (!connector)
+                       return -EINVAL;
+
+               conn_state = drm_atomic_get_connector_state(state, connector);
+
+               if (IS_ERR(conn_state))
+                       return PTR_ERR(conn_state);
+
+               crtc = conn_state->crtc;
+
+               if (WARN_ON(!crtc))
+                       return -EINVAL;
+
+               if (!drm_dp_mst_dsc_aux_for_port(pos->port))
+                       continue;
+
+               crtc_state = drm_atomic_get_crtc_state(mst_state->base.state, crtc);
+
+               if (IS_ERR(crtc_state))
+                       return PTR_ERR(crtc_state);
+
+               DRM_DEBUG_ATOMIC("[MST MGR:%p] Setting mode_changed flag on CRTC %p\n",
+                                mgr, crtc);
+
+               crtc_state->mode_changed = true;
+       }
+       return 0;
+}
+EXPORT_SYMBOL(drm_dp_mst_add_affected_dsc_crtcs);
+
+/**
+ * drm_dp_mst_atomic_enable_dsc - Set DSC Enable Flag to On/Off
+ * @state: Pointer to the new drm_atomic_state
+ * @port: Pointer to the affected MST Port
+ * @pbn: Newly recalculated bw required for link with DSC enabled
+ * @pbn_div: Divider to calculate correct number of pbn per slot
+ * @enable: Boolean flag to enable or disable DSC on the port
+ *
+ * This function enables or disables DSC on the given port by
+ * recalculating its VCPI from the PBN provided, and sets the
+ * dsc_enabled flag to keep track of which ports have DSC enabled.
+ *
+ */
+int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state,
+                                struct drm_dp_mst_port *port,
+                                int pbn, int pbn_div,
+                                bool enable)
+{
+       struct drm_dp_mst_topology_state *mst_state;
+       struct drm_dp_vcpi_allocation *pos;
+       bool found = false;
+       int vcpi = 0;
+
+       mst_state = drm_atomic_get_mst_topology_state(state, port->mgr);
+
+       if (IS_ERR(mst_state))
+               return PTR_ERR(mst_state);
+
+       list_for_each_entry(pos, &mst_state->vcpis, next) {
+               if (pos->port == port) {
+                       found = true;
+                       break;
+               }
+       }
+
+       if (!found) {
+               DRM_DEBUG_ATOMIC("[MST PORT:%p] Couldn't find VCPI allocation in mst state %p\n",
+                                port, mst_state);
+               return -EINVAL;
+       }
+
+       if (pos->dsc_enabled == enable) {
+               DRM_DEBUG_ATOMIC("[MST PORT:%p] DSC flag is already set to %d, returning %d VCPI slots\n",
+                                port, enable, pos->vcpi);
+               vcpi = pos->vcpi;
+       }
+
+       if (enable) {
+               vcpi = drm_dp_atomic_find_vcpi_slots(state, port->mgr, port, pbn, pbn_div);
+               DRM_DEBUG_ATOMIC("[MST PORT:%p] Enabling DSC flag, reallocating %d VCPI slots on the port\n",
+                                port, vcpi);
+               if (vcpi < 0)
+                       return -EINVAL;
+       }
+
+       pos->dsc_enabled = enable;
+
+       return vcpi;
+}
+EXPORT_SYMBOL(drm_dp_mst_atomic_enable_dsc);
+
+/**
  * drm_dp_mst_atomic_check - Check that the new state of an MST topology in an
  * atomic update is valid
  * @state: Pointer to the new &struct drm_dp_mst_topology_state
@@ -4798,7 +5001,10 @@ int drm_dp_mst_atomic_check(struct drm_atomic_state *state)
        int i, ret = 0;
 
        for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
-               ret = drm_dp_mst_atomic_check_topology_state(mgr, mst_state);
+               ret = drm_dp_mst_atomic_check_vcpi_alloc_limit(mgr, mst_state);
+               if (ret)
+                       break;
+               ret = drm_dp_mst_atomic_check_bw_limit(mgr->mst_primary, mst_state);
                if (ret)
                        break;
        }
@@ -5062,3 +5268,173 @@ static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux)
 {
        i2c_del_adapter(&aux->ddc);
 }
+
+/**
+ * drm_dp_mst_is_virtual_dpcd() - Is the given port a virtual DP Peer Device
+ * @port: The port to check
+ *
+ * A single physical MST hub object can be represented in the topology
+ * by multiple branches, with virtual ports between those branches.
+ *
+ * As of DP 1.4, an MST hub with internal (virtual) ports must expose
+ * certain DPCD registers over those ports. See sections 2.6.1.1.1
+ * and 2.6.1.1.2 of the DisplayPort specification v1.4 for details.
+ *
+ * May acquire mgr->lock
+ *
+ * Returns:
+ * true if the port is a virtual DP peer device, false otherwise
+ */
+static bool drm_dp_mst_is_virtual_dpcd(struct drm_dp_mst_port *port)
+{
+       struct drm_dp_mst_port *downstream_port;
+
+       if (!port || port->dpcd_rev < DP_DPCD_REV_14)
+               return false;
+
+       /* Virtual DP Sink (Internal Display Panel) */
+       if (port->port_num >= 8)
+               return true;
+
+       /* DP-to-HDMI Protocol Converter */
+       if (port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV &&
+           !port->mcs &&
+           port->ldps)
+               return true;
+
+       /* DP-to-DP */
+       mutex_lock(&port->mgr->lock);
+       if (port->pdt == DP_PEER_DEVICE_MST_BRANCHING &&
+           port->mstb &&
+           port->mstb->num_ports == 2) {
+               list_for_each_entry(downstream_port, &port->mstb->ports, next) {
+                       if (downstream_port->pdt == DP_PEER_DEVICE_SST_SINK &&
+                           !downstream_port->input) {
+                               mutex_unlock(&port->mgr->lock);
+                               return true;
+                       }
+               }
+       }
+       mutex_unlock(&port->mgr->lock);
+
+       return false;
+}
+
+/**
+ * drm_dp_mst_dsc_aux_for_port() - Find the correct aux for DSC
+ * @port: The port to check. A leaf of the MST tree with an attached display.
+ *
+ * Depending on the situation, DSC may be enabled via the endpoint aux,
+ * the immediately upstream aux, or the connector's physical aux.
+ *
+ * This is both the correct aux to read DSC_CAPABILITY and the
+ * correct aux to write DSC_ENABLED.
+ *
+ * This operation can be expensive (up to four aux reads), so
+ * the caller should cache the return.
+ *
+ * Returns:
+ * NULL if DSC cannot be enabled on this port, otherwise the aux device
+ */
+struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port)
+{
+       struct drm_dp_mst_port *immediate_upstream_port;
+       struct drm_dp_mst_port *fec_port;
+       struct drm_dp_desc desc = { 0 };
+       u8 endpoint_fec;
+       u8 endpoint_dsc;
+
+       if (!port)
+               return NULL;
+
+       if (port->parent->port_parent)
+               immediate_upstream_port = port->parent->port_parent;
+       else
+               immediate_upstream_port = NULL;
+
+       fec_port = immediate_upstream_port;
+       while (fec_port) {
+               /*
+                * Each physical link (i.e. not a virtual port) between the
+                * output and the primary device must support FEC
+                */
+               if (!drm_dp_mst_is_virtual_dpcd(fec_port) &&
+                   !fec_port->fec_capable)
+                       return NULL;
+
+               fec_port = fec_port->parent->port_parent;
+       }
+
+       /* DP-to-DP peer device */
+       if (drm_dp_mst_is_virtual_dpcd(immediate_upstream_port)) {
+               u8 upstream_dsc;
+
+               if (drm_dp_dpcd_read(&port->aux,
+                                    DP_DSC_SUPPORT, &endpoint_dsc, 1) != 1)
+                       return NULL;
+               if (drm_dp_dpcd_read(&port->aux,
+                                    DP_FEC_CAPABILITY, &endpoint_fec, 1) != 1)
+                       return NULL;
+               if (drm_dp_dpcd_read(&immediate_upstream_port->aux,
+                                    DP_DSC_SUPPORT, &upstream_dsc, 1) != 1)
+                       return NULL;
+
+               /* Endpoint decompression with DP-to-DP peer device */
+               if ((endpoint_dsc & DP_DSC_DECOMPRESSION_IS_SUPPORTED) &&
+                   (endpoint_fec & DP_FEC_CAPABLE) &&
+                   (upstream_dsc & 0x2) /* DSC passthrough */)
+                       return &port->aux;
+
+               /* Virtual DPCD decompression with DP-to-DP peer device */
+               return &immediate_upstream_port->aux;
+       }
+
+       /* Virtual DPCD decompression with DP-to-HDMI or Virtual DP Sink */
+       if (drm_dp_mst_is_virtual_dpcd(port))
+               return &port->aux;
+
+       /*
+        * Synaptics quirk
+        * Applies to ports for which:
+        * - Physical aux has Synaptics OUI
+        * - DPv1.4 or higher
+        * - Port is on primary branch device
+        * - Not a VGA adapter (DP_DWN_STRM_PORT_TYPE_ANALOG)
+        */
+       if (drm_dp_read_desc(port->mgr->aux, &desc, true))
+               return NULL;
+
+       if (drm_dp_has_quirk(&desc, DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) &&
+           port->mgr->dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14 &&
+           port->parent == port->mgr->mst_primary) {
+               u8 downstreamport;
+
+               if (drm_dp_dpcd_read(&port->aux, DP_DOWNSTREAMPORT_PRESENT,
+                                    &downstreamport, 1) < 0)
+                       return NULL;
+
+               if ((downstreamport & DP_DWN_STRM_PORT_PRESENT) &&
+                  ((downstreamport & DP_DWN_STRM_PORT_TYPE_MASK)
+                    != DP_DWN_STRM_PORT_TYPE_ANALOG))
+                       return port->mgr->aux;
+       }
+
+       /*
+        * The check below verifies that the MST sink
+        * connected to the GPU is capable of DSC;
+        * the endpoint must therefore be both DSC
+        * and FEC capable.
+        */
+       if (drm_dp_dpcd_read(&port->aux,
+          DP_DSC_SUPPORT, &endpoint_dsc, 1) != 1)
+               return NULL;
+       if (drm_dp_dpcd_read(&port->aux,
+          DP_FEC_CAPABILITY, &endpoint_fec, 1) != 1)
+               return NULL;
+       if ((endpoint_dsc & DP_DSC_DECOMPRESSION_IS_SUPPORTED) &&
+          (endpoint_fec & DP_FEC_CAPABLE))
+               return &port->aux;
+
+       return NULL;
+}
+EXPORT_SYMBOL(drm_dp_mst_dsc_aux_for_port);
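The aux-selection priority above is hard to follow from the diff alone. The sketch below is a hypothetical, userspace-only model of the decision order (DP-to-DP peer device, virtual DPCD, Synaptics quirk, plain endpoint); all struct and function names are illustrative, and the real function additionally performs the DPCD reads and the FEC walk up the tree.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only: mirrors the priority order of
 * drm_dp_mst_dsc_aux_for_port(), not its aux I/O or locking. */
enum dsc_aux { AUX_NONE, AUX_ENDPOINT, AUX_UPSTREAM, AUX_PHYSICAL };

struct model_port {
	bool has_virtual_dpcd_upstream; /* DP-to-DP peer device upstream */
	bool is_virtual_dpcd;           /* port itself exposes a virtual DPCD */
	bool endpoint_dsc;              /* DP_DSC_DECOMPRESSION_IS_SUPPORTED */
	bool endpoint_fec;              /* DP_FEC_CAPABLE */
	bool upstream_passthrough;      /* upstream advertises DSC passthrough */
	bool synaptics_quirk;           /* quirked branch on the primary device */
};

static enum dsc_aux pick_dsc_aux(const struct model_port *p)
{
	if (p->has_virtual_dpcd_upstream) {
		/* Endpoint decompression needs DSC + FEC + passthrough */
		if (p->endpoint_dsc && p->endpoint_fec && p->upstream_passthrough)
			return AUX_ENDPOINT;
		return AUX_UPSTREAM;    /* virtual DPCD decompression */
	}
	if (p->is_virtual_dpcd)
		return AUX_ENDPOINT;
	if (p->synaptics_quirk)
		return AUX_PHYSICAL;    /* the manager's physical aux */
	if (p->endpoint_dsc && p->endpoint_fec)
		return AUX_ENDPOINT;
	return AUX_NONE;                /* DSC cannot be enabled */
}
```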
index c0b0f60..9801c03 100644 (file)
@@ -9,6 +9,7 @@
  *  Copyright (C) 2012 Red Hat
  */
 
+#include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fourcc.h>
 #include <drm/drm_framebuffer.h>
 #include <drm/drm_gem_cma_helper.h>
index 2e8ce99..2c79e81 100644 (file)
@@ -360,7 +360,8 @@ void drm_legacy_lock_master_cleanup(struct drm_device *dev, struct drm_master *m
        /*
         * Since the master is disappearing, so is the
         * possibility to lock.
-        */     mutex_lock(&dev->struct_mutex);
+        */
+       mutex_lock(&dev->struct_mutex);
        if (master->lock.hw_lock) {
                if (dev->sigdata.lock == master->lock.hw_lock)
                        dev->sigdata.lock = NULL;
index 2a4eb61..10336b1 100644 (file)
@@ -233,7 +233,7 @@ struct drm_display_mode *drm_cvt_mode(struct drm_device *dev, int hdisplay,
                /* 3) Nominal HSync width (% of line period) - default 8 */
 #define CVT_HSYNC_PERCENTAGE   8
                unsigned int hblank_percentage;
-               int vsyncandback_porch, vback_porch, hblank;
+               int vsyncandback_porch, __maybe_unused vback_porch, hblank;
 
                /* estimated the horizontal period */
                tmp1 = HV_FACTOR * 1000000  -
@@ -386,9 +386,10 @@ drm_gtf_mode_complex(struct drm_device *dev, int hdisplay, int vdisplay,
        int top_margin, bottom_margin;
        int interlace;
        unsigned int hfreq_est;
-       int vsync_plus_bp, vback_porch;
-       unsigned int vtotal_lines, vfieldrate_est, hperiod;
-       unsigned int vfield_rate, vframe_rate;
+       int vsync_plus_bp, __maybe_unused vback_porch;
+       unsigned int vtotal_lines, __maybe_unused vfieldrate_est;
+       unsigned int __maybe_unused hperiod;
+       unsigned int vfield_rate, __maybe_unused vframe_rate;
        int left_margin, right_margin;
        unsigned int total_active_pixels, ideal_duty_cycle;
        unsigned int hblank, total_pixels, pixel_freq;
index 1f9c01b..76ecdf8 100644 (file)
@@ -65,12 +65,13 @@ static int etnaviv_open(struct drm_device *dev, struct drm_file *file)
 
        for (i = 0; i < ETNA_MAX_PIPES; i++) {
                struct etnaviv_gpu *gpu = priv->gpu[i];
-               struct drm_sched_rq *rq;
+               struct drm_gpu_scheduler *sched;
 
                if (gpu) {
-                       rq = &gpu->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+                       sched = &gpu->sched;
                        drm_sched_entity_init(&ctx->sched_entity[i],
-                                             &rq, 1, NULL);
+                                             DRM_SCHED_PRIORITY_NORMAL, &sched,
+                                             1, NULL);
                        }
        }
 
index 3955f84..33628d8 100644 (file)
@@ -1378,6 +1378,7 @@ static void exynos_dsi_unregister_te_irq(struct exynos_dsi *dsi)
 static void exynos_dsi_enable(struct drm_encoder *encoder)
 {
        struct exynos_dsi *dsi = encoder_to_dsi(encoder);
+       struct drm_bridge *iter;
        int ret;
 
        if (dsi->state & DSIM_STATE_ENABLED)
@@ -1391,7 +1392,11 @@ static void exynos_dsi_enable(struct drm_encoder *encoder)
                if (ret < 0)
                        goto err_put_sync;
        } else {
-               drm_bridge_chain_pre_enable(dsi->out_bridge);
+               list_for_each_entry_reverse(iter, &dsi->bridge_chain,
+                                           chain_node) {
+                       if (iter->funcs->pre_enable)
+                               iter->funcs->pre_enable(iter);
+               }
        }
 
        exynos_dsi_set_display_mode(dsi);
@@ -1402,7 +1407,10 @@ static void exynos_dsi_enable(struct drm_encoder *encoder)
                if (ret < 0)
                        goto err_display_disable;
        } else {
-               drm_bridge_chain_enable(dsi->out_bridge);
+               list_for_each_entry(iter, &dsi->bridge_chain, chain_node) {
+                       if (iter->funcs->enable)
+                               iter->funcs->enable(iter);
+               }
        }
 
        dsi->state |= DSIM_STATE_VIDOUT_AVAILABLE;
@@ -1420,6 +1428,7 @@ err_put_sync:
 static void exynos_dsi_disable(struct drm_encoder *encoder)
 {
        struct exynos_dsi *dsi = encoder_to_dsi(encoder);
+       struct drm_bridge *iter;
 
        if (!(dsi->state & DSIM_STATE_ENABLED))
                return;
@@ -1427,10 +1436,20 @@ static void exynos_dsi_disable(struct drm_encoder *encoder)
        dsi->state &= ~DSIM_STATE_VIDOUT_AVAILABLE;
 
        drm_panel_disable(dsi->panel);
-       drm_bridge_chain_disable(dsi->out_bridge);
+
+       list_for_each_entry_reverse(iter, &dsi->bridge_chain, chain_node) {
+               if (iter->funcs->disable)
+                       iter->funcs->disable(iter);
+       }
+
        exynos_dsi_set_display_enable(dsi, false);
        drm_panel_unprepare(dsi->panel);
-       drm_bridge_chain_post_disable(dsi->out_bridge);
+
+       list_for_each_entry(iter, &dsi->bridge_chain, chain_node) {
+               if (iter->funcs->post_disable)
+                       iter->funcs->post_disable(iter);
+       }
+
        dsi->state &= ~DSIM_STATE_ENABLED;
        pm_runtime_put_sync(dsi->dev);
 }
@@ -1523,7 +1542,7 @@ static int exynos_dsi_host_attach(struct mipi_dsi_host *host,
        if (out_bridge) {
                drm_bridge_attach(encoder, out_bridge, NULL);
                dsi->out_bridge = out_bridge;
-               list_splice(&encoder->bridge_chain, &dsi->bridge_chain);
+               list_splice_init(&encoder->bridge_chain, &dsi->bridge_chain);
        } else {
                int ret = exynos_dsi_create_connector(encoder);
 
index 40a37e4..91f9001 100644 (file)
@@ -470,12 +470,11 @@ void psb_irq_turn_off_dpst(struct drm_device *dev)
 {
        struct drm_psb_private *dev_priv =
            (struct drm_psb_private *) dev->dev_private;
-       u32 hist_reg;
        u32 pwm_reg;
 
        if (gma_power_begin(dev, false)) {
                PSB_WVDC32(0x00000000, HISTOGRAM_INT_CONTROL);
-               hist_reg = PSB_RVDC32(HISTOGRAM_INT_CONTROL);
+               PSB_RVDC32(HISTOGRAM_INT_CONTROL);
 
                psb_disable_pipestat(dev_priv, 0, PIPE_DPST_EVENT_ENABLE);
 
index d98988d..cba68c5 100644 (file)
@@ -61,10 +61,11 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
                crtc_state->pipe_bpp = bpp;
 
                crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock,
-                                                      crtc_state->pipe_bpp);
+                                                      crtc_state->pipe_bpp,
+                                                      false);
 
                slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
-                                                     port, crtc_state->pbn);
+                                                     port, crtc_state->pbn, 0);
                if (slots == -EDEADLK)
                        return slots;
                if (slots >= 0)
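The new third argument to `drm_dp_calc_pbn_mode()` selects DSC accounting, in which the bpp value is passed in units of 1/16 bpp. A simplified userspace sketch of the calculation follows: the 64/54 MTP accounting with a 1.006 spread-spectrum margin and round-up division. This is an illustration consistent with the kernel's documented behaviour, not the kernel source itself.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the PBN calculation as extended by the DSC
 * series: when `dsc` is nonzero, bpp is given in units of 1/16 bpp.
 * PBN counts 54/64 MBps granules with a 1.006 margin for SSC. */
static uint32_t calc_pbn_mode(uint32_t clock_khz, uint32_t bpp, int dsc)
{
	uint64_t num = (uint64_t)clock_khz * (dsc ? bpp / 16 : bpp) * 64 * 1006;
	uint64_t den = 8ULL * 54 * 1000 * 1000;

	return (uint32_t)((num + den - 1) / den);   /* DIV_ROUND_UP */
}
```

With a 148.5 MHz pixel clock at 24 bpp this yields 532 PBN; passing 24 bpp as 384 sixteenths through the DSC path gives the same result.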
index f522c5f..b561dd0 100644 (file)
@@ -159,9 +159,10 @@ int lima_sched_context_init(struct lima_sched_pipe *pipe,
                            struct lima_sched_context *context,
                            atomic_t *guilty)
 {
-       struct drm_sched_rq *rq = pipe->base.sched_rq + DRM_SCHED_PRIORITY_NORMAL;
+       struct drm_gpu_scheduler *sched = &pipe->base;
 
-       return drm_sched_entity_init(&context->base, &rq, 1, guilty);
+       return drm_sched_entity_init(&context->base, DRM_SCHED_PRIORITY_NORMAL,
+                                    &sched, 1, guilty);
 }
 
 void lima_sched_context_fini(struct lima_sched_pipe *pipe,
@@ -255,13 +256,17 @@ static struct dma_fence *lima_sched_run_job(struct drm_sched_job *job)
        return task->fence;
 }
 
-static void lima_sched_handle_error_task(struct lima_sched_pipe *pipe,
-                                        struct lima_sched_task *task)
+static void lima_sched_timedout_job(struct drm_sched_job *job)
 {
+       struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
+       struct lima_sched_task *task = to_lima_task(job);
+
+       if (!pipe->error)
+               DRM_ERROR("lima job timeout\n");
+
        drm_sched_stop(&pipe->base, &task->base);
 
-       if (task)
-               drm_sched_increase_karma(&task->base);
+       drm_sched_increase_karma(&task->base);
 
        pipe->task_error(pipe);
 
@@ -284,16 +289,6 @@ static void lima_sched_handle_error_task(struct lima_sched_pipe *pipe,
        drm_sched_start(&pipe->base, true);
 }
 
-static void lima_sched_timedout_job(struct drm_sched_job *job)
-{
-       struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
-       struct lima_sched_task *task = to_lima_task(job);
-
-       DRM_ERROR("lima job timeout\n");
-
-       lima_sched_handle_error_task(pipe, task);
-}
-
 static void lima_sched_free_job(struct drm_sched_job *job)
 {
        struct lima_sched_task *task = to_lima_task(job);
@@ -318,15 +313,6 @@ static const struct drm_sched_backend_ops lima_sched_ops = {
        .free_job = lima_sched_free_job,
 };
 
-static void lima_sched_error_work(struct work_struct *work)
-{
-       struct lima_sched_pipe *pipe =
-               container_of(work, struct lima_sched_pipe, error_work);
-       struct lima_sched_task *task = pipe->current_task;
-
-       lima_sched_handle_error_task(pipe, task);
-}
-
 int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
 {
        unsigned int timeout = lima_sched_timeout_ms > 0 ?
@@ -335,8 +321,6 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
        pipe->fence_context = dma_fence_context_alloc(1);
        spin_lock_init(&pipe->fence_lock);
 
-       INIT_WORK(&pipe->error_work, lima_sched_error_work);
-
        return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0,
                              msecs_to_jiffies(timeout), name);
 }
@@ -349,7 +333,7 @@ void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
 void lima_sched_pipe_task_done(struct lima_sched_pipe *pipe)
 {
        if (pipe->error)
-               schedule_work(&pipe->error_work);
+               drm_sched_fault(&pipe->base);
        else {
                struct lima_sched_task *task = pipe->current_task;
 
index 928af91..1d814fe 100644 (file)
@@ -68,8 +68,6 @@ struct lima_sched_pipe {
        void (*task_fini)(struct lima_sched_pipe *pipe);
        void (*task_error)(struct lima_sched_pipe *pipe);
        void (*task_mmu_error)(struct lima_sched_pipe *pipe);
-
-       struct work_struct error_work;
 };
 
 int lima_sched_task_init(struct lima_sched_task *task,
index 5044dfb..b7a82ed 100644 (file)
@@ -20,7 +20,7 @@ obj-$(CONFIG_DRM_MEDIATEK) += mediatek-drm.o
 mediatek-drm-hdmi-objs := mtk_cec.o \
                          mtk_hdmi.o \
                          mtk_hdmi_ddc.o \
-                          mtk_mt2701_hdmi_phy.o \
+                         mtk_mt2701_hdmi_phy.o \
                          mtk_mt8173_hdmi_phy.o \
                          mtk_hdmi_phy.o
 
index 59de2a4..6fb0d69 100644 (file)
@@ -9,6 +9,7 @@
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/platform_device.h>
+#include <linux/soc/mediatek/mtk-cmdq.h>
 
 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -45,12 +46,12 @@ static inline struct mtk_disp_color *comp_to_color(struct mtk_ddp_comp *comp)
 
 static void mtk_color_config(struct mtk_ddp_comp *comp, unsigned int w,
                             unsigned int h, unsigned int vrefresh,
-                            unsigned int bpc)
+                            unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
        struct mtk_disp_color *color = comp_to_color(comp);
 
-       writel(w, comp->regs + DISP_COLOR_WIDTH(color));
-       writel(h, comp->regs + DISP_COLOR_HEIGHT(color));
+       mtk_ddp_write(cmdq_pkt, w, comp, DISP_COLOR_WIDTH(color));
+       mtk_ddp_write(cmdq_pkt, h, comp, DISP_COLOR_HEIGHT(color));
 }
 
 static void mtk_color_start(struct mtk_ddp_comp *comp)
index 4a55bb6..891d80c 100644 (file)
@@ -11,6 +11,7 @@
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/platform_device.h>
+#include <linux/soc/mediatek/mtk-cmdq.h>
 
 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -124,14 +125,15 @@ static void mtk_ovl_stop(struct mtk_ddp_comp *comp)
 
 static void mtk_ovl_config(struct mtk_ddp_comp *comp, unsigned int w,
                           unsigned int h, unsigned int vrefresh,
-                          unsigned int bpc)
+                          unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
        if (w != 0 && h != 0)
-               writel_relaxed(h << 16 | w, comp->regs + DISP_REG_OVL_ROI_SIZE);
-       writel_relaxed(0x0, comp->regs + DISP_REG_OVL_ROI_BGCLR);
+               mtk_ddp_write_relaxed(cmdq_pkt, h << 16 | w, comp,
+                                     DISP_REG_OVL_ROI_SIZE);
+       mtk_ddp_write_relaxed(cmdq_pkt, 0x0, comp, DISP_REG_OVL_ROI_BGCLR);
 
-       writel(0x1, comp->regs + DISP_REG_OVL_RST);
-       writel(0x0, comp->regs + DISP_REG_OVL_RST);
+       mtk_ddp_write(cmdq_pkt, 0x1, comp, DISP_REG_OVL_RST);
+       mtk_ddp_write(cmdq_pkt, 0x0, comp, DISP_REG_OVL_RST);
 }
 
 static unsigned int mtk_ovl_layer_nr(struct mtk_ddp_comp *comp)
@@ -175,16 +177,16 @@ static int mtk_ovl_layer_check(struct mtk_ddp_comp *comp, unsigned int idx,
        return 0;
 }
 
-static void mtk_ovl_layer_on(struct mtk_ddp_comp *comp, unsigned int idx)
+static void mtk_ovl_layer_on(struct mtk_ddp_comp *comp, unsigned int idx,
+                            struct cmdq_pkt *cmdq_pkt)
 {
-       unsigned int reg;
        unsigned int gmc_thrshd_l;
        unsigned int gmc_thrshd_h;
        unsigned int gmc_value;
        struct mtk_disp_ovl *ovl = comp_to_ovl(comp);
 
-       writel(0x1, comp->regs + DISP_REG_OVL_RDMA_CTRL(idx));
-
+       mtk_ddp_write(cmdq_pkt, 0x1, comp,
+                     DISP_REG_OVL_RDMA_CTRL(idx));
        gmc_thrshd_l = GMC_THRESHOLD_LOW >>
                      (GMC_THRESHOLD_BITS - ovl->data->gmc_bits);
        gmc_thrshd_h = GMC_THRESHOLD_HIGH >>
@@ -194,22 +196,19 @@ static void mtk_ovl_layer_on(struct mtk_ddp_comp *comp, unsigned int idx)
        else
                gmc_value = gmc_thrshd_l | gmc_thrshd_l << 8 |
                            gmc_thrshd_h << 16 | gmc_thrshd_h << 24;
-       writel(gmc_value, comp->regs + DISP_REG_OVL_RDMA_GMC(idx));
-
-       reg = readl(comp->regs + DISP_REG_OVL_SRC_CON);
-       reg = reg | BIT(idx);
-       writel(reg, comp->regs + DISP_REG_OVL_SRC_CON);
+       mtk_ddp_write(cmdq_pkt, gmc_value,
+                     comp, DISP_REG_OVL_RDMA_GMC(idx));
+       mtk_ddp_write_mask(cmdq_pkt, BIT(idx), comp,
+                          DISP_REG_OVL_SRC_CON, BIT(idx));
 }
 
-static void mtk_ovl_layer_off(struct mtk_ddp_comp *comp, unsigned int idx)
+static void mtk_ovl_layer_off(struct mtk_ddp_comp *comp, unsigned int idx,
+                             struct cmdq_pkt *cmdq_pkt)
 {
-       unsigned int reg;
-
-       reg = readl(comp->regs + DISP_REG_OVL_SRC_CON);
-       reg = reg & ~BIT(idx);
-       writel(reg, comp->regs + DISP_REG_OVL_SRC_CON);
-
-       writel(0x0, comp->regs + DISP_REG_OVL_RDMA_CTRL(idx));
+       mtk_ddp_write_mask(cmdq_pkt, 0, comp,
+                          DISP_REG_OVL_SRC_CON, BIT(idx));
+       mtk_ddp_write(cmdq_pkt, 0, comp,
+                     DISP_REG_OVL_RDMA_CTRL(idx));
 }
 
 static unsigned int ovl_fmt_convert(struct mtk_disp_ovl *ovl, unsigned int fmt)
@@ -249,7 +248,8 @@ static unsigned int ovl_fmt_convert(struct mtk_disp_ovl *ovl, unsigned int fmt)
 }
 
 static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx,
-                                struct mtk_plane_state *state)
+                                struct mtk_plane_state *state,
+                                struct cmdq_pkt *cmdq_pkt)
 {
        struct mtk_disp_ovl *ovl = comp_to_ovl(comp);
        struct mtk_plane_pending_state *pending = &state->pending;
@@ -260,11 +260,13 @@ static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx,
        unsigned int src_size = (pending->height << 16) | pending->width;
        unsigned int con;
 
-       if (!pending->enable)
-               mtk_ovl_layer_off(comp, idx);
+       if (!pending->enable) {
+               mtk_ovl_layer_off(comp, idx, cmdq_pkt);
+               return;
+       }
 
        con = ovl_fmt_convert(ovl, fmt);
-       if (idx != 0)
+       if (state->base.fb->format->has_alpha)
                con |= OVL_CON_AEN | OVL_CON_ALPHA;
 
        if (pending->rotation & DRM_MODE_REFLECT_Y) {
@@ -277,14 +279,18 @@ static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx,
                addr += pending->pitch - 1;
        }
 
-       writel_relaxed(con, comp->regs + DISP_REG_OVL_CON(idx));
-       writel_relaxed(pitch, comp->regs + DISP_REG_OVL_PITCH(idx));
-       writel_relaxed(src_size, comp->regs + DISP_REG_OVL_SRC_SIZE(idx));
-       writel_relaxed(offset, comp->regs + DISP_REG_OVL_OFFSET(idx));
-       writel_relaxed(addr, comp->regs + DISP_REG_OVL_ADDR(ovl, idx));
-
-       if (pending->enable)
-               mtk_ovl_layer_on(comp, idx);
+       mtk_ddp_write_relaxed(cmdq_pkt, con, comp,
+                             DISP_REG_OVL_CON(idx));
+       mtk_ddp_write_relaxed(cmdq_pkt, pitch, comp,
+                             DISP_REG_OVL_PITCH(idx));
+       mtk_ddp_write_relaxed(cmdq_pkt, src_size, comp,
+                             DISP_REG_OVL_SRC_SIZE(idx));
+       mtk_ddp_write_relaxed(cmdq_pkt, offset, comp,
+                             DISP_REG_OVL_OFFSET(idx));
+       mtk_ddp_write_relaxed(cmdq_pkt, addr, comp,
+                             DISP_REG_OVL_ADDR(ovl, idx));
+
+       mtk_ovl_layer_on(comp, idx, cmdq_pkt);
 }
 
 static void mtk_ovl_bgclr_in_on(struct mtk_ddp_comp *comp)
@@ -313,8 +319,6 @@ static const struct mtk_ddp_comp_funcs mtk_disp_ovl_funcs = {
        .disable_vblank = mtk_ovl_disable_vblank,
        .supported_rotations = mtk_ovl_supported_rotations,
        .layer_nr = mtk_ovl_layer_nr,
-       .layer_on = mtk_ovl_layer_on,
-       .layer_off = mtk_ovl_layer_off,
        .layer_check = mtk_ovl_layer_check,
        .layer_config = mtk_ovl_layer_config,
        .bgclr_in_on = mtk_ovl_bgclr_in_on,
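Most of the conversions above replace an explicit read-modify-write (`readl`, mask, `writel`) with a single `mtk_ddp_write_mask()` call, which a cmdq hardware thread can replay without CPU involvement. Its register-level effect can be modelled as follows (illustrative only; the real helper targets a cmdq packet or MMIO register, not a plain value):

```c
#include <assert.h>
#include <stdint.h>

/* Model of a masked register write: only the bits selected by `mask`
 * take their value from `value`; all other bits are preserved. */
static uint32_t reg_write_mask(uint32_t reg, uint32_t value, uint32_t mask)
{
	return (reg & ~mask) | (value & mask);
}
```

This is exactly the pattern used by `mtk_ovl_layer_on()`/`mtk_ovl_layer_off()` above to set and clear `BIT(idx)` in `DISP_REG_OVL_SRC_CON` without touching the other layers' enable bits.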
index 405afef..0cb848d 100644 (file)
@@ -9,6 +9,7 @@
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/platform_device.h>
+#include <linux/soc/mediatek/mtk-cmdq.h>
 
 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -125,14 +126,16 @@ static void mtk_rdma_stop(struct mtk_ddp_comp *comp)
 
 static void mtk_rdma_config(struct mtk_ddp_comp *comp, unsigned int width,
                            unsigned int height, unsigned int vrefresh,
-                           unsigned int bpc)
+                           unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
        unsigned int threshold;
        unsigned int reg;
        struct mtk_disp_rdma *rdma = comp_to_rdma(comp);
 
-       rdma_update_bits(comp, DISP_REG_RDMA_SIZE_CON_0, 0xfff, width);
-       rdma_update_bits(comp, DISP_REG_RDMA_SIZE_CON_1, 0xfffff, height);
+       mtk_ddp_write_mask(cmdq_pkt, width, comp,
+                          DISP_REG_RDMA_SIZE_CON_0, 0xfff);
+       mtk_ddp_write_mask(cmdq_pkt, height, comp,
+                          DISP_REG_RDMA_SIZE_CON_1, 0xfffff);
 
        /*
         * Enable FIFO underflow since DSI and DPI can't be blocked.
@@ -144,7 +147,7 @@ static void mtk_rdma_config(struct mtk_ddp_comp *comp, unsigned int width,
        reg = RDMA_FIFO_UNDERFLOW_EN |
              RDMA_FIFO_PSEUDO_SIZE(RDMA_FIFO_SIZE(rdma)) |
              RDMA_OUTPUT_VALID_FIFO_THRESHOLD(threshold);
-       writel(reg, comp->regs + DISP_REG_RDMA_FIFO_CON);
+       mtk_ddp_write(cmdq_pkt, reg, comp, DISP_REG_RDMA_FIFO_CON);
 }
 
 static unsigned int rdma_fmt_convert(struct mtk_disp_rdma *rdma,
@@ -190,7 +193,8 @@ static unsigned int mtk_rdma_layer_nr(struct mtk_ddp_comp *comp)
 }
 
 static void mtk_rdma_layer_config(struct mtk_ddp_comp *comp, unsigned int idx,
-                                 struct mtk_plane_state *state)
+                                 struct mtk_plane_state *state,
+                                 struct cmdq_pkt *cmdq_pkt)
 {
        struct mtk_disp_rdma *rdma = comp_to_rdma(comp);
        struct mtk_plane_pending_state *pending = &state->pending;
@@ -200,24 +204,27 @@ static void mtk_rdma_layer_config(struct mtk_ddp_comp *comp, unsigned int idx,
        unsigned int con;
 
        con = rdma_fmt_convert(rdma, fmt);
-       writel_relaxed(con, comp->regs + DISP_RDMA_MEM_CON);
+       mtk_ddp_write_relaxed(cmdq_pkt, con, comp, DISP_RDMA_MEM_CON);
 
        if (fmt == DRM_FORMAT_UYVY || fmt == DRM_FORMAT_YUYV) {
-               rdma_update_bits(comp, DISP_REG_RDMA_SIZE_CON_0,
-                                RDMA_MATRIX_ENABLE, RDMA_MATRIX_ENABLE);
-               rdma_update_bits(comp, DISP_REG_RDMA_SIZE_CON_0,
-                                RDMA_MATRIX_INT_MTX_SEL,
-                                RDMA_MATRIX_INT_MTX_BT601_to_RGB);
+               mtk_ddp_write_mask(cmdq_pkt, RDMA_MATRIX_ENABLE, comp,
+                                  DISP_REG_RDMA_SIZE_CON_0,
+                                  RDMA_MATRIX_ENABLE);
+               mtk_ddp_write_mask(cmdq_pkt, RDMA_MATRIX_INT_MTX_BT601_to_RGB,
+                                  comp, DISP_REG_RDMA_SIZE_CON_0,
+                                  RDMA_MATRIX_INT_MTX_SEL);
        } else {
-               rdma_update_bits(comp, DISP_REG_RDMA_SIZE_CON_0,
-                                RDMA_MATRIX_ENABLE, 0);
+               mtk_ddp_write_mask(cmdq_pkt, 0, comp,
+                                  DISP_REG_RDMA_SIZE_CON_0,
+                                  RDMA_MATRIX_ENABLE);
        }
+       mtk_ddp_write_relaxed(cmdq_pkt, addr, comp, DISP_RDMA_MEM_START_ADDR);
+       mtk_ddp_write_relaxed(cmdq_pkt, pitch, comp, DISP_RDMA_MEM_SRC_PITCH);
+       mtk_ddp_write(cmdq_pkt, RDMA_MEM_GMC, comp,
+                     DISP_RDMA_MEM_GMC_SETTING_0);
+       mtk_ddp_write_mask(cmdq_pkt, RDMA_MODE_MEMORY, comp,
+                          DISP_REG_RDMA_GLOBAL_CON, RDMA_MODE_MEMORY);
 
-       writel_relaxed(addr, comp->regs + DISP_RDMA_MEM_START_ADDR);
-       writel_relaxed(pitch, comp->regs + DISP_RDMA_MEM_SRC_PITCH);
-       writel(RDMA_MEM_GMC, comp->regs + DISP_RDMA_MEM_GMC_SETTING_0);
-       rdma_update_bits(comp, DISP_REG_RDMA_GLOBAL_CON,
-                        RDMA_MODE_MEMORY, RDMA_MODE_MEMORY);
 }
 
 static const struct mtk_ddp_comp_funcs mtk_disp_rdma_funcs = {
index f80a8ba..0dfcd17 100644 (file)
@@ -5,6 +5,7 @@
 
 #include <linux/clk.h>
 #include <linux/pm_runtime.h>
+#include <linux/soc/mediatek/mtk-cmdq.h>
 
 #include <asm/barrier.h>
 #include <soc/mediatek/smi.h>
@@ -42,11 +43,20 @@ struct mtk_drm_crtc {
        struct drm_plane                *planes;
        unsigned int                    layer_nr;
        bool                            pending_planes;
+       bool                            pending_async_planes;
+
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       struct cmdq_client              *cmdq_client;
+       u32                             cmdq_event;
+#endif
 
        void __iomem                    *config_regs;
        struct mtk_disp_mutex           *mutex;
        unsigned int                    ddp_comp_nr;
        struct mtk_ddp_comp             **ddp_comp;
+
+       /* lock for display hardware access */
+       struct mutex                    hw_lock;
 };
 
 struct mtk_crtc_state {
@@ -215,11 +225,12 @@ struct mtk_ddp_comp *mtk_drm_ddp_comp_for_plane(struct drm_crtc *crtc,
        struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
        struct mtk_ddp_comp *comp;
        int i, count = 0;
+       unsigned int local_index = plane - mtk_crtc->planes;
 
        for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
                comp = mtk_crtc->ddp_comp[i];
-               if (plane->index < (count + mtk_ddp_comp_layer_nr(comp))) {
-                       *local_layer = plane->index - count;
+               if (local_index < (count + mtk_ddp_comp_layer_nr(comp))) {
+                       *local_layer = local_index - count;
                        return comp;
                }
                count += mtk_ddp_comp_layer_nr(comp);
@@ -229,6 +240,13 @@ struct mtk_ddp_comp *mtk_drm_ddp_comp_for_plane(struct drm_crtc *crtc,
        return NULL;
 }
 
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+static void ddp_cmdq_cb(struct cmdq_cb_data data)
+{
+       cmdq_pkt_destroy(data.data);
+}
+#endif
+
 static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
 {
        struct drm_crtc *crtc = &mtk_crtc->base;
@@ -297,7 +315,7 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
                if (i == 1)
                        mtk_ddp_comp_bgclr_in_on(comp);
 
-               mtk_ddp_comp_config(comp, width, height, vrefresh, bpc);
+               mtk_ddp_comp_config(comp, width, height, vrefresh, bpc, NULL);
                mtk_ddp_comp_start(comp);
        }
 
@@ -310,7 +328,9 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
 
                plane_state = to_mtk_plane_state(plane->state);
                comp = mtk_drm_ddp_comp_for_plane(crtc, plane, &local_layer);
-               mtk_ddp_comp_layer_config(comp, local_layer, plane_state);
+               if (comp)
+                       mtk_ddp_comp_layer_config(comp, local_layer,
+                                                 plane_state, NULL);
        }
 
        return 0;
@@ -325,6 +345,7 @@ err_pm_runtime_put:
 static void mtk_crtc_ddp_hw_fini(struct mtk_drm_crtc *mtk_crtc)
 {
        struct drm_device *drm = mtk_crtc->base.dev;
+       struct drm_crtc *crtc = &mtk_crtc->base;
        int i;
 
        DRM_DEBUG_DRIVER("%s\n", __func__);
@@ -350,9 +371,17 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_drm_crtc *mtk_crtc)
        mtk_disp_mutex_unprepare(mtk_crtc->mutex);
 
        pm_runtime_put(drm->dev);
+
+       if (crtc->state->event && !crtc->state->active) {
+               spin_lock_irq(&crtc->dev->event_lock);
+               drm_crtc_send_vblank_event(crtc, crtc->state->event);
+               crtc->state->event = NULL;
+               spin_unlock_irq(&crtc->dev->event_lock);
+       }
 }
 
-static void mtk_crtc_ddp_config(struct drm_crtc *crtc)
+static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
+                               struct cmdq_pkt *cmdq_handle)
 {
        struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
        struct mtk_crtc_state *state = to_mtk_crtc_state(mtk_crtc->base.state);
@@ -368,7 +397,8 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc)
        if (state->pending_config) {
                mtk_ddp_comp_config(comp, state->pending_width,
                                    state->pending_height,
-                                   state->pending_vrefresh, 0);
+                                   state->pending_vrefresh, 0,
+                                   cmdq_handle);
 
                state->pending_config = false;
        }
@@ -386,12 +416,84 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc)
                        comp = mtk_drm_ddp_comp_for_plane(crtc, plane,
                                                          &local_layer);
 
-                       mtk_ddp_comp_layer_config(comp, local_layer,
-                                                 plane_state);
+                       if (comp)
+                               mtk_ddp_comp_layer_config(comp, local_layer,
+                                                         plane_state,
+                                                         cmdq_handle);
                        plane_state->pending.config = false;
                }
                mtk_crtc->pending_planes = false;
        }
+
+       if (mtk_crtc->pending_async_planes) {
+               for (i = 0; i < mtk_crtc->layer_nr; i++) {
+                       struct drm_plane *plane = &mtk_crtc->planes[i];
+                       struct mtk_plane_state *plane_state;
+
+                       plane_state = to_mtk_plane_state(plane->state);
+
+                       if (!plane_state->pending.async_config)
+                               continue;
+
+                       comp = mtk_drm_ddp_comp_for_plane(crtc, plane,
+                                                         &local_layer);
+
+                       if (comp)
+                               mtk_ddp_comp_layer_config(comp, local_layer,
+                                                         plane_state,
+                                                         cmdq_handle);
+                       plane_state->pending.async_config = false;
+               }
+               mtk_crtc->pending_async_planes = false;
+       }
+}
+
+static void mtk_drm_crtc_hw_config(struct mtk_drm_crtc *mtk_crtc)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       struct cmdq_pkt *cmdq_handle;
+#endif
+       struct drm_crtc *crtc = &mtk_crtc->base;
+       struct mtk_drm_private *priv = crtc->dev->dev_private;
+       unsigned int pending_planes = 0, pending_async_planes = 0;
+       int i;
+
+       mutex_lock(&mtk_crtc->hw_lock);
+       for (i = 0; i < mtk_crtc->layer_nr; i++) {
+               struct drm_plane *plane = &mtk_crtc->planes[i];
+               struct mtk_plane_state *plane_state;
+
+               plane_state = to_mtk_plane_state(plane->state);
+               if (plane_state->pending.dirty) {
+                       plane_state->pending.config = true;
+                       plane_state->pending.dirty = false;
+                       pending_planes |= BIT(i);
+               } else if (plane_state->pending.async_dirty) {
+                       plane_state->pending.async_config = true;
+                       plane_state->pending.async_dirty = false;
+                       pending_async_planes |= BIT(i);
+               }
+       }
+       if (pending_planes)
+               mtk_crtc->pending_planes = true;
+       if (pending_async_planes)
+               mtk_crtc->pending_async_planes = true;
+
+       if (priv->data->shadow_register) {
+               mtk_disp_mutex_acquire(mtk_crtc->mutex);
+               mtk_crtc_ddp_config(crtc, NULL);
+               mtk_disp_mutex_release(mtk_crtc->mutex);
+       }
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       if (mtk_crtc->cmdq_client) {
+               cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
+               cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
+               cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event);
+               mtk_crtc_ddp_config(crtc, cmdq_handle);
+               cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle);
+       }
+#endif
+       mutex_unlock(&mtk_crtc->hw_lock);
 }
 
 int mtk_drm_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
@@ -401,7 +503,23 @@ int mtk_drm_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
        struct mtk_ddp_comp *comp;
 
        comp = mtk_drm_ddp_comp_for_plane(crtc, plane, &local_layer);
-       return mtk_ddp_comp_layer_check(comp, local_layer, state);
+       if (comp)
+               return mtk_ddp_comp_layer_check(comp, local_layer, state);
+       return 0;
+}
+
+void mtk_drm_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
+                              struct drm_plane_state *new_state)
+{
+       struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+       const struct drm_plane_helper_funcs *plane_helper_funcs =
+                       plane->helper_private;
+
+       if (!mtk_crtc->enabled)
+               return;
+
+       plane_helper_funcs->atomic_update(plane, new_state);
+       mtk_drm_crtc_hw_config(mtk_crtc);
 }
 
 static void mtk_drm_crtc_atomic_enable(struct drm_crtc *crtc,
@@ -451,6 +569,7 @@ static void mtk_drm_crtc_atomic_disable(struct drm_crtc *crtc,
        }
        mtk_crtc->pending_planes = true;
 
+       mtk_drm_crtc_hw_config(mtk_crtc);
        /* Wait for planes to be disabled */
        drm_crtc_wait_one_vblank(crtc);
 
@@ -482,34 +601,16 @@ static void mtk_drm_crtc_atomic_flush(struct drm_crtc *crtc,
                                      struct drm_crtc_state *old_crtc_state)
 {
        struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
-       struct mtk_drm_private *priv = crtc->dev->dev_private;
-       unsigned int pending_planes = 0;
        int i;
 
        if (mtk_crtc->event)
                mtk_crtc->pending_needs_vblank = true;
-       for (i = 0; i < mtk_crtc->layer_nr; i++) {
-               struct drm_plane *plane = &mtk_crtc->planes[i];
-               struct mtk_plane_state *plane_state;
-
-               plane_state = to_mtk_plane_state(plane->state);
-               if (plane_state->pending.dirty) {
-                       plane_state->pending.config = true;
-                       plane_state->pending.dirty = false;
-                       pending_planes |= BIT(i);
-               }
-       }
-       if (pending_planes)
-               mtk_crtc->pending_planes = true;
        if (crtc->state->color_mgmt_changed)
-               for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
+               for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
                        mtk_ddp_gamma_set(mtk_crtc->ddp_comp[i], crtc->state);
-
-       if (priv->data->shadow_register) {
-               mtk_disp_mutex_acquire(mtk_crtc->mutex);
-               mtk_crtc_ddp_config(crtc);
-               mtk_disp_mutex_release(mtk_crtc->mutex);
-       }
+                       mtk_ddp_ctm_set(mtk_crtc->ddp_comp[i], crtc->state);
+               }
+       mtk_drm_crtc_hw_config(mtk_crtc);
 }
 
 static const struct drm_crtc_funcs mtk_crtc_funcs = {
@@ -559,8 +660,12 @@ void mtk_crtc_ddp_irq(struct drm_crtc *crtc, struct mtk_ddp_comp *comp)
        struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
        struct mtk_drm_private *priv = crtc->dev->dev_private;
 
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       if (!priv->data->shadow_register && !mtk_crtc->cmdq_client)
+#else
        if (!priv->data->shadow_register)
-               mtk_crtc_ddp_config(crtc);
+#endif
+               mtk_crtc_ddp_config(crtc, NULL);
 
        mtk_drm_finish_page_flip(mtk_crtc);
 }
@@ -627,6 +732,8 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
        int pipe = priv->num_pipes;
        int ret;
        int i;
+       bool has_ctm = false;
+       uint gamma_lut_size = 0;
 
        if (!path)
                return 0;
@@ -677,6 +784,14 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
                }
 
                mtk_crtc->ddp_comp[i] = comp;
+
+               if (comp->funcs) {
+                       if (comp->funcs->gamma_set)
+                               gamma_lut_size = MTK_LUT_SIZE;
+
+                       if (comp->funcs->ctm_set)
+                               has_ctm = true;
+               }
        }
 
        for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
@@ -697,9 +812,28 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
                                NULL, pipe);
        if (ret < 0)
                return ret;
-       drm_mode_crtc_set_gamma_size(&mtk_crtc->base, MTK_LUT_SIZE);
-       drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, false, MTK_LUT_SIZE);
-       priv->num_pipes++;
 
+       if (gamma_lut_size)
+               drm_mode_crtc_set_gamma_size(&mtk_crtc->base, gamma_lut_size);
+       drm_crtc_enable_color_mgmt(&mtk_crtc->base, 0, has_ctm, gamma_lut_size);
+       priv->num_pipes++;
+       mutex_init(&mtk_crtc->hw_lock);
+
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       mtk_crtc->cmdq_client =
+                       cmdq_mbox_create(dev, drm_crtc_index(&mtk_crtc->base),
+                                        2000);
+       if (IS_ERR(mtk_crtc->cmdq_client)) {
+               dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n",
+                       drm_crtc_index(&mtk_crtc->base));
+               mtk_crtc->cmdq_client = NULL;
+       }
+       ret = of_property_read_u32_index(dev->of_node, "mediatek,gce-events",
+                                        drm_crtc_index(&mtk_crtc->base),
+                                        &mtk_crtc->cmdq_event);
+       if (ret)
+               dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
+                       drm_crtc_index(&mtk_crtc->base));
+#endif
        return 0;
 }
index 6afe1c1..a2b4677 100644
@@ -21,5 +21,7 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
                        unsigned int path_len);
 int mtk_drm_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
                             struct mtk_plane_state *state);
+void mtk_drm_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
+                              struct drm_plane_state *plane_state);
 
 #endif /* MTK_DRM_CRTC_H */
index 7f21307..1f5a112 100644
@@ -12,7 +12,7 @@
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
-
+#include <linux/soc/mediatek/mtk-cmdq.h>
 #include "mtk_drm_drv.h"
 #include "mtk_drm_plane.h"
 #include "mtk_drm_ddp_comp.h"
 #define CCORR_EN                               BIT(0)
 #define DISP_CCORR_CFG                         0x0020
 #define CCORR_RELAY_MODE                       BIT(0)
+#define CCORR_ENGINE_EN                                BIT(1)
+#define CCORR_GAMMA_OFF                                BIT(2)
+#define CCORR_WGAMUT_SRC_CLIP                  BIT(3)
 #define DISP_CCORR_SIZE                                0x0030
+#define DISP_CCORR_COEF_0                      0x0080
+#define DISP_CCORR_COEF_1                      0x0084
+#define DISP_CCORR_COEF_2                      0x0088
+#define DISP_CCORR_COEF_3                      0x008C
+#define DISP_CCORR_COEF_4                      0x0090
 
 #define DISP_DITHER_EN                         0x0000
 #define DITHER_EN                              BIT(0)
 #define DITHER_ADD_LSHIFT_G(x)                 (((x) & 0x7) << 4)
 #define DITHER_ADD_RSHIFT_G(x)                 (((x) & 0x7) << 0)
 
+void mtk_ddp_write(struct cmdq_pkt *cmdq_pkt, unsigned int value,
+                  struct mtk_ddp_comp *comp, unsigned int offset)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       if (cmdq_pkt)
+               cmdq_pkt_write(cmdq_pkt, comp->subsys,
+                              comp->regs_pa + offset, value);
+       else
+#endif
+               writel(value, comp->regs + offset);
+}
+
+void mtk_ddp_write_relaxed(struct cmdq_pkt *cmdq_pkt, unsigned int value,
+                          struct mtk_ddp_comp *comp,
+                          unsigned int offset)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       if (cmdq_pkt)
+               cmdq_pkt_write(cmdq_pkt, comp->subsys,
+                              comp->regs_pa + offset, value);
+       else
+#endif
+               writel_relaxed(value, comp->regs + offset);
+}
+
+void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt,
+                       unsigned int value,
+                       struct mtk_ddp_comp *comp,
+                       unsigned int offset,
+                       unsigned int mask)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       if (cmdq_pkt) {
+               cmdq_pkt_write_mask(cmdq_pkt, comp->subsys,
+                                   comp->regs_pa + offset, value, mask);
+       } else {
+#endif
+               u32 tmp = readl(comp->regs + offset);
+
+               tmp = (tmp & ~mask) | (value & mask);
+               writel(tmp, comp->regs + offset);
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       }
+#endif
+}
+
 void mtk_dither_set(struct mtk_ddp_comp *comp, unsigned int bpc,
-                   unsigned int CFG)
+                   unsigned int CFG, struct cmdq_pkt *cmdq_pkt)
 {
 	/* If bpc is zero, the dithering function is not enabled */
        if (bpc == 0)
                return;
 
        if (bpc >= MTK_MIN_BPC) {
-               writel(0, comp->regs + DISP_DITHER_5);
-               writel(0, comp->regs + DISP_DITHER_7);
-               writel(DITHER_LSB_ERR_SHIFT_R(MTK_MAX_BPC - bpc) |
-                      DITHER_ADD_LSHIFT_R(MTK_MAX_BPC - bpc) |
-                      DITHER_NEW_BIT_MODE,
-                      comp->regs + DISP_DITHER_15);
-               writel(DITHER_LSB_ERR_SHIFT_B(MTK_MAX_BPC - bpc) |
-                      DITHER_ADD_LSHIFT_B(MTK_MAX_BPC - bpc) |
-                      DITHER_LSB_ERR_SHIFT_G(MTK_MAX_BPC - bpc) |
-                      DITHER_ADD_LSHIFT_G(MTK_MAX_BPC - bpc),
-                      comp->regs + DISP_DITHER_16);
-               writel(DISP_DITHERING, comp->regs + CFG);
+               mtk_ddp_write(cmdq_pkt, 0, comp, DISP_DITHER_5);
+               mtk_ddp_write(cmdq_pkt, 0, comp, DISP_DITHER_7);
+               mtk_ddp_write(cmdq_pkt,
+                             DITHER_LSB_ERR_SHIFT_R(MTK_MAX_BPC - bpc) |
+                             DITHER_ADD_LSHIFT_R(MTK_MAX_BPC - bpc) |
+                             DITHER_NEW_BIT_MODE,
+                             comp, DISP_DITHER_15);
+               mtk_ddp_write(cmdq_pkt,
+                             DITHER_LSB_ERR_SHIFT_B(MTK_MAX_BPC - bpc) |
+                             DITHER_ADD_LSHIFT_B(MTK_MAX_BPC - bpc) |
+                             DITHER_LSB_ERR_SHIFT_G(MTK_MAX_BPC - bpc) |
+                             DITHER_ADD_LSHIFT_G(MTK_MAX_BPC - bpc),
+                             comp, DISP_DITHER_16);
+               mtk_ddp_write(cmdq_pkt, DISP_DITHERING, comp, CFG);
        }
 }
 
 static void mtk_od_config(struct mtk_ddp_comp *comp, unsigned int w,
                          unsigned int h, unsigned int vrefresh,
-                         unsigned int bpc)
+                         unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
-       writel(w << 16 | h, comp->regs + DISP_OD_SIZE);
-       writel(OD_RELAYMODE, comp->regs + DISP_OD_CFG);
-       mtk_dither_set(comp, bpc, DISP_OD_CFG);
+       mtk_ddp_write(cmdq_pkt, w << 16 | h, comp, DISP_OD_SIZE);
+       mtk_ddp_write(cmdq_pkt, OD_RELAYMODE, comp, DISP_OD_CFG);
+       mtk_dither_set(comp, bpc, DISP_OD_CFG, cmdq_pkt);
 }
 
 static void mtk_od_start(struct mtk_ddp_comp *comp)
@@ -120,9 +176,9 @@ static void mtk_ufoe_start(struct mtk_ddp_comp *comp)
 
 static void mtk_aal_config(struct mtk_ddp_comp *comp, unsigned int w,
                           unsigned int h, unsigned int vrefresh,
-                          unsigned int bpc)
+                          unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
-       writel(h << 16 | w, comp->regs + DISP_AAL_SIZE);
+       mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_AAL_SIZE);
 }
 
 static void mtk_aal_start(struct mtk_ddp_comp *comp)
@@ -137,10 +193,10 @@ static void mtk_aal_stop(struct mtk_ddp_comp *comp)
 
 static void mtk_ccorr_config(struct mtk_ddp_comp *comp, unsigned int w,
                             unsigned int h, unsigned int vrefresh,
-                            unsigned int bpc)
+                            unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
-       writel(h << 16 | w, comp->regs + DISP_CCORR_SIZE);
-       writel(CCORR_RELAY_MODE, comp->regs + DISP_CCORR_CFG);
+       mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_CCORR_SIZE);
+       mtk_ddp_write(cmdq_pkt, CCORR_ENGINE_EN, comp, DISP_CCORR_CFG);
 }
 
 static void mtk_ccorr_start(struct mtk_ddp_comp *comp)
@@ -153,12 +209,63 @@ static void mtk_ccorr_stop(struct mtk_ddp_comp *comp)
        writel_relaxed(0x0, comp->regs + DISP_CCORR_EN);
 }
 
+/* Converts a DRM S31.32 value to the HW S1.10 format. */
+static u16 mtk_ctm_s31_32_to_s1_10(u64 in)
+{
+       u16 r;
+
+       /* Sign bit. */
+       r = in & BIT_ULL(63) ? BIT(11) : 0;
+
+       if ((in & GENMASK_ULL(62, 33)) > 0) {
+               /*
+                * The identity value 0x100000000 maps to 0x400; any
+                * larger magnitude is clamped to the maximum, 0x7ff.
+                */
+               r |= GENMASK(10, 0);
+       } else {
+               /* Take the 11 most significant bits. */
+               r |= (in >> 22) & GENMASK(10, 0);
+       }
+
+       return r;
+}
+
+static void mtk_ccorr_ctm_set(struct mtk_ddp_comp *comp,
+                             struct drm_crtc_state *state)
+{
+       struct drm_property_blob *blob = state->ctm;
+       struct drm_color_ctm *ctm;
+       const u64 *input;
+       uint16_t coeffs[9] = { 0 };
+       int i;
+       struct cmdq_pkt *cmdq_pkt = NULL;
+
+       if (!blob)
+               return;
+
+       ctm = (struct drm_color_ctm *)blob->data;
+       input = ctm->matrix;
+
+       for (i = 0; i < ARRAY_SIZE(coeffs); i++)
+               coeffs[i] = mtk_ctm_s31_32_to_s1_10(input[i]);
+
+       mtk_ddp_write(cmdq_pkt, coeffs[0] << 16 | coeffs[1],
+                     comp, DISP_CCORR_COEF_0);
+       mtk_ddp_write(cmdq_pkt, coeffs[2] << 16 | coeffs[3],
+                     comp, DISP_CCORR_COEF_1);
+       mtk_ddp_write(cmdq_pkt, coeffs[4] << 16 | coeffs[5],
+                     comp, DISP_CCORR_COEF_2);
+       mtk_ddp_write(cmdq_pkt, coeffs[6] << 16 | coeffs[7],
+                     comp, DISP_CCORR_COEF_3);
+       mtk_ddp_write(cmdq_pkt, coeffs[8] << 16,
+                     comp, DISP_CCORR_COEF_4);
+}
+
 static void mtk_dither_config(struct mtk_ddp_comp *comp, unsigned int w,
                              unsigned int h, unsigned int vrefresh,
-                             unsigned int bpc)
+                             unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
-       writel(h << 16 | w, comp->regs + DISP_DITHER_SIZE);
-       writel(DITHER_RELAY_MODE, comp->regs + DISP_DITHER_CFG);
+       mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_DITHER_SIZE);
+       mtk_ddp_write(cmdq_pkt, DITHER_RELAY_MODE, comp, DISP_DITHER_CFG);
 }
 
 static void mtk_dither_start(struct mtk_ddp_comp *comp)
@@ -173,10 +280,10 @@ static void mtk_dither_stop(struct mtk_ddp_comp *comp)
 
 static void mtk_gamma_config(struct mtk_ddp_comp *comp, unsigned int w,
                             unsigned int h, unsigned int vrefresh,
-                            unsigned int bpc)
+                            unsigned int bpc, struct cmdq_pkt *cmdq_pkt)
 {
-       writel(h << 16 | w, comp->regs + DISP_GAMMA_SIZE);
-       mtk_dither_set(comp, bpc, DISP_GAMMA_CFG);
+       mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_GAMMA_SIZE);
+       mtk_dither_set(comp, bpc, DISP_GAMMA_CFG, cmdq_pkt);
 }
 
 static void mtk_gamma_start(struct mtk_ddp_comp *comp)
@@ -223,6 +330,7 @@ static const struct mtk_ddp_comp_funcs ddp_ccorr = {
        .config = mtk_ccorr_config,
        .start = mtk_ccorr_start,
        .stop = mtk_ccorr_stop,
+       .ctm_set = mtk_ccorr_ctm_set,
 };
 
 static const struct mtk_ddp_comp_funcs ddp_dither = {
@@ -326,6 +434,11 @@ int mtk_ddp_comp_init(struct device *dev, struct device_node *node,
        enum mtk_ddp_comp_type type;
        struct device_node *larb_node;
        struct platform_device *larb_pdev;
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       struct resource res;
+       struct cmdq_client_reg cmdq_reg;
+       int ret;
+#endif
 
        if (comp_id < 0 || comp_id >= DDP_COMPONENT_ID_MAX)
                return -EINVAL;
@@ -379,6 +492,19 @@ int mtk_ddp_comp_init(struct device *dev, struct device_node *node,
 
        comp->larb_dev = &larb_pdev->dev;
 
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+       if (of_address_to_resource(node, 0, &res) != 0) {
+               dev_err(dev, "Missing reg in %s node\n", node->full_name);
+               return -EINVAL;
+       }
+       comp->regs_pa = res.start;
+
+       ret = cmdq_dev_get_client_reg(dev, &cmdq_reg, 0);
+       if (ret)
+               dev_dbg(dev, "failed to get mediatek,gce-client-reg\n");
+       else
+               comp->subsys = cmdq_reg.subsys;
+#endif
        return 0;
 }
 
index 2f1e9e7..debe363 100644
@@ -69,27 +69,29 @@ enum mtk_ddp_comp_id {
 };
 
 struct mtk_ddp_comp;
-
+struct cmdq_pkt;
 struct mtk_ddp_comp_funcs {
        void (*config)(struct mtk_ddp_comp *comp, unsigned int w,
-                      unsigned int h, unsigned int vrefresh, unsigned int bpc);
+                      unsigned int h, unsigned int vrefresh,
+                      unsigned int bpc, struct cmdq_pkt *cmdq_pkt);
        void (*start)(struct mtk_ddp_comp *comp);
        void (*stop)(struct mtk_ddp_comp *comp);
        void (*enable_vblank)(struct mtk_ddp_comp *comp, struct drm_crtc *crtc);
        void (*disable_vblank)(struct mtk_ddp_comp *comp);
        unsigned int (*supported_rotations)(struct mtk_ddp_comp *comp);
        unsigned int (*layer_nr)(struct mtk_ddp_comp *comp);
-       void (*layer_on)(struct mtk_ddp_comp *comp, unsigned int idx);
-       void (*layer_off)(struct mtk_ddp_comp *comp, unsigned int idx);
        int (*layer_check)(struct mtk_ddp_comp *comp,
                           unsigned int idx,
                           struct mtk_plane_state *state);
        void (*layer_config)(struct mtk_ddp_comp *comp, unsigned int idx,
-                            struct mtk_plane_state *state);
+                            struct mtk_plane_state *state,
+                            struct cmdq_pkt *cmdq_pkt);
        void (*gamma_set)(struct mtk_ddp_comp *comp,
                          struct drm_crtc_state *state);
        void (*bgclr_in_on)(struct mtk_ddp_comp *comp);
        void (*bgclr_in_off)(struct mtk_ddp_comp *comp);
+       void (*ctm_set)(struct mtk_ddp_comp *comp,
+                       struct drm_crtc_state *state);
 };
 
 struct mtk_ddp_comp {
@@ -99,14 +101,17 @@ struct mtk_ddp_comp {
        struct device *larb_dev;
        enum mtk_ddp_comp_id id;
        const struct mtk_ddp_comp_funcs *funcs;
+       resource_size_t regs_pa;
+       u8 subsys;
 };
 
 static inline void mtk_ddp_comp_config(struct mtk_ddp_comp *comp,
                                       unsigned int w, unsigned int h,
-                                      unsigned int vrefresh, unsigned int bpc)
+                                      unsigned int vrefresh, unsigned int bpc,
+                                      struct cmdq_pkt *cmdq_pkt)
 {
        if (comp->funcs && comp->funcs->config)
-               comp->funcs->config(comp, w, h, vrefresh, bpc);
+               comp->funcs->config(comp, w, h, vrefresh, bpc, cmdq_pkt);
 }
 
 static inline void mtk_ddp_comp_start(struct mtk_ddp_comp *comp)
@@ -151,20 +156,6 @@ static inline unsigned int mtk_ddp_comp_layer_nr(struct mtk_ddp_comp *comp)
        return 0;
 }
 
-static inline void mtk_ddp_comp_layer_on(struct mtk_ddp_comp *comp,
-                                        unsigned int idx)
-{
-       if (comp->funcs && comp->funcs->layer_on)
-               comp->funcs->layer_on(comp, idx);
-}
-
-static inline void mtk_ddp_comp_layer_off(struct mtk_ddp_comp *comp,
-                                         unsigned int idx)
-{
-       if (comp->funcs && comp->funcs->layer_off)
-               comp->funcs->layer_off(comp, idx);
-}
-
 static inline int mtk_ddp_comp_layer_check(struct mtk_ddp_comp *comp,
                                           unsigned int idx,
                                           struct mtk_plane_state *state)
@@ -176,10 +167,11 @@ static inline int mtk_ddp_comp_layer_check(struct mtk_ddp_comp *comp,
 
 static inline void mtk_ddp_comp_layer_config(struct mtk_ddp_comp *comp,
                                             unsigned int idx,
-                                            struct mtk_plane_state *state)
+                                            struct mtk_plane_state *state,
+                                            struct cmdq_pkt *cmdq_pkt)
 {
        if (comp->funcs && comp->funcs->layer_config)
-               comp->funcs->layer_config(comp, idx, state);
+               comp->funcs->layer_config(comp, idx, state, cmdq_pkt);
 }
 
 static inline void mtk_ddp_gamma_set(struct mtk_ddp_comp *comp,
@@ -201,6 +193,13 @@ static inline void mtk_ddp_comp_bgclr_in_off(struct mtk_ddp_comp *comp)
                comp->funcs->bgclr_in_off(comp);
 }
 
+static inline void mtk_ddp_ctm_set(struct mtk_ddp_comp *comp,
+                                  struct drm_crtc_state *state)
+{
+       if (comp->funcs && comp->funcs->ctm_set)
+               comp->funcs->ctm_set(comp, state);
+}
+
 int mtk_ddp_comp_get_id(struct device_node *node,
                        enum mtk_ddp_comp_type comp_type);
 int mtk_ddp_comp_init(struct device *dev, struct device_node *comp_node,
@@ -209,6 +208,13 @@ int mtk_ddp_comp_init(struct device *dev, struct device_node *comp_node,
 int mtk_ddp_comp_register(struct drm_device *drm, struct mtk_ddp_comp *comp);
 void mtk_ddp_comp_unregister(struct drm_device *drm, struct mtk_ddp_comp *comp);
 void mtk_dither_set(struct mtk_ddp_comp *comp, unsigned int bpc,
-                   unsigned int CFG);
-
+                   unsigned int CFG, struct cmdq_pkt *cmdq_pkt);
+enum mtk_ddp_comp_type mtk_ddp_comp_get_type(enum mtk_ddp_comp_id comp_id);
+void mtk_ddp_write(struct cmdq_pkt *cmdq_pkt, unsigned int value,
+                  struct mtk_ddp_comp *comp, unsigned int offset);
+void mtk_ddp_write_relaxed(struct cmdq_pkt *cmdq_pkt, unsigned int value,
+                          struct mtk_ddp_comp *comp, unsigned int offset);
+void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, unsigned int value,
+                       struct mtk_ddp_comp *comp, unsigned int offset,
+                       unsigned int mask);
 #endif /* MTK_DRM_DDP_COMP_H */
index 2b1c122..0563c68 100644
 #define DRIVER_MAJOR 1
 #define DRIVER_MINOR 0
 
-static void mtk_atomic_schedule(struct mtk_drm_private *private,
-                               struct drm_atomic_state *state)
-{
-       private->commit.state = state;
-       schedule_work(&private->commit.work);
-}
-
-static void mtk_atomic_complete(struct mtk_drm_private *private,
-                               struct drm_atomic_state *state)
-{
-       struct drm_device *drm = private->drm;
-
-       drm_atomic_helper_wait_for_fences(drm, state, false);
-
-       /*
-        * Mediatek drm supports runtime PM, so plane registers cannot be
-        * written when their crtc is disabled.
-        *
-        * The comment for drm_atomic_helper_commit states:
-        *     For drivers supporting runtime PM the recommended sequence is
-        *
-        *     drm_atomic_helper_commit_modeset_disables(dev, state);
-        *     drm_atomic_helper_commit_modeset_enables(dev, state);
-        *     drm_atomic_helper_commit_planes(dev, state,
-        *                                     DRM_PLANE_COMMIT_ACTIVE_ONLY);
-        *
-        * See the kerneldoc entries for these three functions for more details.
-        */
-       drm_atomic_helper_commit_modeset_disables(drm, state);
-       drm_atomic_helper_commit_modeset_enables(drm, state);
-       drm_atomic_helper_commit_planes(drm, state,
-                                       DRM_PLANE_COMMIT_ACTIVE_ONLY);
-
-       drm_atomic_helper_wait_for_vblanks(drm, state);
-
-       drm_atomic_helper_cleanup_planes(drm, state);
-       drm_atomic_state_put(state);
-}
-
-static void mtk_atomic_work(struct work_struct *work)
-{
-       struct mtk_drm_private *private = container_of(work,
-                       struct mtk_drm_private, commit.work);
-
-       mtk_atomic_complete(private, private->commit.state);
-}
-
-static int mtk_atomic_commit(struct drm_device *drm,
-                            struct drm_atomic_state *state,
-                            bool async)
-{
-       struct mtk_drm_private *private = drm->dev_private;
-       int ret;
-
-       ret = drm_atomic_helper_prepare_planes(drm, state);
-       if (ret)
-               return ret;
-
-       mutex_lock(&private->commit.lock);
-       flush_work(&private->commit.work);
-
-       ret = drm_atomic_helper_swap_state(state, true);
-       if (ret) {
-               mutex_unlock(&private->commit.lock);
-               drm_atomic_helper_cleanup_planes(drm, state);
-               return ret;
-       }
-
-       drm_atomic_state_get(state);
-       if (async)
-               mtk_atomic_schedule(private, state);
-       else
-               mtk_atomic_complete(private, state);
-
-       mutex_unlock(&private->commit.lock);
-
-       return 0;
-}
+static const struct drm_mode_config_helper_funcs mtk_drm_mode_config_helpers = {
+       .atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
+};
 
 static struct drm_framebuffer *
 mtk_drm_mode_fb_create(struct drm_device *dev,
@@ -132,7 +57,7 @@ mtk_drm_mode_fb_create(struct drm_device *dev,
 static const struct drm_mode_config_funcs mtk_drm_mode_config_funcs = {
        .fb_create = mtk_drm_mode_fb_create,
        .atomic_check = drm_atomic_helper_check,
-       .atomic_commit = mtk_atomic_commit,
+       .atomic_commit = drm_atomic_helper_commit,
 };
 
 static const enum mtk_ddp_comp_id mt2701_mtk_ddp_main[] = {
@@ -250,6 +175,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
        drm->mode_config.max_width = 4096;
        drm->mode_config.max_height = 4096;
        drm->mode_config.funcs = &mtk_drm_mode_config_funcs;
+       drm->mode_config.helper_private = &mtk_drm_mode_config_helpers;
 
        ret = component_bind_all(drm->dev, drm);
        if (ret)
@@ -509,8 +435,6 @@ static int mtk_drm_probe(struct platform_device *pdev)
        if (!private)
                return -ENOMEM;
 
-       mutex_init(&private->commit.lock);
-       INIT_WORK(&private->commit.work, mtk_atomic_work);
        private->data = of_device_get_match_data(dev);
 
        mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
index e03fea1..17bc99b 100644
@@ -43,13 +43,6 @@ struct mtk_drm_private {
        struct device_node *comp_node[DDP_COMPONENT_ID_MAX];
        struct mtk_ddp_comp *ddp_comp[DDP_COMPONENT_ID_MAX];
        const struct mtk_mmsys_driver_data *data;
-
-       struct {
-               struct drm_atomic_state *state;
-               struct work_struct work;
-               struct mutex lock;
-       } commit;
-
        struct drm_atomic_state *suspend_state;
 
        bool dma_parms_allocated;
index f0b0325..914cc76 100644
@@ -7,6 +7,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_atomic_uapi.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 
@@ -75,6 +76,50 @@ static void mtk_drm_plane_destroy_state(struct drm_plane *plane,
        kfree(to_mtk_plane_state(state));
 }
 
+static int mtk_plane_atomic_async_check(struct drm_plane *plane,
+                                       struct drm_plane_state *state)
+{
+       struct drm_crtc_state *crtc_state;
+
+       if (plane != state->crtc->cursor)
+               return -EINVAL;
+
+       if (!plane->state)
+               return -EINVAL;
+
+       if (!plane->state->fb)
+               return -EINVAL;
+
+       if (state->state)
+               crtc_state = drm_atomic_get_existing_crtc_state(state->state,
+                                                               state->crtc);
+       else /* Special case for asynchronous cursor updates. */
+               crtc_state = state->crtc->state;
+
+       return drm_atomic_helper_check_plane_state(plane->state, crtc_state,
+                                                  DRM_PLANE_HELPER_NO_SCALING,
+                                                  DRM_PLANE_HELPER_NO_SCALING,
+                                                  true, true);
+}
+
+static void mtk_plane_atomic_async_update(struct drm_plane *plane,
+                                         struct drm_plane_state *new_state)
+{
+       struct mtk_plane_state *state = to_mtk_plane_state(plane->state);
+
+       plane->state->crtc_x = new_state->crtc_x;
+       plane->state->crtc_y = new_state->crtc_y;
+       plane->state->crtc_h = new_state->crtc_h;
+       plane->state->crtc_w = new_state->crtc_w;
+       plane->state->src_x = new_state->src_x;
+       plane->state->src_y = new_state->src_y;
+       plane->state->src_h = new_state->src_h;
+       plane->state->src_w = new_state->src_w;
+       state->pending.async_dirty = true;
+
+       mtk_drm_crtc_async_update(new_state->crtc, plane, new_state);
+}
+
 static const struct drm_plane_funcs mtk_plane_funcs = {
        .update_plane = drm_atomic_helper_update_plane,
        .disable_plane = drm_atomic_helper_disable_plane,
@@ -163,6 +208,8 @@ static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
        .atomic_check = mtk_plane_atomic_check,
        .atomic_update = mtk_plane_atomic_update,
        .atomic_disable = mtk_plane_atomic_disable,
+       .atomic_async_update = mtk_plane_atomic_async_update,
+       .atomic_async_check = mtk_plane_atomic_async_check,
 };
 
 int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
index 760885e..d454bec 100644
@@ -22,6 +22,8 @@ struct mtk_plane_pending_state {
        unsigned int                    height;
        unsigned int                    rotation;
        bool                            dirty;
+       bool                            async_dirty;
+       bool                            async_config;
 };
 
 struct mtk_plane_state {
index f9a0c8e..04fdf38 100644
@@ -135,7 +135,7 @@ struct meson_drm {
        } venc;
 
        struct {
-               dma_addr_t addr_phys;
+               dma_addr_t addr_dma;
                uint32_t *addr;
                unsigned int offset;
        } rdma;
index 25b34b1..1303821 100644
@@ -27,7 +27,7 @@ int meson_rdma_init(struct meson_drm *priv)
                /* Allocate a PAGE buffer */
                priv->rdma.addr =
                        dma_alloc_coherent(priv->dev, SZ_4K,
-                                          &priv->rdma.addr_phys,
+                                          &priv->rdma.addr_dma,
                                           GFP_KERNEL);
                if (!priv->rdma.addr)
                        return -ENOMEM;
@@ -47,16 +47,16 @@ int meson_rdma_init(struct meson_drm *priv)
 
 void meson_rdma_free(struct meson_drm *priv)
 {
-       if (!priv->rdma.addr && !priv->rdma.addr_phys)
+       if (!priv->rdma.addr && !priv->rdma.addr_dma)
                return;
 
        meson_rdma_stop(priv);
 
        dma_free_coherent(priv->dev, SZ_4K,
-                         priv->rdma.addr, priv->rdma.addr_phys);
+                         priv->rdma.addr, priv->rdma.addr_dma);
 
        priv->rdma.addr = NULL;
-       priv->rdma.addr_phys = (dma_addr_t)NULL;
+       priv->rdma.addr_dma = (dma_addr_t)0;
 }
 
 void meson_rdma_setup(struct meson_drm *priv)
@@ -118,11 +118,11 @@ void meson_rdma_flush(struct meson_drm *priv)
        meson_rdma_stop(priv);
 
        /* Start of Channel 1 register writes buffer */
-       writel(priv->rdma.addr_phys,
+       writel(priv->rdma.addr_dma,
               priv->io_base + _REG(RDMA_AHB_START_ADDR_1));
 
        /* Last byte on Channel 1 register writes buffer */
-       writel(priv->rdma.addr_phys + (priv->rdma.offset * RDMA_DESC_SIZE) - 1,
+       writel(priv->rdma.addr_dma + (priv->rdma.offset * RDMA_DESC_SIZE) - 1,
               priv->io_base + _REG(RDMA_AHB_END_ADDR_1));
 
        /* Trigger Channel 1 on VSYNC event */
index 3624955..f607a04 100644
@@ -54,7 +54,7 @@ static void
 nv04_calc_arb(struct nv_fifo_info *fifo, struct nv_sim_state *arb)
 {
        int pagemiss, cas, width, bpp;
-       int nvclks, mclks, pclks, crtpagemiss;
+       int nvclks, mclks, crtpagemiss;
        int found, mclk_extra, mclk_loop, cbs, m1, p1;
        int mclk_freq, pclk_freq, nvclk_freq;
        int us_m, us_n, us_p, crtc_drain_rate;
@@ -69,7 +69,6 @@ nv04_calc_arb(struct nv_fifo_info *fifo, struct nv_sim_state *arb)
        bpp = arb->bpp;
        cbs = 128;
 
-       pclks = 2;
        nvclks = 10;
        mclks = 13 + cas;
        mclk_extra = 3;
index 03466f0..3a9489e 100644
@@ -644,16 +644,13 @@ static int nv17_tv_create_resources(struct drm_encoder *encoder,
        int i;
 
        if (nouveau_tv_norm) {
-               for (i = 0; i < num_tv_norms; i++) {
-                       if (!strcmp(nv17_tv_norm_names[i], nouveau_tv_norm)) {
-                               tv_enc->tv_norm = i;
-                               break;
-                       }
-               }
-
-               if (i == num_tv_norms)
+               i = match_string(nv17_tv_norm_names, num_tv_norms,
+                                nouveau_tv_norm);
+               if (i < 0)
                        NV_WARN(drm, "Invalid TV norm setting \"%s\"\n",
                                nouveau_tv_norm);
+               else
+                       tv_enc->tv_norm = i;
        }
 
        drm_mode_create_tv_properties(dev, num_tv_norms, nv17_tv_norm_names);
index 5f2de77..224a34c 100644
@@ -75,12 +75,16 @@ base907c_xlut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
        }
 }
 
-static void
-base907c_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
+static bool
+base907c_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw, int size)
 {
-       asyw->xlut.i.mode = 7;
+       if (size != 256 && size != 1024)
+               return false;
+
+       asyw->xlut.i.mode = size == 1024 ? 4 : 7;
        asyw->xlut.i.enable = 2;
        asyw->xlut.i.load = head907d_olut_load;
+       return true;
 }
 
 static inline u32
@@ -160,6 +164,7 @@ base907c = {
        .csc_set = base907c_csc_set,
        .csc_clr = base907c_csc_clr,
        .olut_core = true,
+       .ilut_size = 1024,
        .xlut_set = base907c_xlut_set,
        .xlut_clr = base907c_xlut_clr,
        .image_set = base907c_image_set,
index 63425e2..5fabe2b 100644
@@ -660,7 +660,6 @@ struct nv50_mstm {
        struct nouveau_encoder *outp;
 
        struct drm_dp_mst_topology_mgr mgr;
-       struct nv50_msto *msto[4];
 
        bool modified;
        bool disabled;
@@ -726,7 +725,6 @@ nv50_msto_cleanup(struct nv50_msto *msto)
        drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port);
 
        msto->mstc = NULL;
-       msto->head = NULL;
        msto->disabled = false;
 }
 
@@ -806,11 +804,11 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
                 * topology
                 */
                asyh->or.bpc = min(connector->display_info.bpc, 8U);
-               asyh->dp.pbn = drm_dp_calc_pbn_mode(clock, asyh->or.bpc * 3);
+               asyh->dp.pbn = drm_dp_calc_pbn_mode(clock, asyh->or.bpc * 3, false);
        }
 
        slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr, mstc->port,
-                                             asyh->dp.pbn);
+                                             asyh->dp.pbn, 0);
        if (slots < 0)
                return slots;
 
@@ -872,7 +870,6 @@ nv50_msto_enable(struct drm_encoder *encoder)
        mstm->outp->update(mstm->outp, head->base.index, armh, proto,
                           nv50_dp_bpc_to_depth(armh->or.bpc));
 
-       msto->head = head;
        msto->mstc = mstc;
        mstm->modified = true;
 }
@@ -913,45 +910,40 @@ nv50_msto = {
        .destroy = nv50_msto_destroy,
 };
 
-static int
-nv50_msto_new(struct drm_device *dev, u32 heads, const char *name, int id,
-             struct nv50_msto **pmsto)
+static struct nv50_msto *
+nv50_msto_new(struct drm_device *dev, struct nv50_head *head, int id)
 {
        struct nv50_msto *msto;
        int ret;
 
-       if (!(msto = *pmsto = kzalloc(sizeof(*msto), GFP_KERNEL)))
-               return -ENOMEM;
+       msto = kzalloc(sizeof(*msto), GFP_KERNEL);
+       if (!msto)
+               return ERR_PTR(-ENOMEM);
 
        ret = drm_encoder_init(dev, &msto->encoder, &nv50_msto,
-                              DRM_MODE_ENCODER_DPMST, "%s-mst-%d", name, id);
+                              DRM_MODE_ENCODER_DPMST, "mst-%d", id);
        if (ret) {
-               kfree(*pmsto);
-               *pmsto = NULL;
-               return ret;
+               kfree(msto);
+               return ERR_PTR(ret);
        }
 
        drm_encoder_helper_add(&msto->encoder, &nv50_msto_help);
-       msto->encoder.possible_crtcs = heads;
-       return 0;
+       msto->encoder.possible_crtcs = drm_crtc_mask(&head->base.base);
+       msto->head = head;
+       return msto;
 }
 
 static struct drm_encoder *
 nv50_mstc_atomic_best_encoder(struct drm_connector *connector,
                              struct drm_connector_state *connector_state)
 {
-       struct nv50_head *head = nv50_head(connector_state->crtc);
        struct nv50_mstc *mstc = nv50_mstc(connector);
+       struct drm_crtc *crtc = connector_state->crtc;
 
-       return &mstc->mstm->msto[head->base.index]->encoder;
-}
-
-static struct drm_encoder *
-nv50_mstc_best_encoder(struct drm_connector *connector)
-{
-       struct nv50_mstc *mstc = nv50_mstc(connector);
+       if (!(mstc->mstm->outp->dcb->heads & drm_crtc_mask(crtc)))
+               return NULL;
 
-       return &mstc->mstm->msto[0]->encoder;
+       return &nv50_head(crtc)->msto->encoder;
 }
 
 static enum drm_mode_status
@@ -1038,7 +1030,6 @@ static const struct drm_connector_helper_funcs
 nv50_mstc_help = {
        .get_modes = nv50_mstc_get_modes,
        .mode_valid = nv50_mstc_mode_valid,
-       .best_encoder = nv50_mstc_best_encoder,
        .atomic_best_encoder = nv50_mstc_atomic_best_encoder,
        .atomic_check = nv50_mstc_atomic_check,
        .detect_ctx = nv50_mstc_detect,
@@ -1071,8 +1062,9 @@ nv50_mstc_new(struct nv50_mstm *mstm, struct drm_dp_mst_port *port,
              const char *path, struct nv50_mstc **pmstc)
 {
        struct drm_device *dev = mstm->outp->base.base.dev;
+       struct drm_crtc *crtc;
        struct nv50_mstc *mstc;
-       int ret, i;
+       int ret;
 
        if (!(mstc = *pmstc = kzalloc(sizeof(*mstc), GFP_KERNEL)))
                return -ENOMEM;
@@ -1092,8 +1084,13 @@ nv50_mstc_new(struct nv50_mstm *mstm, struct drm_dp_mst_port *port,
        mstc->connector.funcs->reset(&mstc->connector);
        nouveau_conn_attach_properties(&mstc->connector);
 
-       for (i = 0; i < ARRAY_SIZE(mstm->msto) && mstm->msto[i]; i++)
-               drm_connector_attach_encoder(&mstc->connector, &mstm->msto[i]->encoder);
+       drm_for_each_crtc(crtc, dev) {
+               if (!(mstm->outp->dcb->heads & drm_crtc_mask(crtc)))
+                       continue;
+
+               drm_connector_attach_encoder(&mstc->connector,
+                                            &nv50_head(crtc)->msto->encoder);
+       }
 
        drm_object_attach_property(&mstc->connector.base, dev->mode_config.path_property, 0);
        drm_object_attach_property(&mstc->connector.base, dev->mode_config.tile_property, 0);
@@ -1367,7 +1364,7 @@ nv50_mstm_new(struct nouveau_encoder *outp, struct drm_dp_aux *aux, int aux_max,
        const int max_payloads = hweight8(outp->dcb->heads);
        struct drm_device *dev = outp->base.base.dev;
        struct nv50_mstm *mstm;
-       int ret, i;
+       int ret;
        u8 dpcd;
 
        /* This is a workaround for some monitors not functioning
@@ -1390,13 +1387,6 @@ nv50_mstm_new(struct nouveau_encoder *outp, struct drm_dp_aux *aux, int aux_max,
        if (ret)
                return ret;
 
-       for (i = 0; i < max_payloads; i++) {
-               ret = nv50_msto_new(dev, outp->dcb->heads, outp->base.base.name,
-                                   i, &mstm->msto[i]);
-               if (ret)
-                       return ret;
-       }
-
        return 0;
 }
 
@@ -1569,17 +1559,24 @@ nv50_sor_func = {
        .destroy = nv50_sor_destroy,
 };
 
+static bool nv50_has_mst(struct nouveau_drm *drm)
+{
+       struct nvkm_bios *bios = nvxx_bios(&drm->client.device);
+       u32 data;
+       u8 ver, hdr, cnt, len;
+
+       data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len);
+       return data && ver >= 0x40 && (nvbios_rd08(bios, data + 0x08) & 0x04);
+}
+
 static int
 nv50_sor_create(struct drm_connector *connector, struct dcb_output *dcbe)
 {
        struct nouveau_connector *nv_connector = nouveau_connector(connector);
        struct nouveau_drm *drm = nouveau_drm(connector->dev);
-       struct nvkm_bios *bios = nvxx_bios(&drm->client.device);
        struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
        struct nouveau_encoder *nv_encoder;
        struct drm_encoder *encoder;
-       u8 ver, hdr, cnt, len;
-       u32 data;
        int type, ret;
 
        switch (dcbe->type) {
@@ -1624,10 +1621,9 @@ nv50_sor_create(struct drm_connector *connector, struct dcb_output *dcbe)
                }
 
                if (nv_connector->type != DCB_CONNECTOR_eDP &&
-                   (data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
-                   ver >= 0x40 && (nvbios_rd08(bios, data + 0x08) & 0x04)) {
-                       ret = nv50_mstm_new(nv_encoder, &nv_connector->aux, 16,
-                                           nv_connector->base.base.id,
+                   nv50_has_mst(drm)) {
+                       ret = nv50_mstm_new(nv_encoder, &nv_connector->aux,
+                                           16, nv_connector->base.base.id,
                                            &nv_encoder->dp.mstm);
                        if (ret)
                                return ret;
@@ -2323,6 +2319,7 @@ nv50_display_create(struct drm_device *dev)
        struct nv50_disp *disp;
        struct dcb_output *dcbe;
        int crtcs, ret, i;
+       bool has_mst = nv50_has_mst(drm);
 
        disp = kzalloc(sizeof(*disp), GFP_KERNEL);
        if (!disp)
@@ -2371,11 +2368,37 @@ nv50_display_create(struct drm_device *dev)
                crtcs = 0x3;
 
        for (i = 0; i < fls(crtcs); i++) {
+               struct nv50_head *head;
+
                if (!(crtcs & (1 << i)))
                        continue;
-               ret = nv50_head_create(dev, i);
-               if (ret)
+
+               head = nv50_head_create(dev, i);
+               if (IS_ERR(head)) {
+                       ret = PTR_ERR(head);
                        goto out;
+               }
+
+               if (has_mst) {
+                       head->msto = nv50_msto_new(dev, head, i);
+                       if (IS_ERR(head->msto)) {
+                               ret = PTR_ERR(head->msto);
+                               head->msto = NULL;
+                               goto out;
+                       }
+
+                       /*
+                        * FIXME: This is a hack to workaround the following
+                        * issues:
+                        *
+                        * https://gitlab.gnome.org/GNOME/mutter/issues/759
+                        * https://gitlab.freedesktop.org/xorg/xserver/merge_requests/277
+                        *
+                        * Once these issues are closed, this should be
+                        * removed
+                        */
+                       head->msto->encoder.possible_crtcs = crtcs;
+               }
        }
 
        /* create encoder/connector objects based on VBIOS DCB table */
index c0a7953..d54fe00 100644
@@ -4,6 +4,8 @@
 
 #include "nouveau_display.h"
 
+struct nv50_msto;
+
 struct nv50_disp {
        struct nvif_disp *disp;
        struct nv50_core *core;
index c9692df..d9d6460 100644
@@ -213,6 +213,7 @@ nv50_head_atomic_check_lut(struct nv50_head *head,
 {
        struct nv50_disp *disp = nv50_disp(head->base.base.dev);
        struct drm_property_blob *olut = asyh->state.gamma_lut;
+       int size;
 
        /* Determine whether core output LUT should be enabled. */
        if (olut) {
@@ -229,14 +230,23 @@ nv50_head_atomic_check_lut(struct nv50_head *head,
                }
        }
 
-       if (!olut && !head->func->olut_identity) {
-               asyh->olut.handle = 0;
-               return 0;
+       if (!olut) {
+               if (!head->func->olut_identity) {
+                       asyh->olut.handle = 0;
+                       return 0;
+               }
+               size = 0;
+       } else {
+               size = drm_color_lut_size(olut);
        }
 
+       if (!head->func->olut(head, asyh, size)) {
+               DRM_DEBUG_KMS("Invalid olut\n");
+               return -EINVAL;
+       }
        asyh->olut.handle = disp->core->chan.vram.handle;
        asyh->olut.buffer = !asyh->olut.buffer;
-       head->func->olut(head, asyh);
+
        return 0;
 }
 
@@ -473,7 +483,7 @@ nv50_head_func = {
        .atomic_destroy_state = nv50_head_atomic_destroy_state,
 };
 
-int
+struct nv50_head *
 nv50_head_create(struct drm_device *dev, int index)
 {
        struct nouveau_drm *drm = nouveau_drm(dev);
@@ -485,7 +495,7 @@ nv50_head_create(struct drm_device *dev, int index)
 
        head = kzalloc(sizeof(*head), GFP_KERNEL);
        if (!head)
-               return -ENOMEM;
+               return ERR_PTR(-ENOMEM);
 
        head->func = disp->core->func->head;
        head->base.index = index;
@@ -503,27 +513,26 @@ nv50_head_create(struct drm_device *dev, int index)
                ret = nv50_curs_new(drm, head->base.index, &curs);
        if (ret) {
                kfree(head);
-               return ret;
+               return ERR_PTR(ret);
        }
 
        crtc = &head->base.base;
        drm_crtc_init_with_planes(dev, crtc, &base->plane, &curs->plane,
                                  &nv50_head_func, "head-%d", head->base.index);
        drm_crtc_helper_add(crtc, &nv50_head_help);
+       /* Keep the legacy gamma size at 256 to avoid compatibility issues */
        drm_mode_crtc_set_gamma_size(crtc, 256);
-       if (disp->disp->object.oclass >= GF110_DISP)
-               drm_crtc_enable_color_mgmt(crtc, 256, true, 256);
-       else
-               drm_crtc_enable_color_mgmt(crtc, 0, false, 256);
+       drm_crtc_enable_color_mgmt(crtc, base->func->ilut_size,
+                                  disp->disp->object.oclass >= GF110_DISP,
+                                  head->func->olut_size);
 
        if (head->func->olut_set) {
                ret = nv50_lut_init(disp, &drm->client.mmu, &head->olut);
-               if (ret)
-                       goto out;
+               if (ret) {
+                       nv50_head_destroy(crtc);
+                       return ERR_PTR(ret);
+               }
        }
 
-out:
-       if (ret)
-               nv50_head_destroy(crtc);
-       return ret;
+       return head;
 }
index d1c002f..c32b27c 100644
@@ -11,17 +11,19 @@ struct nv50_head {
        const struct nv50_head_func *func;
        struct nouveau_crtc base;
        struct nv50_lut olut;
+       struct nv50_msto *msto;
 };
 
-int nv50_head_create(struct drm_device *, int index);
+struct nv50_head *nv50_head_create(struct drm_device *, int index);
 void nv50_head_flush_set(struct nv50_head *, struct nv50_head_atom *);
 void nv50_head_flush_clr(struct nv50_head *, struct nv50_head_atom *, bool y);
 
 struct nv50_head_func {
        void (*view)(struct nv50_head *, struct nv50_head_atom *);
        void (*mode)(struct nv50_head *, struct nv50_head_atom *);
-       void (*olut)(struct nv50_head *, struct nv50_head_atom *);
+       bool (*olut)(struct nv50_head *, struct nv50_head_atom *, int);
        bool olut_identity;
+       int  olut_size;
        void (*olut_set)(struct nv50_head *, struct nv50_head_atom *);
        void (*olut_clr)(struct nv50_head *);
        void (*core_calc)(struct nv50_head *, struct nv50_head_atom *);
@@ -43,7 +45,7 @@ struct nv50_head_func {
 extern const struct nv50_head_func head507d;
 void head507d_view(struct nv50_head *, struct nv50_head_atom *);
 void head507d_mode(struct nv50_head *, struct nv50_head_atom *);
-void head507d_olut(struct nv50_head *, struct nv50_head_atom *);
+bool head507d_olut(struct nv50_head *, struct nv50_head_atom *, int);
 void head507d_core_calc(struct nv50_head *, struct nv50_head_atom *);
 void head507d_core_clr(struct nv50_head *);
 int head507d_curs_layout(struct nv50_head *, struct nv50_wndw_atom *,
@@ -60,7 +62,7 @@ extern const struct nv50_head_func head827d;
 extern const struct nv50_head_func head907d;
 void head907d_view(struct nv50_head *, struct nv50_head_atom *);
 void head907d_mode(struct nv50_head *, struct nv50_head_atom *);
-void head907d_olut(struct nv50_head *, struct nv50_head_atom *);
+bool head907d_olut(struct nv50_head *, struct nv50_head_atom *, int);
 void head907d_olut_set(struct nv50_head *, struct nv50_head_atom *);
 void head907d_olut_clr(struct nv50_head *);
 void head907d_core_set(struct nv50_head *, struct nv50_head_atom *);
index 7561be5..66ccf36 100644
@@ -271,15 +271,19 @@ head507d_olut_load(struct drm_color_lut *in, int size, void __iomem *mem)
        writew(readw(mem - 4), mem + 4);
 }
 
-void
-head507d_olut(struct nv50_head *head, struct nv50_head_atom *asyh)
+bool
+head507d_olut(struct nv50_head *head, struct nv50_head_atom *asyh, int size)
 {
+       if (size != 256)
+               return false;
+
        if (asyh->base.cpp == 1)
                asyh->olut.mode = 0;
        else
                asyh->olut.mode = 1;
 
        asyh->olut.load = head507d_olut_load;
+       return true;
 }
 
 void
@@ -328,6 +332,7 @@ head507d = {
        .view = head507d_view,
        .mode = head507d_mode,
        .olut = head507d_olut,
+       .olut_size = 256,
        .olut_set = head507d_olut_set,
        .olut_clr = head507d_olut_clr,
        .core_calc = head507d_core_calc,
index af5e7bd..1187711 100644
@@ -108,6 +108,7 @@ head827d = {
        .view = head507d_view,
        .mode = head507d_mode,
        .olut = head507d_olut,
+       .olut_size = 256,
        .olut_set = head827d_olut_set,
        .olut_clr = head827d_olut_clr,
        .core_calc = head507d_core_calc,
index c2d09dd..3002ec2 100644
@@ -230,11 +230,15 @@ head907d_olut_load(struct drm_color_lut *in, int size, void __iomem *mem)
        writew(readw(mem - 4), mem + 4);
 }
 
-void
-head907d_olut(struct nv50_head *head, struct nv50_head_atom *asyh)
+bool
+head907d_olut(struct nv50_head *head, struct nv50_head_atom *asyh, int size)
 {
-       asyh->olut.mode = 7;
+       if (size != 256 && size != 1024)
+               return false;
+
+       asyh->olut.mode = size == 1024 ? 4 : 7;
        asyh->olut.load = head907d_olut_load;
+       return true;
 }
 
 void
@@ -285,6 +289,7 @@ head907d = {
        .view = head907d_view,
        .mode = head907d_mode,
        .olut = head907d_olut,
+       .olut_size = 1024,
        .olut_set = head907d_olut_set,
        .olut_clr = head907d_olut_clr,
        .core_calc = head507d_core_calc,
index 303df84..76958ce 100644
@@ -83,6 +83,7 @@ head917d = {
        .view = head907d_view,
        .mode = head907d_mode,
        .olut = head907d_olut,
+       .olut_size = 1024,
        .olut_set = head907d_olut_set,
        .olut_clr = head907d_olut_clr,
        .core_calc = head507d_core_calc,
index ef6a99d..00011ce 100644
@@ -148,14 +148,18 @@ headc37d_olut_set(struct nv50_head *head, struct nv50_head_atom *asyh)
        }
 }
 
-static void
-headc37d_olut(struct nv50_head *head, struct nv50_head_atom *asyh)
+static bool
+headc37d_olut(struct nv50_head *head, struct nv50_head_atom *asyh, int size)
 {
+       if (size != 256 && size != 1024)
+               return false;
+
        asyh->olut.mode = 2;
-       asyh->olut.size = 0;
+       asyh->olut.size = size == 1024 ? 2 : 0;
        asyh->olut.range = 0;
        asyh->olut.output_mode = 1;
        asyh->olut.load = head907d_olut_load;
+       return true;
 }
 
 static void
@@ -201,6 +205,7 @@ headc37d = {
        .view = headc37d_view,
        .mode = headc37d_mode,
        .olut = headc37d_olut,
+       .olut_size = 1024,
        .olut_set = headc37d_olut_set,
        .olut_clr = headc37d_olut_clr,
        .curs_layout = head917d_curs_layout,
index 32a7f9e..938d910 100644
@@ -151,17 +151,20 @@ headc57d_olut_load(struct drm_color_lut *in, int size, void __iomem *mem)
        writew(readw(mem - 4), mem + 4);
 }
 
-void
-headc57d_olut(struct nv50_head *head, struct nv50_head_atom *asyh)
+bool
+headc57d_olut(struct nv50_head *head, struct nv50_head_atom *asyh, int size)
 {
+       if (size != 0 && size != 256 && size != 1024)
+               return false;
+
        asyh->olut.mode = 2; /* DIRECT10 */
        asyh->olut.size = 4 /* VSS header. */ + 1024 + 1 /* Entries. */;
        asyh->olut.output_mode = 1; /* INTERPOLATE_ENABLE. */
-       if (asyh->state.gamma_lut &&
-           asyh->state.gamma_lut->length / sizeof(struct drm_color_lut) == 256)
+       if (size == 256)
                asyh->olut.load = headc57d_olut_load_8;
        else
                asyh->olut.load = headc57d_olut_load;
+       return true;
 }
 
 static void
@@ -194,6 +197,7 @@ headc57d = {
        .mode = headc57d_mode,
        .olut = headc57d_olut,
        .olut_identity = true,
+       .olut_size = 1024,
        .olut_set = headc57d_olut_set,
        .olut_clr = headc57d_olut_clr,
        .curs_layout = head917d_curs_layout,
index 994def4..4e95ca5 100644
@@ -49,7 +49,7 @@ nv50_lut_load(struct nv50_lut *lut, int buffer, struct drm_property_blob *blob,
                        kvfree(in);
                }
        } else {
-               load(in, blob->length / sizeof(*in), mem);
+               load(in, drm_color_lut_size(blob), mem);
        }
 
        return addr;
index 5193b62..8903152 100644
@@ -318,7 +318,7 @@ nv50_wndw_atomic_check_acquire(struct nv50_wndw *wndw, bool modeset,
        return wndw->func->acquire(wndw, asyw, asyh);
 }
 
-static void
+static int
 nv50_wndw_atomic_check_lut(struct nv50_wndw *wndw,
                           struct nv50_wndw_atom *armw,
                           struct nv50_wndw_atom *asyw,
@@ -340,7 +340,7 @@ nv50_wndw_atomic_check_lut(struct nv50_wndw *wndw,
                 */
                if (!(ilut = asyh->state.gamma_lut)) {
                        asyw->visible = false;
-                       return;
+                       return 0;
                }
 
                if (wndw->func->ilut)
@@ -359,7 +359,10 @@ nv50_wndw_atomic_check_lut(struct nv50_wndw *wndw,
        /* Recalculate LUT state. */
        memset(&asyw->xlut, 0x00, sizeof(asyw->xlut));
        if ((asyw->ilut = wndw->func->ilut ? ilut : NULL)) {
-               wndw->func->ilut(wndw, asyw);
+               if (!wndw->func->ilut(wndw, asyw, drm_color_lut_size(ilut))) {
+                       DRM_DEBUG_KMS("Invalid ilut\n");
+                       return -EINVAL;
+               }
                asyw->xlut.handle = wndw->wndw.vram.handle;
                asyw->xlut.i.buffer = !asyw->xlut.i.buffer;
                asyw->set.xlut = true;
@@ -384,6 +387,7 @@ nv50_wndw_atomic_check_lut(struct nv50_wndw *wndw,
 
        /* Can't do an immediate flip while changing the LUT. */
        asyh->state.async_flip = false;
+       return 0;
 }
 
 static int
@@ -424,8 +428,11 @@ nv50_wndw_atomic_check(struct drm_plane *plane, struct drm_plane_state *state)
            (!armw->visible ||
             asyh->state.color_mgmt_changed ||
             asyw->state.fb->format->format !=
-            armw->state.fb->format->format))
-               nv50_wndw_atomic_check_lut(wndw, armw, asyw, asyh);
+            armw->state.fb->format->format)) {
+               ret = nv50_wndw_atomic_check_lut(wndw, armw, asyw, asyh);
+               if (ret)
+                       return ret;
+       }
 
        /* Calculate new window state. */
        if (asyw->visible) {
index c63bd3b..caf3974 100644
@@ -64,12 +64,13 @@ struct nv50_wndw_func {
        void (*ntfy_clr)(struct nv50_wndw *);
        int (*ntfy_wait_begun)(struct nouveau_bo *, u32 offset,
                               struct nvif_device *);
-       void (*ilut)(struct nv50_wndw *, struct nv50_wndw_atom *);
+       bool (*ilut)(struct nv50_wndw *, struct nv50_wndw_atom *, int);
        void (*csc)(struct nv50_wndw *, struct nv50_wndw_atom *,
                    const struct drm_color_ctm *);
        void (*csc_set)(struct nv50_wndw *, struct nv50_wndw_atom *);
        void (*csc_clr)(struct nv50_wndw *);
        bool ilut_identity;
+       int  ilut_size;
        bool olut_core;
        void (*xlut_set)(struct nv50_wndw *, struct nv50_wndw_atom *);
        void (*xlut_clr)(struct nv50_wndw *);
index 0f94021..b92dc34 100644
@@ -71,14 +71,18 @@ wndwc37e_ilut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
        }
 }
 
-static void
-wndwc37e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
+static bool
+wndwc37e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw, int size)
 {
+       if (size != 256 && size != 1024)
+               return false;
+
        asyw->xlut.i.mode = 2;
-       asyw->xlut.i.size = 0;
+       asyw->xlut.i.size = size == 1024 ? 2 : 0;
        asyw->xlut.i.range = 0;
        asyw->xlut.i.output_mode = 1;
        asyw->xlut.i.load = head907d_olut_load;
+       return true;
 }
 
 void
@@ -261,6 +265,7 @@ wndwc37e = {
        .ntfy_reset = corec37d_ntfy_init,
        .ntfy_wait_begun = base507c_ntfy_wait_begun,
        .ilut = wndwc37e_ilut,
+       .ilut_size = 1024,
        .xlut_set = wndwc37e_ilut_set,
        .xlut_clr = wndwc37e_ilut_clr,
        .csc = base907c_csc,
index a311c79..35c9c52 100644
@@ -156,19 +156,21 @@ wndwc57e_ilut_load(struct drm_color_lut *in, int size, void __iomem *mem)
        writew(readw(mem - 4), mem + 4);
 }
 
-static void
-wndwc57e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
+static bool
+wndwc57e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw, int size)
 {
-       u16 size = asyw->ilut->length / sizeof(struct drm_color_lut);
+       if (size = size ? size : 1024, size != 256 && size != 1024)
+               return false;
+
        if (size == 256) {
                asyw->xlut.i.mode = 1; /* DIRECT8. */
        } else {
                asyw->xlut.i.mode = 2; /* DIRECT10. */
-               size = 1024;
        }
        asyw->xlut.i.size = 4 /* VSS header. */ + size + 1 /* Entries. */;
        asyw->xlut.i.output_mode = 0; /* INTERPOLATE_DISABLE. */
        asyw->xlut.i.load = wndwc57e_ilut_load;
+       return true;
 }
 
 static const struct nv50_wndw_func
@@ -183,6 +185,7 @@ wndwc57e = {
        .ntfy_wait_begun = base507c_ntfy_wait_begun,
        .ilut = wndwc57e_ilut,
        .ilut_identity = true,
+       .ilut_size = 1024,
        .xlut_set = wndwc57e_ilut_set,
        .xlut_clr = wndwc57e_ilut_clr,
        .csc = base907c_csc,
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/acr.h b/drivers/gpu/drm/nouveau/include/nvfw/acr.h
new file mode 100644
index 0000000..e65d6a8
--- /dev/null
@@ -0,0 +1,152 @@
+#ifndef __NVFW_ACR_H__
+#define __NVFW_ACR_H__
+
+struct wpr_header {
+#define WPR_HEADER_V0_FALCON_ID_INVALID                              0xffffffff
+       u32 falcon_id;
+       u32 lsb_offset;
+       u32 bootstrap_owner;
+       u32 lazy_bootstrap;
+#define WPR_HEADER_V0_STATUS_NONE                                             0
+#define WPR_HEADER_V0_STATUS_COPY                                             1
+#define WPR_HEADER_V0_STATUS_VALIDATION_CODE_FAILED                           2
+#define WPR_HEADER_V0_STATUS_VALIDATION_DATA_FAILED                           3
+#define WPR_HEADER_V0_STATUS_VALIDATION_DONE                                  4
+#define WPR_HEADER_V0_STATUS_VALIDATION_SKIPPED                               5
+#define WPR_HEADER_V0_STATUS_BOOTSTRAP_READY                                  6
+       u32 status;
+};
+
+void wpr_header_dump(struct nvkm_subdev *, const struct wpr_header *);
+
+struct wpr_header_v1 {
+#define WPR_HEADER_V1_FALCON_ID_INVALID                              0xffffffff
+       u32 falcon_id;
+       u32 lsb_offset;
+       u32 bootstrap_owner;
+       u32 lazy_bootstrap;
+       u32 bin_version;
+#define WPR_HEADER_V1_STATUS_NONE                                             0
+#define WPR_HEADER_V1_STATUS_COPY                                             1
+#define WPR_HEADER_V1_STATUS_VALIDATION_CODE_FAILED                           2
+#define WPR_HEADER_V1_STATUS_VALIDATION_DATA_FAILED                           3
+#define WPR_HEADER_V1_STATUS_VALIDATION_DONE                                  4
+#define WPR_HEADER_V1_STATUS_VALIDATION_SKIPPED                               5
+#define WPR_HEADER_V1_STATUS_BOOTSTRAP_READY                                  6
+#define WPR_HEADER_V1_STATUS_REVOCATION_CHECK_FAILED                          7
+       u32 status;
+};
+
+void wpr_header_v1_dump(struct nvkm_subdev *, const struct wpr_header_v1 *);
+
+struct lsf_signature {
+       u8 prd_keys[2][16];
+       u8 dbg_keys[2][16];
+       u32 b_prd_present;
+       u32 b_dbg_present;
+       u32 falcon_id;
+};
+
+struct lsf_signature_v1 {
+       u8 prd_keys[2][16];
+       u8 dbg_keys[2][16];
+       u32 b_prd_present;
+       u32 b_dbg_present;
+       u32 falcon_id;
+       u32 supports_versioning;
+       u32 version;
+       u32 depmap_count;
+       u8 depmap[11/*LSF_LSB_DEPMAP_SIZE*/ * 2 * 4];
+       u8 kdf[16];
+};
+
+struct lsb_header_tail {
+       u32 ucode_off;
+       u32 ucode_size;
+       u32 data_size;
+       u32 bl_code_size;
+       u32 bl_imem_off;
+       u32 bl_data_off;
+       u32 bl_data_size;
+       u32 app_code_off;
+       u32 app_code_size;
+       u32 app_data_off;
+       u32 app_data_size;
+       u32 flags;
+};
+
+struct lsb_header {
+       struct lsf_signature signature;
+       struct lsb_header_tail tail;
+};
+
+void lsb_header_dump(struct nvkm_subdev *, struct lsb_header *);
+
+struct lsb_header_v1 {
+       struct lsf_signature_v1 signature;
+       struct lsb_header_tail tail;
+};
+
+void lsb_header_v1_dump(struct nvkm_subdev *, struct lsb_header_v1 *);
+
+struct flcn_acr_desc {
+       union {
+               u8 reserved_dmem[0x200];
+               u32 signatures[4];
+       } ucode_reserved_space;
+       u32 wpr_region_id;
+       u32 wpr_offset;
+       u32 mmu_mem_range;
+       struct {
+               u32 no_regions;
+               struct {
+                       u32 start_addr;
+                       u32 end_addr;
+                       u32 region_id;
+                       u32 read_mask;
+                       u32 write_mask;
+                       u32 client_mask;
+               } region_props[2];
+       } regions;
+       u32 ucode_blob_size;
+       u64 ucode_blob_base __aligned(8);
+       struct {
+               u32 vpr_enabled;
+               u32 vpr_start;
+               u32 vpr_end;
+               u32 hdcp_policies;
+       } vpr_desc;
+};
+
+void flcn_acr_desc_dump(struct nvkm_subdev *, struct flcn_acr_desc *);
+
+struct flcn_acr_desc_v1 {
+       u8 reserved_dmem[0x200];
+       u32 signatures[4];
+       u32 wpr_region_id;
+       u32 wpr_offset;
+       u32 mmu_memory_range;
+       struct {
+               u32 no_regions;
+               struct {
+                       u32 start_addr;
+                       u32 end_addr;
+                       u32 region_id;
+                       u32 read_mask;
+                       u32 write_mask;
+                       u32 client_mask;
+                       u32 shadow_mem_start_addr;
+               } region_props[2];
+       } regions;
+       u32 ucode_blob_size;
+       u64 ucode_blob_base __aligned(8);
+       struct {
+               u32 vpr_enabled;
+               u32 vpr_start;
+               u32 vpr_end;
+               u32 hdcp_policies;
+       } vpr_desc;
+};
+
+void flcn_acr_desc_v1_dump(struct nvkm_subdev *, struct flcn_acr_desc_v1 *);
+#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/flcn.h b/drivers/gpu/drm/nouveau/include/nvfw/flcn.h
new file mode 100644 (file)
index 0000000..e090f34
--- /dev/null
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVFW_FLCN_H__
+#define __NVFW_FLCN_H__
+#include <core/os.h>
+struct nvkm_subdev;
+
+struct loader_config {
+       u32 dma_idx;
+       u32 code_dma_base;
+       u32 code_size_total;
+       u32 code_size_to_load;
+       u32 code_entry_point;
+       u32 data_dma_base;
+       u32 data_size;
+       u32 overlay_dma_base;
+       u32 argc;
+       u32 argv;
+       u32 code_dma_base1;
+       u32 data_dma_base1;
+       u32 overlay_dma_base1;
+};
+
+void
+loader_config_dump(struct nvkm_subdev *, const struct loader_config *);
+
+struct loader_config_v1 {
+       u32 reserved;
+       u32 dma_idx;
+       u64 code_dma_base;
+       u32 code_size_total;
+       u32 code_size_to_load;
+       u32 code_entry_point;
+       u64 data_dma_base;
+       u32 data_size;
+       u64 overlay_dma_base;
+       u32 argc;
+       u32 argv;
+} __packed;
+
+void
+loader_config_v1_dump(struct nvkm_subdev *, const struct loader_config_v1 *);
+
+struct flcn_bl_dmem_desc {
+       u32 reserved[4];
+       u32 signature[4];
+       u32 ctx_dma;
+       u32 code_dma_base;
+       u32 non_sec_code_off;
+       u32 non_sec_code_size;
+       u32 sec_code_off;
+       u32 sec_code_size;
+       u32 code_entry_point;
+       u32 data_dma_base;
+       u32 data_size;
+       u32 code_dma_base1;
+       u32 data_dma_base1;
+};
+
+void
+flcn_bl_dmem_desc_dump(struct nvkm_subdev *, const struct flcn_bl_dmem_desc *);
+
+struct flcn_bl_dmem_desc_v1 {
+       u32 reserved[4];
+       u32 signature[4];
+       u32 ctx_dma;
+       u64 code_dma_base;
+       u32 non_sec_code_off;
+       u32 non_sec_code_size;
+       u32 sec_code_off;
+       u32 sec_code_size;
+       u32 code_entry_point;
+       u64 data_dma_base;
+       u32 data_size;
+} __packed;
+
+void flcn_bl_dmem_desc_v1_dump(struct nvkm_subdev *,
+                              const struct flcn_bl_dmem_desc_v1 *);
+
+struct flcn_bl_dmem_desc_v2 {
+       u32 reserved[4];
+       u32 signature[4];
+       u32 ctx_dma;
+       u64 code_dma_base;
+       u32 non_sec_code_off;
+       u32 non_sec_code_size;
+       u32 sec_code_off;
+       u32 sec_code_size;
+       u32 code_entry_point;
+       u64 data_dma_base;
+       u32 data_size;
+       u32 argc;
+       u32 argv;
+} __packed;
+
+void flcn_bl_dmem_desc_v2_dump(struct nvkm_subdev *,
+                              const struct flcn_bl_dmem_desc_v2 *);
+#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/fw.h b/drivers/gpu/drm/nouveau/include/nvfw/fw.h
new file mode 100644 (file)
index 0000000..a7cf118
--- /dev/null
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVFW_FW_H__
+#define __NVFW_FW_H__
+#include <core/os.h>
+struct nvkm_subdev;
+
+struct nvfw_bin_hdr {
+       u32 bin_magic;
+       u32 bin_ver;
+       u32 bin_size;
+       u32 header_offset;
+       u32 data_offset;
+       u32 data_size;
+};
+
+const struct nvfw_bin_hdr *nvfw_bin_hdr(struct nvkm_subdev *, const void *);
+
+struct nvfw_bl_desc {
+       u32 start_tag;
+       u32 dmem_load_off;
+       u32 code_off;
+       u32 code_size;
+       u32 data_off;
+       u32 data_size;
+};
+
+const struct nvfw_bl_desc *nvfw_bl_desc(struct nvkm_subdev *, const void *);
+#endif
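The `nvfw_bin_hdr` added above is a plain six-word descriptor at the front of a firmware image. As a rough sketch of how such a header gets pulled out of a raw blob (this is not the driver's `nvfw_bin_hdr()` helper — the struct, function name, and sample bytes here are illustrative only, and a real loader would also byte-swap and bounds-check):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrors the field layout of struct nvfw_bin_hdr above; all-u32, no padding. */
struct bin_hdr {
	uint32_t bin_magic;
	uint32_t bin_ver;
	uint32_t bin_size;
	uint32_t header_offset;
	uint32_t data_offset;
	uint32_t data_size;
};

/* Copy the header out of the start of a firmware image.  A production
 * parser would validate bin_magic and check offsets against the image
 * size before trusting them. */
static struct bin_hdr parse_bin_hdr(const uint8_t *img)
{
	struct bin_hdr h;
	memcpy(&h, img, sizeof(h));
	return h;
}
```

The `memcpy` round-trip keeps the sketch endian-neutral for a host-built image; real firmware files are little-endian on the wire.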
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/hs.h b/drivers/gpu/drm/nouveau/include/nvfw/hs.h
new file mode 100644 (file)
index 0000000..64d0d32
--- /dev/null
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVFW_HS_H__
+#define __NVFW_HS_H__
+#include <core/os.h>
+struct nvkm_subdev;
+
+struct nvfw_hs_header {
+       u32 sig_dbg_offset;
+       u32 sig_dbg_size;
+       u32 sig_prod_offset;
+       u32 sig_prod_size;
+       u32 patch_loc;
+       u32 patch_sig;
+       u32 hdr_offset;
+       u32 hdr_size;
+};
+
+const struct nvfw_hs_header *nvfw_hs_header(struct nvkm_subdev *, const void *);
+
+struct nvfw_hs_load_header {
+       u32 non_sec_code_off;
+       u32 non_sec_code_size;
+       u32 data_dma_base;
+       u32 data_size;
+       u32 num_apps;
+       u32 apps[0];
+};
+
+const struct nvfw_hs_load_header *
+nvfw_hs_load_header(struct nvkm_subdev *, const void *);
+#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/ls.h b/drivers/gpu/drm/nouveau/include/nvfw/ls.h
new file mode 100644 (file)
index 0000000..f63692a
--- /dev/null
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVFW_LS_H__
+#define __NVFW_LS_H__
+#include <core/os.h>
+struct nvkm_subdev;
+
+struct nvfw_ls_desc_head {
+       u32 descriptor_size;
+       u32 image_size;
+       u32 tools_version;
+       u32 app_version;
+       char date[64];
+       u32 bootloader_start_offset;
+       u32 bootloader_size;
+       u32 bootloader_imem_offset;
+       u32 bootloader_entry_point;
+       u32 app_start_offset;
+       u32 app_size;
+       u32 app_imem_offset;
+       u32 app_imem_entry;
+       u32 app_dmem_offset;
+       u32 app_resident_code_offset;
+       u32 app_resident_code_size;
+       u32 app_resident_data_offset;
+       u32 app_resident_data_size;
+};
+
+struct nvfw_ls_desc {
+       struct nvfw_ls_desc_head head;
+       u32 nb_overlays;
+       struct {
+               u32 start;
+               u32 size;
+       } load_ovl[64];
+       u32 compressed;
+};
+
+const struct nvfw_ls_desc *nvfw_ls_desc(struct nvkm_subdev *, const void *);
+
+struct nvfw_ls_desc_v1 {
+       struct nvfw_ls_desc_head head;
+       u32 nb_imem_overlays;
+       u32 nb_dmem_overlays;
+       struct {
+               u32 start;
+               u32 size;
+       } load_ovl[64];
+       u32 compressed;
+};
+
+const struct nvfw_ls_desc_v1 *
+nvfw_ls_desc_v1(struct nvkm_subdev *, const void *);
+#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/pmu.h b/drivers/gpu/drm/nouveau/include/nvfw/pmu.h
new file mode 100644 (file)
index 0000000..452ed7d
--- /dev/null
@@ -0,0 +1,98 @@
+#ifndef __NVFW_PMU_H__
+#define __NVFW_PMU_H__
+
+struct nv_pmu_args {
+       u32 reserved;
+       u32 freq_hz;
+       u32 trace_size;
+       u32 trace_dma_base;
+       u16 trace_dma_base1;
+       u8 trace_dma_offset;
+       u32 trace_dma_idx;
+       bool secure_mode;
+       bool raise_priv_sec;
+       struct {
+               u32 dma_base;
+               u16 dma_base1;
+               u8 dma_offset;
+               u16 fb_size;
+               u8 dma_idx;
+       } gc6_ctx;
+       u8 pad;
+};
+
+#define NV_PMU_UNIT_INIT                                                   0x07
+#define NV_PMU_UNIT_ACR                                                    0x0a
+
+struct nv_pmu_init_msg {
+       struct nv_falcon_msg hdr;
+#define NV_PMU_INIT_MSG_INIT                                               0x00
+       u8 msg_type;
+
+       u8 pad;
+       u16 os_debug_entry_point;
+
+       struct {
+               u16 size;
+               u16 offset;
+               u8 index;
+               u8 pad;
+       } queue_info[5];
+
+       u16 sw_managed_area_offset;
+       u16 sw_managed_area_size;
+};
+
+struct nv_pmu_acr_cmd {
+       struct nv_falcon_cmd hdr;
+#define NV_PMU_ACR_CMD_INIT_WPR_REGION                                     0x00
+#define NV_PMU_ACR_CMD_BOOTSTRAP_FALCON                                    0x01
+#define NV_PMU_ACR_CMD_BOOTSTRAP_MULTIPLE_FALCONS                          0x03
+       u8 cmd_type;
+};
+
+struct nv_pmu_acr_msg {
+       struct nv_falcon_cmd hdr;
+       u8 msg_type;
+};
+
+struct nv_pmu_acr_init_wpr_region_cmd {
+       struct nv_pmu_acr_cmd cmd;
+       u32 region_id;
+       u32 wpr_offset;
+};
+
+struct nv_pmu_acr_init_wpr_region_msg {
+       struct nv_pmu_acr_msg msg;
+       u32 error_code;
+};
+
+struct nv_pmu_acr_bootstrap_falcon_cmd {
+       struct nv_pmu_acr_cmd cmd;
+#define NV_PMU_ACR_BOOTSTRAP_FALCON_FLAGS_RESET_YES                  0x00000000
+#define NV_PMU_ACR_BOOTSTRAP_FALCON_FLAGS_RESET_NO                   0x00000001
+       u32 flags;
+       u32 falcon_id;
+};
+
+struct nv_pmu_acr_bootstrap_falcon_msg {
+       struct nv_pmu_acr_msg msg;
+       u32 falcon_id;
+};
+
+struct nv_pmu_acr_bootstrap_multiple_falcons_cmd {
+       struct nv_pmu_acr_cmd cmd;
+#define NV_PMU_ACR_BOOTSTRAP_MULTIPLE_FALCONS_FLAGS_RESET_YES        0x00000000
+#define NV_PMU_ACR_BOOTSTRAP_MULTIPLE_FALCONS_FLAGS_RESET_NO         0x00000001
+       u32 flags;
+       u32 falcon_mask;
+       u32 use_va_mask;
+       u32 wpr_lo;
+       u32 wpr_hi;
+};
+
+struct nv_pmu_acr_bootstrap_multiple_falcons_msg {
+       struct nv_pmu_acr_msg msg;
+       u32 falcon_mask;
+};
+#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvfw/sec2.h b/drivers/gpu/drm/nouveau/include/nvfw/sec2.h
new file mode 100644 (file)
index 0000000..0349655
--- /dev/null
@@ -0,0 +1,60 @@
+#ifndef __NVFW_SEC2_H__
+#define __NVFW_SEC2_H__
+
+struct nv_sec2_args {
+       u32 freq_hz;
+       u32 falc_trace_size;
+       u32 falc_trace_dma_base;
+       u32 falc_trace_dma_idx;
+       bool secure_mode;
+};
+
+#define NV_SEC2_UNIT_INIT                                                  0x01
+#define NV_SEC2_UNIT_ACR                                                   0x08
+
+struct nv_sec2_init_msg {
+       struct nv_falcon_msg hdr;
+#define NV_SEC2_INIT_MSG_INIT                                              0x00
+       u8 msg_type;
+
+       u8 num_queues;
+       u16 os_debug_entry_point;
+
+       struct {
+               u32 offset;
+               u16 size;
+               u8 index;
+#define NV_SEC2_INIT_MSG_QUEUE_ID_CMDQ                                     0x00
+#define NV_SEC2_INIT_MSG_QUEUE_ID_MSGQ                                     0x01
+               u8 id;
+       } queue_info[2];
+
+       u32 sw_managed_area_offset;
+       u16 sw_managed_area_size;
+};
+
+struct nv_sec2_acr_cmd {
+       struct nv_falcon_cmd hdr;
+#define NV_SEC2_ACR_CMD_BOOTSTRAP_FALCON                                   0x00
+       u8 cmd_type;
+};
+
+struct nv_sec2_acr_msg {
+       struct nv_falcon_cmd hdr;
+       u8 msg_type;
+};
+
+struct nv_sec2_acr_bootstrap_falcon_cmd {
+       struct nv_sec2_acr_cmd cmd;
+#define NV_SEC2_ACR_BOOTSTRAP_FALCON_FLAGS_RESET_YES                 0x00000000
+#define NV_SEC2_ACR_BOOTSTRAP_FALCON_FLAGS_RESET_NO                  0x00000001
+       u32 flags;
+       u32 falcon_id;
+};
+
+struct nv_sec2_acr_bootstrap_falcon_msg {
+       struct nv_sec2_acr_msg msg;
+       u32 error_code;
+       u32 falcon_id;
+};
+#endif
index f704ae6..3065974 100644 (file)
 
 #define VOLTA_A                                       /* cl9097.h */ 0x0000c397
 
+#define TURING_A                                      /* cl9097.h */ 0x0000c597
+
 #define NV74_BSP                                                     0x000074b0
 
 #define GT212_MSVLD                                                  0x000085b1
 #define PASCAL_COMPUTE_A                                             0x0000c0c0
 #define PASCAL_COMPUTE_B                                             0x0000c1c0
 #define VOLTA_COMPUTE_A                                              0x0000c3c0
+#define TURING_COMPUTE_A                                             0x0000c5c0
 
 #define NV74_CIPHER                                                  0x000074c1
 #endif
index 8450127..c21d09f 100644 (file)
@@ -35,7 +35,7 @@ struct nvif_mmu_type_v0 {
 
 struct nvif_mmu_kind_v0 {
        __u8  version;
-       __u8  pad01[1];
+       __u8  kind_inv;
        __u16 count;
        __u8  data[];
 };
index 747ecf6..cec1e88 100644 (file)
@@ -7,6 +7,7 @@ struct nvif_mmu {
        u8  dmabits;
        u8  heap_nr;
        u8  type_nr;
+       u8  kind_inv;
        u16 kind_nr;
        s32 mem;
 
@@ -36,9 +37,8 @@ void nvif_mmu_fini(struct nvif_mmu *);
 static inline bool
 nvif_mmu_kind_valid(struct nvif_mmu *mmu, u8 kind)
 {
-       const u8 invalid = mmu->kind_nr - 1;
        if (kind) {
-               if (kind >= mmu->kind_nr || mmu->kind[kind] == invalid)
+               if (kind >= mmu->kind_nr || mmu->kind[kind] == mmu->kind_inv)
                        return false;
        }
        return true;
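The hunk above replaces the old assumption that the invalid kind marker is always `kind_nr - 1` with an explicit `kind_inv` value reported alongside the kind table. A minimal stand-alone sketch of the updated check (the struct here is a stripped-down stand-in, not the real `struct nvif_mmu`):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in carrying only the fields the validity check reads. */
struct mmu {
	uint8_t kind_inv;     /* marker meaning "invalid" in the kind table */
	uint16_t kind_nr;     /* number of entries in the table */
	const uint8_t *kind;  /* per-kind translation table */
};

/* Mirrors the new nvif_mmu_kind_valid(): kind 0 is always valid; any
 * other kind must be in range and must not translate to kind_inv. */
static int kind_valid(const struct mmu *mmu, uint8_t kind)
{
	if (kind) {
		if (kind >= mmu->kind_nr || mmu->kind[kind] == mmu->kind_inv)
			return 0;
	}
	return 1;
}
```

Making the marker explicit lets hardware whose invalid entry is not the last table slot report it correctly.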
index 6d55cd0..5c007ce 100644 (file)
@@ -23,13 +23,13 @@ enum nvkm_devidx {
        NVKM_SUBDEV_MMU,
        NVKM_SUBDEV_BAR,
        NVKM_SUBDEV_FAULT,
+       NVKM_SUBDEV_ACR,
        NVKM_SUBDEV_PMU,
        NVKM_SUBDEV_VOLT,
        NVKM_SUBDEV_ICCSENSE,
        NVKM_SUBDEV_THERM,
        NVKM_SUBDEV_CLK,
        NVKM_SUBDEV_GSP,
-       NVKM_SUBDEV_SECBOOT,
 
        NVKM_ENGINE_BSP,
 
@@ -129,6 +129,7 @@ struct nvkm_device {
                struct notifier_block nb;
        } acpi;
 
+       struct nvkm_acr *acr;
        struct nvkm_bar *bar;
        struct nvkm_bios *bios;
        struct nvkm_bus *bus;
@@ -149,7 +150,6 @@ struct nvkm_device {
        struct nvkm_subdev *mxm;
        struct nvkm_pci *pci;
        struct nvkm_pmu *pmu;
-       struct nvkm_secboot *secboot;
        struct nvkm_therm *therm;
        struct nvkm_timer *timer;
        struct nvkm_top *top;
@@ -169,7 +169,7 @@ struct nvkm_device {
        struct nvkm_engine *mspdec;
        struct nvkm_engine *msppp;
        struct nvkm_engine *msvld;
-       struct nvkm_engine *nvenc[3];
+       struct nvkm_nvenc *nvenc[3];
        struct nvkm_nvdec *nvdec[3];
        struct nvkm_pm *pm;
        struct nvkm_engine *sec;
@@ -202,6 +202,7 @@ struct nvkm_device_quirk {
 struct nvkm_device_chip {
        const char *name;
 
+       int (*acr     )(struct nvkm_device *, int idx, struct nvkm_acr **);
        int (*bar     )(struct nvkm_device *, int idx, struct nvkm_bar **);
        int (*bios    )(struct nvkm_device *, int idx, struct nvkm_bios **);
        int (*bus     )(struct nvkm_device *, int idx, struct nvkm_bus **);
@@ -222,7 +223,6 @@ struct nvkm_device_chip {
        int (*mxm     )(struct nvkm_device *, int idx, struct nvkm_subdev **);
        int (*pci     )(struct nvkm_device *, int idx, struct nvkm_pci **);
        int (*pmu     )(struct nvkm_device *, int idx, struct nvkm_pmu **);
-       int (*secboot )(struct nvkm_device *, int idx, struct nvkm_secboot **);
        int (*therm   )(struct nvkm_device *, int idx, struct nvkm_therm **);
        int (*timer   )(struct nvkm_device *, int idx, struct nvkm_timer **);
        int (*top     )(struct nvkm_device *, int idx, struct nvkm_top **);
@@ -242,7 +242,7 @@ struct nvkm_device_chip {
        int (*mspdec  )(struct nvkm_device *, int idx, struct nvkm_engine **);
        int (*msppp   )(struct nvkm_device *, int idx, struct nvkm_engine **);
        int (*msvld   )(struct nvkm_device *, int idx, struct nvkm_engine **);
-       int (*nvenc[3])(struct nvkm_device *, int idx, struct nvkm_engine **);
+       int (*nvenc[3])(struct nvkm_device *, int idx, struct nvkm_nvenc **);
        int (*nvdec[3])(struct nvkm_device *, int idx, struct nvkm_nvdec **);
        int (*pm      )(struct nvkm_device *, int idx, struct nvkm_pm **);
        int (*sec     )(struct nvkm_device *, int idx, struct nvkm_engine **);
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/core/falcon.h b/drivers/gpu/drm/nouveau/include/nvkm/core/falcon.h
new file mode 100644 (file)
index 0000000..daa8e4b
--- /dev/null
@@ -0,0 +1,77 @@
+#ifndef __NVKM_FALCON_H__
+#define __NVKM_FALCON_H__
+#include <engine/falcon.h>
+
+int nvkm_falcon_ctor(const struct nvkm_falcon_func *, struct nvkm_subdev *owner,
+                    const char *name, u32 addr, struct nvkm_falcon *);
+void nvkm_falcon_dtor(struct nvkm_falcon *);
+
+void nvkm_falcon_v1_load_imem(struct nvkm_falcon *,
+                             void *, u32, u32, u16, u8, bool);
+void nvkm_falcon_v1_load_dmem(struct nvkm_falcon *, void *, u32, u32, u8);
+void nvkm_falcon_v1_read_dmem(struct nvkm_falcon *, u32, u32, u8, void *);
+void nvkm_falcon_v1_bind_context(struct nvkm_falcon *, struct nvkm_memory *);
+int nvkm_falcon_v1_wait_for_halt(struct nvkm_falcon *, u32);
+int nvkm_falcon_v1_clear_interrupt(struct nvkm_falcon *, u32);
+void nvkm_falcon_v1_set_start_addr(struct nvkm_falcon *, u32 start_addr);
+void nvkm_falcon_v1_start(struct nvkm_falcon *);
+int nvkm_falcon_v1_enable(struct nvkm_falcon *);
+void nvkm_falcon_v1_disable(struct nvkm_falcon *);
+
+void gp102_sec2_flcn_bind_context(struct nvkm_falcon *, struct nvkm_memory *);
+int gp102_sec2_flcn_enable(struct nvkm_falcon *);
+
+#define FLCN_PRINTK(t,f,fmt,a...) do {                                         \
+       if (nvkm_subdev_name[(f)->owner->index] != (f)->name)                  \
+               nvkm_##t((f)->owner, "%s: "fmt"\n", (f)->name, ##a);           \
+       else                                                                   \
+               nvkm_##t((f)->owner, fmt"\n", ##a);                            \
+} while(0)
+#define FLCN_DBG(f,fmt,a...) FLCN_PRINTK(debug, (f), fmt, ##a)
+#define FLCN_ERR(f,fmt,a...) FLCN_PRINTK(error, (f), fmt, ##a)
+
+/**
+ * struct nv_falcon_msg - header for all messages
+ *
+ * @unit_id:   id of firmware process that sent the message
+ * @size:      total size of message
+ * @ctrl_flags:        control flags
+ * @seq_id:    used to match a message from its corresponding command
+ */
+struct nv_falcon_msg {
+       u8 unit_id;
+       u8 size;
+       u8 ctrl_flags;
+       u8 seq_id;
+};
+
+#define nv_falcon_cmd nv_falcon_msg
+#define NV_FALCON_CMD_UNIT_ID_REWIND                                       0x00
+
+struct nvkm_falcon_qmgr;
+int nvkm_falcon_qmgr_new(struct nvkm_falcon *, struct nvkm_falcon_qmgr **);
+void nvkm_falcon_qmgr_del(struct nvkm_falcon_qmgr **);
+
+typedef int
+(*nvkm_falcon_qmgr_callback)(void *priv, struct nv_falcon_msg *);
+
+struct nvkm_falcon_cmdq;
+int nvkm_falcon_cmdq_new(struct nvkm_falcon_qmgr *, const char *name,
+                        struct nvkm_falcon_cmdq **);
+void nvkm_falcon_cmdq_del(struct nvkm_falcon_cmdq **);
+void nvkm_falcon_cmdq_init(struct nvkm_falcon_cmdq *,
+                          u32 index, u32 offset, u32 size);
+void nvkm_falcon_cmdq_fini(struct nvkm_falcon_cmdq *);
+int nvkm_falcon_cmdq_send(struct nvkm_falcon_cmdq *, struct nv_falcon_cmd *,
+                         nvkm_falcon_qmgr_callback, void *priv,
+                         unsigned long timeout_jiffies);
+
+struct nvkm_falcon_msgq;
+int nvkm_falcon_msgq_new(struct nvkm_falcon_qmgr *, const char *name,
+                        struct nvkm_falcon_msgq **);
+void nvkm_falcon_msgq_del(struct nvkm_falcon_msgq **);
+void nvkm_falcon_msgq_init(struct nvkm_falcon_msgq *,
+                          u32 index, u32 offset, u32 size);
+int nvkm_falcon_msgq_recv_initmsg(struct nvkm_falcon_msgq *, void *, u32 size);
+void nvkm_falcon_msgq_recv(struct nvkm_falcon_msgq *);
+#endif
index 383370c..d14b7fb 100644 (file)
@@ -1,12 +1,55 @@
 /* SPDX-License-Identifier: MIT */
 #ifndef __NVKM_FIRMWARE_H__
 #define __NVKM_FIRMWARE_H__
+#include <core/option.h>
 #include <core/subdev.h>
 
-int nvkm_firmware_get_version(const struct nvkm_subdev *, const char *fwname,
-                             int min_version, int max_version,
-                             const struct firmware **);
-int nvkm_firmware_get(const struct nvkm_subdev *, const char *fwname,
+int nvkm_firmware_get(const struct nvkm_subdev *, const char *fwname, int ver,
                      const struct firmware **);
 void nvkm_firmware_put(const struct firmware *);
+
+int nvkm_firmware_load_blob(const struct nvkm_subdev *subdev, const char *path,
+                           const char *name, int ver, struct nvkm_blob *);
+int nvkm_firmware_load_name(const struct nvkm_subdev *subdev, const char *path,
+                           const char *name, int ver,
+                           const struct firmware **);
+
+#define nvkm_firmware_load(s,l,o,p...) ({                                      \
+       struct nvkm_subdev *_s = (s);                                          \
+       const char *_opts = (o);                                               \
+       char _option[32];                                                      \
+       typeof(l[0]) *_list = (l), *_next, *_fwif = NULL;                      \
+       int _ver, _fwv, _ret = 0;                                              \
+                                                                               \
+       snprintf(_option, sizeof(_option), "Nv%sFw", _opts);                   \
+       _ver = nvkm_longopt(_s->device->cfgopt, _option, -2);                  \
+       if (_ver >= -1) {                                                      \
+               for (_next = _list; !_fwif && _next->load; _next++) {          \
+                       if (_next->version == _ver)                            \
+                               _fwif = _next;                                 \
+               }                                                              \
+               _ret = _fwif ? 0 : -EINVAL;                                    \
+       }                                                                      \
+                                                                               \
+       if (_ret == 0) {                                                       \
+               snprintf(_option, sizeof(_option), "Nv%sFwVer", _opts);        \
+               _fwv = _fwif ? _fwif->version : -1;                            \
+               _ver = nvkm_longopt(_s->device->cfgopt, _option, _fwv);        \
+               for (_next = _fwif ? _fwif : _list; _next->load; _next++) {    \
+                       _fwv = (_ver >= 0) ? _ver : _next->version;            \
+                       _ret = _next->load(p, _fwv, _next);                    \
+                       if (_ret == 0 || _ver >= 0) {                          \
+                               _fwif = _next;                                 \
+                               break;                                         \
+                       }                                                      \
+               }                                                              \
+       }                                                                      \
+                                                                               \
+       if (_ret) {                                                            \
+               nvkm_error(_s, "failed to load firmware\n");                   \
+               _fwif = ERR_PTR(_ret);                                         \
+       }                                                                      \
+                                                                              \
+       _fwif;                                                                 \
+})
 #endif
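The `nvkm_firmware_load()` macro above first honours a user-forced version from `cfgopt`, then falls back to walking the `fwif` list entry by entry until one entry's `load` callback succeeds. The following plain-C sketch covers only that fallback walk (the config-option override is omitted, and the struct and loader names are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* One candidate firmware interface; a NULL .load terminates the list. */
struct fwif {
	int version;
	int (*load)(int version);  /* 0 on success, negative on failure */
};

/* Mirrors the fallback loop in nvkm_firmware_load(): try entries in
 * list order (newest first by convention) and return the first one
 * whose loader succeeds, or NULL if every candidate fails. */
static const struct fwif *fw_select(const struct fwif *list)
{
	const struct fwif *f;

	for (f = list; f->load; f++) {
		if (f->load(f->version) == 0)
			return f;
	}
	return NULL;
}

/* Example loader stub for the test below: only version 1 "exists". */
static int only_v1(int version)
{
	return version == 1 ? 0 : -1;
}
```

Ordering the list newest-first means a missing newer firmware file degrades gracefully to an older supported one.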
index b23bf61..74d3f1a 100644 (file)
@@ -84,6 +84,22 @@ void nvkm_memory_tags_put(struct nvkm_memory *, struct nvkm_device *,
        nvkm_wo32((o), __a + 4, upper_32_bits(__d));                           \
 } while(0)
 
+#define nvkm_robj(o,a,p,s) do {                                                \
+       u32 _addr = (a), _size = (s) >> 2, *_data = (void *)(p);               \
+       while (_size--) {                                                      \
+               *(_data++) = nvkm_ro32((o), _addr);                            \
+               _addr += 4;                                                    \
+       }                                                                      \
+} while(0)
+
+#define nvkm_wobj(o,a,p,s) do {                                                \
+       u32 _addr = (a), _size = (s) >> 2, *_data = (void *)(p);               \
+       while (_size--) {                                                      \
+               nvkm_wo32((o), _addr, *(_data++));                             \
+               _addr += 4;                                                    \
+       }                                                                      \
+} while(0)
+
 #define nvkm_fill(t,s,o,a,d,c) do {                                            \
        u64 _a = (a), _c = (c), _d = (d), _o = _a >> s, _s = _c << s;          \
        u##t __iomem *_m = nvkm_kmap(o);                                       \
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/core/msgqueue.h b/drivers/gpu/drm/nouveau/include/nvkm/core/msgqueue.h
deleted file mode 100644 (file)
index bf3e532..0000000
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_CORE_MSGQUEUE_H
-#define __NVKM_CORE_MSGQUEUE_H
-#include <subdev/secboot.h>
-struct nvkm_msgqueue;
-
-/* Hopefully we will never have firmware arguments larger than that... */
-#define NVKM_MSGQUEUE_CMDLINE_SIZE 0x100
-
-int nvkm_msgqueue_new(u32, struct nvkm_falcon *, const struct nvkm_secboot *,
-                     struct nvkm_msgqueue **);
-void nvkm_msgqueue_del(struct nvkm_msgqueue **);
-void nvkm_msgqueue_recv(struct nvkm_msgqueue *);
-int nvkm_msgqueue_reinit(struct nvkm_msgqueue *);
-
-/* useful if we run a NVIDIA-signed firmware */
-void nvkm_msgqueue_write_cmdline(struct nvkm_msgqueue *, void *);
-
-/* interface to ACR unit running on falcon (NVIDIA signed firmware) */
-int nvkm_msgqueue_acr_boot_falcons(struct nvkm_msgqueue *, unsigned long);
-
-#endif
index 029a416..d7ba320 100644 (file)
        iowrite32_native(lower_32_bits(_v), &_p[0]);                           \
        iowrite32_native(upper_32_bits(_v), &_p[1]);                           \
 } while(0)
+
+struct nvkm_blob {
+       void *data;
+       u32 size;
+};
+
+static inline void
+nvkm_blob_dtor(struct nvkm_blob *blob)
+{
+       kfree(blob->data);
+       blob->data = NULL;
+       blob->size = 0;
+}
 #endif
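`nvkm_blob_dtor()` above zeroes the fields after freeing, which makes repeated destruction harmless. A userspace sketch of the same pattern (`free()` standing in for `kfree()`; struct and function names here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for struct nvkm_blob: an owned buffer plus its size. */
struct blob {
	void *data;
	unsigned int size;
};

/* Mirrors nvkm_blob_dtor(): free the buffer and reset the fields so a
 * second call is a no-op (free(NULL) is defined to do nothing). */
static void blob_dtor(struct blob *blob)
{
	free(blob->data);
	blob->data = NULL;
	blob->size = 0;
}
```

Resetting to `NULL`/`0` in the dtor is what lets teardown paths call it unconditionally without tracking whether the blob was ever loaded.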
index 23b582d..27c1f86 100644 (file)
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: MIT */
-#ifndef __NVKM_FALCON_H__
-#define __NVKM_FALCON_H__
+#ifndef __NVKM_FLCNEN_H__
+#define __NVKM_FLCNEN_H__
 #define nvkm_falcon(p) container_of((p), struct nvkm_falcon, engine)
 #include <core/engine.h>
 struct nvkm_fifo_chan;
@@ -23,12 +23,13 @@ struct nvkm_falcon {
 
        struct mutex mutex;
        struct mutex dmem_mutex;
+       bool oneinit;
+
        const struct nvkm_subdev *user;
 
        u8 version;
        u8 secret;
        bool debug;
-       bool has_emem;
 
        struct nvkm_memory *core;
        bool external;
@@ -76,9 +77,14 @@ struct nvkm_falcon_func {
        } data;
        void (*init)(struct nvkm_falcon *);
        void (*intr)(struct nvkm_falcon *, struct nvkm_fifo_chan *);
+
+       u32 debug;
+       u32 fbif;
+
        void (*load_imem)(struct nvkm_falcon *, void *, u32, u32, u16, u8, bool);
        void (*load_dmem)(struct nvkm_falcon *, void *, u32, u32, u8);
        void (*read_dmem)(struct nvkm_falcon *, u32, u32, u8, void *);
+       u32 emem_addr;
        void (*bind_context)(struct nvkm_falcon *, struct nvkm_memory *);
        int (*wait_for_halt)(struct nvkm_falcon *, u32);
        int (*clear_interrupt)(struct nvkm_falcon *, u32);
@@ -86,6 +92,13 @@ struct nvkm_falcon_func {
        void (*start)(struct nvkm_falcon *);
        int (*enable)(struct nvkm_falcon *falcon);
        void (*disable)(struct nvkm_falcon *falcon);
+       int (*reset)(struct nvkm_falcon *);
+
+       struct {
+               u32 head;
+               u32 tail;
+               u32 stride;
+       } cmdq, msgq;
 
        struct nvkm_sclass sclass[];
 };
@@ -122,5 +135,4 @@ int nvkm_falcon_clear_interrupt(struct nvkm_falcon *, u32);
 int nvkm_falcon_enable(struct nvkm_falcon *);
 void nvkm_falcon_disable(struct nvkm_falcon *);
 int nvkm_falcon_reset(struct nvkm_falcon *);
-
 #endif
index 2cde36f..1530c81 100644 (file)
@@ -50,6 +50,8 @@ int gp100_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
 int gp102_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
 int gp104_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
 int gp107_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
+int gp108_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
 int gp10b_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
 int gv100_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
+int tu102_gr_new(struct nvkm_device *, int, struct nvkm_gr **);
 #endif
index 7c7d7f0..1b3183e 100644 (file)
@@ -3,13 +3,13 @@
 #define __NVKM_NVDEC_H__
 #define nvkm_nvdec(p) container_of((p), struct nvkm_nvdec, engine)
 #include <core/engine.h>
+#include <core/falcon.h>
 
 struct nvkm_nvdec {
+       const struct nvkm_nvdec_func *func;
        struct nvkm_engine engine;
-       u32 addr;
-
-       struct nvkm_falcon *falcon;
+       struct nvkm_falcon falcon;
 };
 
-int gp102_nvdec_new(struct nvkm_device *, int, struct nvkm_nvdec **);
+int gm107_nvdec_new(struct nvkm_device *, int, struct nvkm_nvdec **);
 #endif
index 2162404..33e6ba8 100644
@@ -1,5 +1,15 @@
 /* SPDX-License-Identifier: MIT */
 #ifndef __NVKM_NVENC_H__
 #define __NVKM_NVENC_H__
+#define nvkm_nvenc(p) container_of((p), struct nvkm_nvenc, engine)
 #include <core/engine.h>
+#include <core/falcon.h>
+
+struct nvkm_nvenc {
+       const struct nvkm_nvenc_func *func;
+       struct nvkm_engine engine;
+       struct nvkm_falcon falcon;
+};
+
+int gm107_nvenc_new(struct nvkm_device *, int, struct nvkm_nvenc **);
 #endif
index 33078f8..34dc765 100644
@@ -1,17 +1,24 @@
 /* SPDX-License-Identifier: MIT */
 #ifndef __NVKM_SEC2_H__
 #define __NVKM_SEC2_H__
+#define nvkm_sec2(p) container_of((p), struct nvkm_sec2, engine)
 #include <core/engine.h>
+#include <core/falcon.h>
 
 struct nvkm_sec2 {
+       const struct nvkm_sec2_func *func;
        struct nvkm_engine engine;
-       u32 addr;
+       struct nvkm_falcon falcon;
+
+       struct nvkm_falcon_qmgr *qmgr;
+       struct nvkm_falcon_cmdq *cmdq;
+       struct nvkm_falcon_msgq *msgq;
 
-       struct nvkm_falcon *falcon;
-       struct nvkm_msgqueue *queue;
        struct work_struct work;
+       bool initmsg_received;
 };
 
 int gp102_sec2_new(struct nvkm_device *, int, struct nvkm_sec2 **);
+int gp108_sec2_new(struct nvkm_device *, int, struct nvkm_sec2 **);
 int tu102_sec2_new(struct nvkm_device *, int, struct nvkm_sec2 **);
 #endif
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/acr.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/acr.h
new file mode 100644
index 0000000..5d9c3a9
--- /dev/null
@@ -0,0 +1,126 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVKM_ACR_H__
+#define __NVKM_ACR_H__
+#define nvkm_acr(p) container_of((p), struct nvkm_acr, subdev)
+#include <core/subdev.h>
+#include <core/falcon.h>
+
+enum nvkm_acr_lsf_id {
+       NVKM_ACR_LSF_PMU = 0,
+       NVKM_ACR_LSF_GSPLITE = 1,
+       NVKM_ACR_LSF_FECS = 2,
+       NVKM_ACR_LSF_GPCCS = 3,
+       NVKM_ACR_LSF_NVDEC = 4,
+       NVKM_ACR_LSF_SEC2 = 7,
+       NVKM_ACR_LSF_MINION = 10,
+       NVKM_ACR_LSF_NUM
+};
+
+static inline const char *
+nvkm_acr_lsf_id(enum nvkm_acr_lsf_id id)
+{
+       switch (id) {
+       case NVKM_ACR_LSF_PMU    : return "pmu";
+       case NVKM_ACR_LSF_GSPLITE: return "gsplite";
+       case NVKM_ACR_LSF_FECS   : return "fecs";
+       case NVKM_ACR_LSF_GPCCS  : return "gpccs";
+       case NVKM_ACR_LSF_NVDEC  : return "nvdec";
+       case NVKM_ACR_LSF_SEC2   : return "sec2";
+       case NVKM_ACR_LSF_MINION : return "minion";
+       default:
+               return "unknown";
+       }
+}
+
+struct nvkm_acr {
+       const struct nvkm_acr_func *func;
+       struct nvkm_subdev subdev;
+
+       struct list_head hsfw, hsf;
+       struct list_head lsfw, lsf;
+
+       struct nvkm_memory *wpr;
+       u64 wpr_start;
+       u64 wpr_end;
+       u64 shadow_start;
+
+       struct nvkm_memory *inst;
+       struct nvkm_vmm *vmm;
+
+       bool done;
+
+       const struct firmware *wpr_fw;
+       bool wpr_comp;
+       u64 wpr_prev;
+};
+
+bool nvkm_acr_managed_falcon(struct nvkm_device *, enum nvkm_acr_lsf_id);
+int nvkm_acr_bootstrap_falcons(struct nvkm_device *, unsigned long mask);
+
+int gm200_acr_new(struct nvkm_device *, int, struct nvkm_acr **);
+int gm20b_acr_new(struct nvkm_device *, int, struct nvkm_acr **);
+int gp102_acr_new(struct nvkm_device *, int, struct nvkm_acr **);
+int gp108_acr_new(struct nvkm_device *, int, struct nvkm_acr **);
+int gp10b_acr_new(struct nvkm_device *, int, struct nvkm_acr **);
+int tu102_acr_new(struct nvkm_device *, int, struct nvkm_acr **);
+
+struct nvkm_acr_lsfw {
+       const struct nvkm_acr_lsf_func *func;
+       struct nvkm_falcon *falcon;
+       enum nvkm_acr_lsf_id id;
+
+       struct list_head head;
+
+       struct nvkm_blob img;
+
+       const struct firmware *sig;
+
+       u32 bootloader_size;
+       u32 bootloader_imem_offset;
+
+       u32 app_size;
+       u32 app_start_offset;
+       u32 app_imem_entry;
+       u32 app_resident_code_offset;
+       u32 app_resident_code_size;
+       u32 app_resident_data_offset;
+       u32 app_resident_data_size;
+
+       u32 ucode_size;
+       u32 data_size;
+
+       struct {
+               u32 lsb;
+               u32 img;
+               u32 bld;
+       } offset;
+       u32 bl_data_size;
+};
+
+struct nvkm_acr_lsf_func {
+/* These (currently) map directly to LSB header flags. */
+#define NVKM_ACR_LSF_LOAD_CODE_AT_0                                  0x00000001
+#define NVKM_ACR_LSF_DMACTL_REQ_CTX                                  0x00000004
+#define NVKM_ACR_LSF_FORCE_PRIV_LOAD                                 0x00000008
+       u32 flags;
+       u32 bld_size;
+       void (*bld_write)(struct nvkm_acr *, u32 bld, struct nvkm_acr_lsfw *);
+       void (*bld_patch)(struct nvkm_acr *, u32 bld, s64 adjust);
+       int (*boot)(struct nvkm_falcon *);
+       int (*bootstrap_falcon)(struct nvkm_falcon *, enum nvkm_acr_lsf_id);
+       int (*bootstrap_multiple_falcons)(struct nvkm_falcon *, u32 mask);
+};
+
+int
+nvkm_acr_lsfw_load_sig_image_desc(struct nvkm_subdev *, struct nvkm_falcon *,
+                                 enum nvkm_acr_lsf_id, const char *path,
+                                 int ver, const struct nvkm_acr_lsf_func *);
+int
+nvkm_acr_lsfw_load_sig_image_desc_v1(struct nvkm_subdev *, struct nvkm_falcon *,
+                                    enum nvkm_acr_lsf_id, const char *path,
+                                    int ver, const struct nvkm_acr_lsf_func *);
+int
+nvkm_acr_lsfw_load_bl_inst_data_sig(struct nvkm_subdev *, struct nvkm_falcon *,
+                                   enum nvkm_acr_lsf_id, const char *path,
+                                   int ver, const struct nvkm_acr_lsf_func *);
+#endif
index 97322f9..a513c16 100644
@@ -31,6 +31,7 @@ struct nvkm_fault_data {
 };
 
 int gp100_fault_new(struct nvkm_device *, int, struct nvkm_fault **);
+int gp10b_fault_new(struct nvkm_device *, int, struct nvkm_fault **);
 int gv100_fault_new(struct nvkm_device *, int, struct nvkm_fault **);
 int tu102_fault_new(struct nvkm_device *, int, struct nvkm_fault **);
 #endif
index 239ad22..34b56b1 100644
@@ -33,6 +33,8 @@ struct nvkm_fb {
        const struct nvkm_fb_func *func;
        struct nvkm_subdev subdev;
 
+       struct nvkm_blob vpr_scrubber;
+
        struct nvkm_ram *ram;
        struct nvkm_mm tags;
 
index 4c672a5..06db676 100644
@@ -2,12 +2,11 @@
 #define __NVKM_GSP_H__
 #define nvkm_gsp(p) container_of((p), struct nvkm_gsp, subdev)
 #include <core/subdev.h>
+#include <core/falcon.h>
 
 struct nvkm_gsp {
        struct nvkm_subdev subdev;
-       u32 addr;
-
-       struct nvkm_falcon *falcon;
+       struct nvkm_falcon falcon;
 };
 
 int gv100_gsp_new(struct nvkm_device *, int, struct nvkm_gsp **);
index 644d527..d76f60d 100644
@@ -40,4 +40,5 @@ int gm107_ltc_new(struct nvkm_device *, int, struct nvkm_ltc **);
 int gm200_ltc_new(struct nvkm_device *, int, struct nvkm_ltc **);
 int gp100_ltc_new(struct nvkm_device *, int, struct nvkm_ltc **);
 int gp102_ltc_new(struct nvkm_device *, int, struct nvkm_ltc **);
+int gp10b_ltc_new(struct nvkm_device *, int, struct nvkm_ltc **);
 #endif
index 4752006..da55308 100644
@@ -2,13 +2,20 @@
 #ifndef __NVKM_PMU_H__
 #define __NVKM_PMU_H__
 #include <core/subdev.h>
-#include <engine/falcon.h>
+#include <core/falcon.h>
 
 struct nvkm_pmu {
        const struct nvkm_pmu_func *func;
        struct nvkm_subdev subdev;
-       struct nvkm_falcon *falcon;
-       struct nvkm_msgqueue *queue;
+       struct nvkm_falcon falcon;
+
+       struct nvkm_falcon_qmgr *qmgr;
+       struct nvkm_falcon_cmdq *hpq;
+       struct nvkm_falcon_cmdq *lpq;
+       struct nvkm_falcon_msgq *msgq;
+       bool initmsg_received;
+
+       struct completion wpr_ready;
 
        struct {
                u32 base;
@@ -43,6 +50,7 @@ int gm107_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **);
 int gm20b_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **);
 int gp100_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **);
 int gp102_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **);
+int gp10b_pmu_new(struct nvkm_device *, int, struct nvkm_pmu **);
 
 /* interface to MEMX process running on PMU */
 struct nvkm_memx;
index f8015e0..1b62ccc 100644
@@ -1162,7 +1162,7 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict, bool intr,
 void
 nouveau_bo_move_init(struct nouveau_drm *drm)
 {
-       static const struct {
+       static const struct _method_table {
                const char *name;
                int engine;
                s32 oclass;
@@ -1192,7 +1192,8 @@ nouveau_bo_move_init(struct nouveau_drm *drm)
                {  "M2MF", 0, 0x0039, nv04_bo_move_m2mf, nv04_bo_move_init },
                {},
                { "CRYPT", 0, 0x88b4, nv98_bo_move_exec, nv50_bo_move_init },
-       }, *mthd = _methods;
+       };
+       const struct _method_table *mthd = _methods;
        const char *name = "CPU";
        int ret;
 
index fa14399..0ad5d87 100644
@@ -635,10 +635,10 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
        unsigned long c, i;
        int ret = -ENOMEM;
 
-       args.src = kcalloc(max, sizeof(args.src), GFP_KERNEL);
+       args.src = kcalloc(max, sizeof(*args.src), GFP_KERNEL);
        if (!args.src)
                goto out;
-       args.dst = kcalloc(max, sizeof(args.dst), GFP_KERNEL);
+       args.dst = kcalloc(max, sizeof(*args.dst), GFP_KERNEL);
        if (!args.dst)
                goto out_free_src;
 
index 2cd8384..b65ae81 100644
@@ -715,7 +715,6 @@ fail_nvkm:
 void
 nouveau_drm_device_remove(struct drm_device *dev)
 {
-       struct pci_dev *pdev = dev->pdev;
        struct nouveau_drm *drm = nouveau_drm(dev);
        struct nvkm_client *client;
        struct nvkm_device *device;
@@ -727,7 +726,6 @@ nouveau_drm_device_remove(struct drm_device *dev)
        device = nvkm_device_find(client->device);
 
        nouveau_drm_device_fini(dev);
-       pci_disable_device(pdev);
        drm_dev_put(dev);
        nvkm_device_del(&device);
 }
@@ -738,6 +736,7 @@ nouveau_drm_remove(struct pci_dev *pdev)
        struct drm_device *dev = pci_get_drvdata(pdev);
 
        nouveau_drm_device_remove(dev);
+       pci_disable_device(pdev);
 }
 
 static int
index 9118df0..70bb6bb 100644
@@ -156,7 +156,7 @@ nouveau_fence_wait_uevent_handler(struct nvif_notify *notify)
 
                fence = list_entry(fctx->pending.next, typeof(*fence), head);
                chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
-               if (nouveau_fence_update(fence->channel, fctx))
+               if (nouveau_fence_update(chan, fctx))
                        ret = NVIF_NOTIFY_DROP;
        }
        spin_unlock_irqrestore(&fctx->lock, flags);
index d445c6f..1c3104d 100644
@@ -741,7 +741,7 @@ nouveau_hwmon_init(struct drm_device *dev)
                        special_groups[i++] = &pwm_fan_sensor_group;
        }
 
-       special_groups[i] = 0;
+       special_groups[i] = NULL;
        hwmon_dev = hwmon_device_register_with_info(dev->dev, "nouveau", dev,
                                                        &nouveau_chip_info,
                                                        special_groups);
index 77a0c6a..7ca0a24 100644
@@ -63,14 +63,12 @@ nouveau_vram_manager_new(struct ttm_mem_type_manager *man,
 {
        struct nouveau_bo *nvbo = nouveau_bo(bo);
        struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
-       struct nouveau_mem *mem;
        int ret;
 
        if (drm->client.device.info.ram_size == 0)
                return -ENOMEM;
 
        ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, reg);
-       mem = nouveau_mem(reg);
        if (ret)
                return ret;
 
@@ -103,11 +101,9 @@ nouveau_gart_manager_new(struct ttm_mem_type_manager *man,
 {
        struct nouveau_bo *nvbo = nouveau_bo(bo);
        struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
-       struct nouveau_mem *mem;
        int ret;
 
        ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, reg);
-       mem = nouveau_mem(reg);
        if (ret)
                return ret;
 
index 5641bda..47efc40 100644
@@ -121,6 +121,7 @@ nvif_mmu_init(struct nvif_object *parent, s32 oclass, struct nvif_mmu *mmu)
                                       kind, argc);
                if (ret == 0)
                        memcpy(mmu->kind, kind->data, kind->count);
+               mmu->kind_inv = kind->kind_inv;
                kfree(kind);
        }
 
index b53de9b..db3ade1 100644
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: MIT
 include $(src)/nvkm/core/Kbuild
+include $(src)/nvkm/nvfw/Kbuild
 include $(src)/nvkm/falcon/Kbuild
 include $(src)/nvkm/subdev/Kbuild
 include $(src)/nvkm/engine/Kbuild
index 092acde..8b25367 100644
 #include <core/device.h>
 #include <core/firmware.h>
 
+int
+nvkm_firmware_load_name(const struct nvkm_subdev *subdev, const char *base,
+                       const char *name, int ver, const struct firmware **pfw)
+{
+       char path[64];
+       int ret;
+
+       snprintf(path, sizeof(path), "%s%s", base, name);
+       ret = nvkm_firmware_get(subdev, path, ver, pfw);
+       if (ret < 0)
+               return ret;
+
+       return 0;
+}
+
+int
+nvkm_firmware_load_blob(const struct nvkm_subdev *subdev, const char *base,
+                       const char *name, int ver, struct nvkm_blob *blob)
+{
+       const struct firmware *fw;
+       int ret;
+
+       ret = nvkm_firmware_load_name(subdev, base, name, ver, &fw);
+       if (ret == 0) {
+               blob->data = kmemdup(fw->data, fw->size, GFP_KERNEL);
+               blob->size = fw->size;
+               nvkm_firmware_put(fw);
+               if (!blob->data)
+                       return -ENOMEM;
+       }
+
+       return ret;
+}
+
 /**
  * nvkm_firmware_get - load firmware from the official nvidia/chip/ directory
  * @subdev     subdevice that will use that firmware
@@ -32,9 +66,8 @@
  * Firmware files released by NVIDIA will always follow this format.
  */
 int
-nvkm_firmware_get_version(const struct nvkm_subdev *subdev, const char *fwname,
-                         int min_version, int max_version,
-                         const struct firmware **fw)
+nvkm_firmware_get(const struct nvkm_subdev *subdev, const char *fwname, int ver,
+                 const struct firmware **fw)
 {
        struct nvkm_device *device = subdev->device;
        char f[64];
@@ -50,31 +83,21 @@ nvkm_firmware_get_version(const struct nvkm_subdev *subdev, const char *fwname,
                cname[i] = tolower(cname[i]);
        }
 
-       for (i = max_version; i >= min_version; i--) {
-               if (i != 0)
-                       snprintf(f, sizeof(f), "nvidia/%s/%s-%d.bin", cname, fwname, i);
-               else
-                       snprintf(f, sizeof(f), "nvidia/%s/%s.bin", cname, fwname);
-
-               if (!firmware_request_nowarn(fw, f, device->dev)) {
-                       nvkm_debug(subdev, "firmware \"%s\" loaded\n", f);
-                       return i;
-               }
+       if (ver != 0)
+               snprintf(f, sizeof(f), "nvidia/%s/%s-%d.bin", cname, fwname, ver);
+       else
+               snprintf(f, sizeof(f), "nvidia/%s/%s.bin", cname, fwname);
 
-               nvkm_debug(subdev, "firmware \"%s\" unavailable\n", f);
+       if (!firmware_request_nowarn(fw, f, device->dev)) {
+               nvkm_debug(subdev, "firmware \"%s\" loaded - %zu byte(s)\n",
+                          f, (*fw)->size);
+               return 0;
        }
 
-       nvkm_error(subdev, "failed to load firmware \"%s\"", fwname);
+       nvkm_debug(subdev, "firmware \"%s\" unavailable\n", f);
        return -ENOENT;
 }
 
-int
-nvkm_firmware_get(const struct nvkm_subdev *subdev, const char *fwname,
-                 const struct firmware **fw)
-{
-       return nvkm_firmware_get_version(subdev, fwname, 0, 0, fw);
-}
-
 /**
  * nvkm_firmware_put - release firmware loaded with nvkm_firmware_get
  */
index 245990d..79a8f9d 100644
@@ -30,6 +30,7 @@ static struct lock_class_key nvkm_subdev_lock_class[NVKM_SUBDEV_NR];
 
 const char *
 nvkm_subdev_name[NVKM_SUBDEV_NR] = {
+       [NVKM_SUBDEV_ACR     ] = "acr",
        [NVKM_SUBDEV_BAR     ] = "bar",
        [NVKM_SUBDEV_VBIOS   ] = "bios",
        [NVKM_SUBDEV_BUS     ] = "bus",
@@ -50,7 +51,6 @@ nvkm_subdev_name[NVKM_SUBDEV_NR] = {
        [NVKM_SUBDEV_MXM     ] = "mxm",
        [NVKM_SUBDEV_PCI     ] = "pci",
        [NVKM_SUBDEV_PMU     ] = "pmu",
-       [NVKM_SUBDEV_SECBOOT ] = "secboot",
        [NVKM_SUBDEV_THERM   ] = "therm",
        [NVKM_SUBDEV_TIMER   ] = "tmr",
        [NVKM_SUBDEV_TOP     ] = "top",
index c3c7159..c7d7009 100644
@@ -1987,6 +1987,8 @@ nv117_chipset = {
        .dma = gf119_dma_new,
        .fifo = gm107_fifo_new,
        .gr = gm107_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sw = gf100_sw_new,
 };
 
@@ -2027,6 +2029,7 @@ nv118_chipset = {
 static const struct nvkm_device_chip
 nv120_chipset = {
        .name = "GM200",
+       .acr = gm200_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2045,7 +2048,6 @@ nv120_chipset = {
        .pci = gk104_pci_new,
        .pmu = gm107_pmu_new,
        .therm = gm200_therm_new,
-       .secboot = gm200_secboot_new,
        .timer = gk20a_timer_new,
        .top = gk104_top_new,
        .volt = gk104_volt_new,
@@ -2056,12 +2058,16 @@ nv120_chipset = {
        .dma = gf119_dma_new,
        .fifo = gm200_fifo_new,
        .gr = gm200_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
        .sw = gf100_sw_new,
 };
 
 static const struct nvkm_device_chip
 nv124_chipset = {
        .name = "GM204",
+       .acr = gm200_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2080,7 +2086,6 @@ nv124_chipset = {
        .pci = gk104_pci_new,
        .pmu = gm107_pmu_new,
        .therm = gm200_therm_new,
-       .secboot = gm200_secboot_new,
        .timer = gk20a_timer_new,
        .top = gk104_top_new,
        .volt = gk104_volt_new,
@@ -2091,12 +2096,16 @@ nv124_chipset = {
        .dma = gf119_dma_new,
        .fifo = gm200_fifo_new,
        .gr = gm200_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
        .sw = gf100_sw_new,
 };
 
 static const struct nvkm_device_chip
 nv126_chipset = {
        .name = "GM206",
+       .acr = gm200_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2115,7 +2124,6 @@ nv126_chipset = {
        .pci = gk104_pci_new,
        .pmu = gm107_pmu_new,
        .therm = gm200_therm_new,
-       .secboot = gm200_secboot_new,
        .timer = gk20a_timer_new,
        .top = gk104_top_new,
        .volt = gk104_volt_new,
@@ -2126,12 +2134,15 @@ nv126_chipset = {
        .dma = gf119_dma_new,
        .fifo = gm200_fifo_new,
        .gr = gm200_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sw = gf100_sw_new,
 };
 
 static const struct nvkm_device_chip
 nv12b_chipset = {
        .name = "GM20B",
+       .acr = gm20b_acr_new,
        .bar = gm20b_bar_new,
        .bus = gf100_bus_new,
        .clk = gm20b_clk_new,
@@ -2143,7 +2154,6 @@ nv12b_chipset = {
        .mc = gk20a_mc_new,
        .mmu = gm20b_mmu_new,
        .pmu = gm20b_pmu_new,
-       .secboot = gm20b_secboot_new,
        .timer = gk20a_timer_new,
        .top = gk104_top_new,
        .ce[2] = gm200_ce_new,
@@ -2157,6 +2167,7 @@ nv12b_chipset = {
 static const struct nvkm_device_chip
 nv130_chipset = {
        .name = "GP100",
+       .acr = gm200_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2172,7 +2183,6 @@ nv130_chipset = {
        .mc = gp100_mc_new,
        .mmu = gp100_mmu_new,
        .therm = gp100_therm_new,
-       .secboot = gm200_secboot_new,
        .pci = gp100_pci_new,
        .pmu = gp100_pmu_new,
        .timer = gk20a_timer_new,
@@ -2187,12 +2197,17 @@ nv130_chipset = {
        .disp = gp100_disp_new,
        .fifo = gp100_fifo_new,
        .gr = gp100_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
+       .nvenc[2] = gm107_nvenc_new,
        .sw = gf100_sw_new,
 };
 
 static const struct nvkm_device_chip
 nv132_chipset = {
        .name = "GP102",
+       .acr = gp102_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2208,7 +2223,6 @@ nv132_chipset = {
        .mc = gp100_mc_new,
        .mmu = gp100_mmu_new,
        .therm = gp100_therm_new,
-       .secboot = gp102_secboot_new,
        .pci = gp100_pci_new,
        .pmu = gp102_pmu_new,
        .timer = gk20a_timer_new,
@@ -2221,7 +2235,9 @@ nv132_chipset = {
        .dma = gf119_dma_new,
        .fifo = gp100_fifo_new,
        .gr = gp102_gr_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
        .sec2 = gp102_sec2_new,
        .sw = gf100_sw_new,
 };
@@ -2229,6 +2245,7 @@ nv132_chipset = {
 static const struct nvkm_device_chip
 nv134_chipset = {
        .name = "GP104",
+       .acr = gp102_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2244,7 +2261,6 @@ nv134_chipset = {
        .mc = gp100_mc_new,
        .mmu = gp100_mmu_new,
        .therm = gp100_therm_new,
-       .secboot = gp102_secboot_new,
        .pci = gp100_pci_new,
        .pmu = gp102_pmu_new,
        .timer = gk20a_timer_new,
@@ -2257,7 +2273,9 @@ nv134_chipset = {
        .dma = gf119_dma_new,
        .fifo = gp100_fifo_new,
        .gr = gp104_gr_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
        .sec2 = gp102_sec2_new,
        .sw = gf100_sw_new,
 };
@@ -2265,6 +2283,7 @@ nv134_chipset = {
 static const struct nvkm_device_chip
 nv136_chipset = {
        .name = "GP106",
+       .acr = gp102_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2280,7 +2299,6 @@ nv136_chipset = {
        .mc = gp100_mc_new,
        .mmu = gp100_mmu_new,
        .therm = gp100_therm_new,
-       .secboot = gp102_secboot_new,
        .pci = gp100_pci_new,
        .pmu = gp102_pmu_new,
        .timer = gk20a_timer_new,
@@ -2293,7 +2311,8 @@ nv136_chipset = {
        .dma = gf119_dma_new,
        .fifo = gp100_fifo_new,
        .gr = gp104_gr_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sec2 = gp102_sec2_new,
        .sw = gf100_sw_new,
 };
@@ -2301,6 +2320,7 @@ nv136_chipset = {
 static const struct nvkm_device_chip
 nv137_chipset = {
        .name = "GP107",
+       .acr = gp102_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2316,7 +2336,6 @@ nv137_chipset = {
        .mc = gp100_mc_new,
        .mmu = gp100_mmu_new,
        .therm = gp100_therm_new,
-       .secboot = gp102_secboot_new,
        .pci = gp100_pci_new,
        .pmu = gp102_pmu_new,
        .timer = gk20a_timer_new,
@@ -2329,7 +2348,9 @@ nv137_chipset = {
        .dma = gf119_dma_new,
        .fifo = gp100_fifo_new,
        .gr = gp107_gr_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
        .sec2 = gp102_sec2_new,
        .sw = gf100_sw_new,
 };
@@ -2337,6 +2358,7 @@ nv137_chipset = {
 static const struct nvkm_device_chip
 nv138_chipset = {
        .name = "GP108",
+       .acr = gp108_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2352,7 +2374,6 @@ nv138_chipset = {
        .mc = gp100_mc_new,
        .mmu = gp100_mmu_new,
        .therm = gp100_therm_new,
-       .secboot = gp108_secboot_new,
        .pci = gp100_pci_new,
        .pmu = gp102_pmu_new,
        .timer = gk20a_timer_new,
@@ -2364,30 +2385,30 @@ nv138_chipset = {
        .disp = gp102_disp_new,
        .dma = gf119_dma_new,
        .fifo = gp100_fifo_new,
-       .gr = gp107_gr_new,
-       .nvdec[0] = gp102_nvdec_new,
-       .sec2 = gp102_sec2_new,
+       .gr = gp108_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .sec2 = gp108_sec2_new,
        .sw = gf100_sw_new,
 };
 
 static const struct nvkm_device_chip
 nv13b_chipset = {
        .name = "GP10B",
+       .acr = gp10b_acr_new,
        .bar = gm20b_bar_new,
        .bus = gf100_bus_new,
-       .fault = gp100_fault_new,
+       .fault = gp10b_fault_new,
        .fb = gp10b_fb_new,
        .fuse = gm107_fuse_new,
        .ibus = gp10b_ibus_new,
        .imem = gk20a_instmem_new,
-       .ltc = gp102_ltc_new,
+       .ltc = gp10b_ltc_new,
        .mc = gp10b_mc_new,
        .mmu = gp10b_mmu_new,
-       .secboot = gp10b_secboot_new,
-       .pmu = gm20b_pmu_new,
+       .pmu = gp10b_pmu_new,
        .timer = gk20a_timer_new,
        .top = gk104_top_new,
-       .ce[2] = gp102_ce_new,
+       .ce[0] = gp100_ce_new,
        .dma = gf119_dma_new,
        .fifo = gp10b_fifo_new,
        .gr = gp10b_gr_new,
@@ -2397,6 +2418,7 @@ nv13b_chipset = {
 static const struct nvkm_device_chip
 nv140_chipset = {
        .name = "GV100",
+       .acr = gp108_acr_new,
        .bar = gm107_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2414,7 +2436,6 @@ nv140_chipset = {
        .mmu = gv100_mmu_new,
        .pci = gp100_pci_new,
        .pmu = gp102_pmu_new,
-       .secboot = gp108_secboot_new,
        .therm = gp100_therm_new,
        .timer = gk20a_timer_new,
        .top = gk104_top_new,
@@ -2431,13 +2452,17 @@ nv140_chipset = {
        .dma = gv100_dma_new,
        .fifo = gv100_fifo_new,
        .gr = gv100_gr_new,
-       .nvdec[0] = gp102_nvdec_new,
-       .sec2 = gp102_sec2_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
+       .nvenc[1] = gm107_nvenc_new,
+       .nvenc[2] = gm107_nvenc_new,
+       .sec2 = gp108_sec2_new,
 };
 
 static const struct nvkm_device_chip
 nv162_chipset = {
        .name = "TU102",
+       .acr = tu102_acr_new,
        .bar = tu102_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2466,13 +2491,16 @@ nv162_chipset = {
        .disp = tu102_disp_new,
        .dma = gv100_dma_new,
        .fifo = tu102_fifo_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .gr = tu102_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sec2 = tu102_sec2_new,
 };
 
 static const struct nvkm_device_chip
 nv164_chipset = {
        .name = "TU104",
+       .acr = tu102_acr_new,
        .bar = tu102_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2501,13 +2529,17 @@ nv164_chipset = {
        .disp = tu102_disp_new,
        .dma = gv100_dma_new,
        .fifo = tu102_fifo_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .gr = tu102_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvdec[1] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sec2 = tu102_sec2_new,
 };
 
 static const struct nvkm_device_chip
 nv166_chipset = {
        .name = "TU106",
+       .acr = tu102_acr_new,
        .bar = tu102_bar_new,
        .bios = nvkm_bios_new,
        .bus = gf100_bus_new,
@@ -2536,7 +2568,11 @@ nv166_chipset = {
        .disp = tu102_disp_new,
        .dma = gv100_dma_new,
        .fifo = tu102_fifo_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .gr = tu102_gr_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvdec[1] = gm107_nvdec_new,
+       .nvdec[2] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sec2 = tu102_sec2_new,
 };
 
@@ -2571,7 +2607,8 @@ nv167_chipset = {
        .disp = tu102_disp_new,
        .dma = gv100_dma_new,
        .fifo = tu102_fifo_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sec2 = tu102_sec2_new,
 };
 
@@ -2606,7 +2643,8 @@ nv168_chipset = {
        .disp = tu102_disp_new,
        .dma = gv100_dma_new,
        .fifo = tu102_fifo_new,
-       .nvdec[0] = gp102_nvdec_new,
+       .nvdec[0] = gm107_nvdec_new,
+       .nvenc[0] = gm107_nvenc_new,
        .sec2 = tu102_sec2_new,
 };
 
@@ -2638,6 +2676,7 @@ nvkm_device_subdev(struct nvkm_device *device, int index)
 
        switch (index) {
 #define _(n,p,m) case NVKM_SUBDEV_##n: if (p) return (m); break
+       _(ACR     , device->acr     , &device->acr->subdev);
        _(BAR     , device->bar     , &device->bar->subdev);
        _(VBIOS   , device->bios    , &device->bios->subdev);
        _(BUS     , device->bus     , &device->bus->subdev);
@@ -2658,7 +2697,6 @@ nvkm_device_subdev(struct nvkm_device *device, int index)
        _(MXM     , device->mxm     ,  device->mxm);
        _(PCI     , device->pci     , &device->pci->subdev);
        _(PMU     , device->pmu     , &device->pmu->subdev);
-       _(SECBOOT , device->secboot , &device->secboot->subdev);
        _(THERM   , device->therm   , &device->therm->subdev);
        _(TIMER   , device->timer   , &device->timer->subdev);
        _(TOP     , device->top     , &device->top->subdev);
@@ -2703,9 +2741,9 @@ nvkm_device_engine(struct nvkm_device *device, int index)
        _(MSPDEC , device->mspdec  ,  device->mspdec);
        _(MSPPP  , device->msppp   ,  device->msppp);
        _(MSVLD  , device->msvld   ,  device->msvld);
-       _(NVENC0 , device->nvenc[0],  device->nvenc[0]);
-       _(NVENC1 , device->nvenc[1],  device->nvenc[1]);
-       _(NVENC2 , device->nvenc[2],  device->nvenc[2]);
+       _(NVENC0 , device->nvenc[0], &device->nvenc[0]->engine);
+       _(NVENC1 , device->nvenc[1], &device->nvenc[1]->engine);
+       _(NVENC2 , device->nvenc[2], &device->nvenc[2]->engine);
        _(NVDEC0 , device->nvdec[0], &device->nvdec[0]->engine);
        _(NVDEC1 , device->nvdec[1], &device->nvdec[1]->engine);
        _(NVDEC2 , device->nvdec[2], &device->nvdec[2]->engine);
@@ -3144,6 +3182,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
        }                                                                      \
        break
                switch (i) {
+               _(NVKM_SUBDEV_ACR     ,      acr);
                _(NVKM_SUBDEV_BAR     ,      bar);
                _(NVKM_SUBDEV_VBIOS   ,     bios);
                _(NVKM_SUBDEV_BUS     ,      bus);
@@ -3164,7 +3203,6 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
                _(NVKM_SUBDEV_MXM     ,      mxm);
                _(NVKM_SUBDEV_PCI     ,      pci);
                _(NVKM_SUBDEV_PMU     ,      pmu);
-               _(NVKM_SUBDEV_SECBOOT ,  secboot);
                _(NVKM_SUBDEV_THERM   ,    therm);
                _(NVKM_SUBDEV_TIMER   ,    timer);
                _(NVKM_SUBDEV_TOP     ,      top);
index d8be2f7..54eab5e 100644
@@ -3,6 +3,7 @@
 #define __NVKM_DEVICE_PRIV_H__
 #include <core/device.h>
 
+#include <subdev/acr.h>
 #include <subdev/bar.h>
 #include <subdev/bios.h>
 #include <subdev/bus.h>
@@ -27,7 +28,6 @@
 #include <subdev/timer.h>
 #include <subdev/top.h>
 #include <subdev/volt.h>
-#include <subdev/secboot.h>
 
 #include <engine/bsp.h>
 #include <engine/ce.h>
index 0e372a1..d0d52c1 100644
@@ -52,18 +52,18 @@ nvkm_device_tegra_power_up(struct nvkm_device_tegra *tdev)
        clk_set_rate(tdev->clk_pwr, 204000000);
        udelay(10);
 
-       reset_control_assert(tdev->rst);
-       udelay(10);
-
        if (!tdev->pdev->dev.pm_domain) {
+               reset_control_assert(tdev->rst);
+               udelay(10);
+
                ret = tegra_powergate_remove_clamping(TEGRA_POWERGATE_3D);
                if (ret)
                        goto err_clamp;
                udelay(10);
-       }
 
-       reset_control_deassert(tdev->rst);
-       udelay(10);
+               reset_control_deassert(tdev->rst);
+               udelay(10);
+       }
 
        return 0;
 
@@ -279,6 +279,7 @@ nvkm_device_tegra_new(const struct nvkm_device_tegra_func *func,
                      struct nvkm_device **pdevice)
 {
        struct nvkm_device_tegra *tdev;
+       unsigned long rate;
        int ret;
 
        if (!(tdev = kzalloc(sizeof(*tdev), GFP_KERNEL)))
@@ -307,6 +308,17 @@ nvkm_device_tegra_new(const struct nvkm_device_tegra_func *func,
                goto free;
        }
 
+       rate = clk_get_rate(tdev->clk);
+       if (rate == 0) {
+               ret = clk_set_rate(tdev->clk, ULONG_MAX);
+               if (ret < 0)
+                       goto free;
+
+               rate = clk_get_rate(tdev->clk);
+
+               dev_dbg(&pdev->dev, "GPU clock set to %lu\n", rate);
+       }
+
        if (func->require_ref_clk)
                tdev->clk_ref = devm_clk_get(&pdev->dev, "ref");
        if (IS_ERR(tdev->clk_ref)) {
index 818d21b..3800aeb 100644
@@ -365,7 +365,7 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
         * and it's better to have a failed modeset than that.
         */
        for (cfg = nvkm_dp_rates; cfg->rate; cfg++) {
-               if (cfg->nr <= outp_nr && cfg->nr <= outp_bw) {
+               if (cfg->nr <= outp_nr && cfg->bw <= outp_bw) {
                        /* Try to respect sink limits too when selecting
                         * lowest link configuration.
                         */
index 73724a8..558c86f 100644
@@ -36,8 +36,10 @@ nvkm-y += nvkm/engine/gr/gp100.o
 nvkm-y += nvkm/engine/gr/gp102.o
 nvkm-y += nvkm/engine/gr/gp104.o
 nvkm-y += nvkm/engine/gr/gp107.o
+nvkm-y += nvkm/engine/gr/gp108.o
 nvkm-y += nvkm/engine/gr/gp10b.o
 nvkm-y += nvkm/engine/gr/gv100.o
+nvkm-y += nvkm/engine/gr/tu102.o
 
 nvkm-y += nvkm/engine/gr/ctxnv40.o
 nvkm-y += nvkm/engine/gr/ctxnv50.o
@@ -60,3 +62,4 @@ nvkm-y += nvkm/engine/gr/ctxgp102.o
 nvkm-y += nvkm/engine/gr/ctxgp104.o
 nvkm-y += nvkm/engine/gr/ctxgp107.o
 nvkm-y += nvkm/engine/gr/ctxgv100.o
+nvkm-y += nvkm/engine/gr/ctxtu102.o
index 85f2d1e..2979157 100644
@@ -1324,10 +1324,8 @@ gf100_grctx_generate_sm_id(struct gf100_gr *gr, int gpc, int tpc, int sm)
 void
 gf100_grctx_generate_floorsweep(struct gf100_gr *gr)
 {
-       struct nvkm_device *device = gr->base.engine.subdev.device;
        const struct gf100_grctx_func *func = gr->func->grctx;
-       int gpc, sm, i, j;
-       u32 data;
+       int sm;
 
        for (sm = 0; sm < gr->sm_nr; sm++) {
                func->sm_id(gr, gr->sm[sm].gpc, gr->sm[sm].tpc, sm);
@@ -1335,12 +1333,9 @@ gf100_grctx_generate_floorsweep(struct gf100_gr *gr)
                        func->tpc_nr(gr, gr->sm[sm].gpc);
        }
 
-       for (gpc = 0, i = 0; i < 4; i++) {
-               for (data = 0, j = 0; j < 8 && gpc < gr->gpc_nr; j++, gpc++)
-                       data |= gr->tpc_nr[gpc] << (j * 4);
-               nvkm_wr32(device, 0x406028 + (i * 4), data);
-               nvkm_wr32(device, 0x405870 + (i * 4), data);
-       }
+       gf100_gr_init_num_tpc_per_gpc(gr, false, true);
+       if (!func->skip_pd_num_tpc_per_gpc)
+               gf100_gr_init_num_tpc_per_gpc(gr, true, false);
 
        if (func->r4060a8)
                func->r4060a8(gr);
@@ -1374,7 +1369,7 @@ gf100_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
 
        nvkm_mc_unk260(device, 0);
 
-       if (!gr->fuc_sw_ctx) {
+       if (!gr->sw_ctx) {
                gf100_gr_mmio(gr, grctx->hub);
                gf100_gr_mmio(gr, grctx->gpc_0);
                gf100_gr_mmio(gr, grctx->zcull);
@@ -1382,7 +1377,7 @@ gf100_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
                gf100_gr_mmio(gr, grctx->tpc);
                gf100_gr_mmio(gr, grctx->ppc);
        } else {
-               gf100_gr_mmio(gr, gr->fuc_sw_ctx);
+               gf100_gr_mmio(gr, gr->sw_ctx);
        }
 
        gf100_gr_wait_idle(gr);
@@ -1401,8 +1396,8 @@ gf100_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
        gf100_gr_wait_idle(gr);
 
        if (grctx->r400088) grctx->r400088(gr, false);
-       if (gr->fuc_bundle)
-               gf100_gr_icmd(gr, gr->fuc_bundle);
+       if (gr->bundle)
+               gf100_gr_icmd(gr, gr->bundle);
        else
                gf100_gr_icmd(gr, grctx->icmd);
        if (grctx->sw_veid_bundle_init)
@@ -1411,8 +1406,8 @@ gf100_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
 
        nvkm_wr32(device, 0x404154, idle_timeout);
 
-       if (gr->fuc_method)
-               gf100_gr_mthd(gr, gr->fuc_method);
+       if (gr->method)
+               gf100_gr_mthd(gr, gr->method);
        else
                gf100_gr_mthd(gr, grctx->mthd);
        nvkm_mc_unk260(device, 1);
@@ -1431,6 +1426,8 @@ gf100_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
                grctx->r419a3c(gr);
        if (grctx->r408840)
                grctx->r408840(gr);
+       if (grctx->r419c0c)
+               grctx->r419c0c(gr);
 }
 
 #define CB_RESERVED 0x80000
index 478b472..32bbddc 100644
@@ -57,6 +57,7 @@ struct gf100_grctx_func {
        /* floorsweeping */
        void (*sm_id)(struct gf100_gr *, int gpc, int tpc, int sm);
        void (*tpc_nr)(struct gf100_gr *, int gpc);
+       bool skip_pd_num_tpc_per_gpc;
        void (*r4060a8)(struct gf100_gr *);
        void (*rop_mapping)(struct gf100_gr *);
        void (*alpha_beta_tables)(struct gf100_gr *);
@@ -76,6 +77,7 @@ struct gf100_grctx_func {
        void (*r418e94)(struct gf100_gr *);
        void (*r419a3c)(struct gf100_gr *);
        void (*r408840)(struct gf100_gr *);
+       void (*r419c0c)(struct gf100_gr *);
 };
 
 extern const struct gf100_grctx_func gf100_grctx;
@@ -153,6 +155,14 @@ extern const struct gf100_grctx_func gp107_grctx;
 
 extern const struct gf100_grctx_func gv100_grctx;
 
+extern const struct gf100_grctx_func tu102_grctx;
+void gv100_grctx_unkn88c(struct gf100_gr *, bool);
+void gv100_grctx_generate_unkn(struct gf100_gr *);
+extern const struct gf100_gr_init gv100_grctx_init_sw_veid_bundle_init_0[];
+void gv100_grctx_generate_attrib(struct gf100_grctx *);
+void gv100_grctx_generate_rop_mapping(struct gf100_gr *);
+void gv100_grctx_generate_r400088(struct gf100_gr *, bool);
+
 /* context init value lists */
 
 extern const struct gf100_gr_pack gf100_grctx_pack_icmd[];
index 896d473..c0d36bc 100644
@@ -32,7 +32,7 @@ gk20a_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
        u32 idle_timeout;
        int i;
 
-       gf100_gr_mmio(gr, gr->fuc_sw_ctx);
+       gf100_gr_mmio(gr, gr->sw_ctx);
 
        gf100_gr_wait_idle(gr);
 
@@ -56,10 +56,10 @@ gk20a_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
        nvkm_wr32(device, 0x404154, idle_timeout);
        gf100_gr_wait_idle(gr);
 
-       gf100_gr_mthd(gr, gr->fuc_method);
+       gf100_gr_mthd(gr, gr->method);
        gf100_gr_wait_idle(gr);
 
-       gf100_gr_icmd(gr, gr->fuc_bundle);
+       gf100_gr_icmd(gr, gr->bundle);
        grctx->pagepool(info);
        grctx->bundle(info);
 }
index a1d9e11..6b92f8a 100644
@@ -29,7 +29,7 @@ gm20b_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
        u32 idle_timeout;
        int i, tmp;
 
-       gf100_gr_mmio(gr, gr->fuc_sw_ctx);
+       gf100_gr_mmio(gr, gr->sw_ctx);
 
        gf100_gr_wait_idle(gr);
 
@@ -59,10 +59,10 @@ gm20b_grctx_generate_main(struct gf100_gr *gr, struct gf100_grctx *info)
        nvkm_wr32(device, 0x404154, idle_timeout);
        gf100_gr_wait_idle(gr);
 
-       gf100_gr_mthd(gr, gr->fuc_method);
+       gf100_gr_mthd(gr, gr->method);
        gf100_gr_wait_idle(gr);
 
-       gf100_gr_icmd(gr, gr->fuc_bundle);
+       gf100_gr_icmd(gr, gr->bundle);
        grctx->pagepool(info);
        grctx->bundle(info);
 }
index 0990765..39553d5 100644
@@ -25,7 +25,7 @@
  * PGRAPH context implementation
  ******************************************************************************/
 
-static const struct gf100_gr_init
+const struct gf100_gr_init
 gv100_grctx_init_sw_veid_bundle_init_0[] = {
        { 0x00001000, 64, 0x00100000, 0x00000008 },
        { 0x00000941, 64, 0x00100000, 0x00000000 },
@@ -58,7 +58,7 @@ gv100_grctx_pack_sw_veid_bundle_init[] = {
        {}
 };
 
-static void
+void
 gv100_grctx_generate_attrib(struct gf100_grctx *info)
 {
        struct gf100_gr *gr = info->gr;
@@ -67,14 +67,14 @@ gv100_grctx_generate_attrib(struct gf100_grctx *info)
        const u32 attrib = grctx->attrib_nr;
        const u32   gfxp = grctx->gfxp_nr;
        const int s = 12;
-       const int max_batches = 0xffff;
        u32 size = grctx->alpha_nr_max * gr->tpc_total;
        u32 ao = 0;
        u32 bo = ao + size;
        int gpc, ppc, b, n = 0;
 
-       size += grctx->gfxp_nr * gr->tpc_total;
-       size = ((size * 0x20) + 128) & ~127;
+       for (gpc = 0; gpc < gr->gpc_nr; gpc++)
+               size += grctx->gfxp_nr * gr->ppc_nr[gpc] * gr->ppc_tpc_max;
+       size = ((size * 0x20) + 127) & ~127;
        b = mmio_vram(info, size, (1 << s), false);
 
        mmio_refn(info, 0x418810, 0x80000000, s, b);
@@ -84,13 +84,12 @@ gv100_grctx_generate_attrib(struct gf100_grctx *info)
        mmio_wr32(info, 0x419e04, 0x80000000 | size >> 7);
        mmio_wr32(info, 0x405830, attrib);
        mmio_wr32(info, 0x40585c, alpha);
-       mmio_wr32(info, 0x4064c4, ((alpha / 4) << 16) | max_batches);
 
        for (gpc = 0; gpc < gr->gpc_nr; gpc++) {
                for (ppc = 0; ppc < gr->ppc_nr[gpc]; ppc++, n++) {
                        const u32 as =  alpha * gr->ppc_tpc_nr[gpc][ppc];
-                       const u32 bs = attrib * gr->ppc_tpc_nr[gpc][ppc];
-                       const u32 gs =   gfxp * gr->ppc_tpc_nr[gpc][ppc];
+                       const u32 bs = attrib * gr->ppc_tpc_max;
+                       const u32 gs =   gfxp * gr->ppc_tpc_max;
                        const u32 u = 0x418ea0 + (n * 0x04);
                        const u32 o = PPC_UNIT(gpc, ppc, 0);
                        if (!(gr->ppc_mask[gpc] & (1 << ppc)))
@@ -110,7 +109,7 @@ gv100_grctx_generate_attrib(struct gf100_grctx *info)
        mmio_wr32(info, 0x41befc, 0x00000100);
 }
 
-static void
+void
 gv100_grctx_generate_rop_mapping(struct gf100_gr *gr)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
@@ -147,7 +146,7 @@ gv100_grctx_generate_rop_mapping(struct gf100_gr *gr)
                                     gr->screen_tile_row_offset);
 }
 
-static void
+void
 gv100_grctx_generate_r400088(struct gf100_gr *gr, bool on)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
@@ -163,7 +162,7 @@ gv100_grctx_generate_sm_id(struct gf100_gr *gr, int gpc, int tpc, int sm)
        nvkm_wr32(device, TPC_UNIT(gpc, tpc, 0x088), sm);
 }
 
-static void
+void
 gv100_grctx_generate_unkn(struct gf100_gr *gr)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
@@ -174,7 +173,7 @@ gv100_grctx_generate_unkn(struct gf100_gr *gr)
        nvkm_mask(device, 0x419c00, 0x00000008, 0x00000008);
 }
 
-static void
+void
 gv100_grctx_unkn88c(struct gf100_gr *gr, bool on)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxtu102.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxtu102.c
new file mode 100644
index 0000000..2299ca0
--- /dev/null
@@ -0,0 +1,95 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "ctxgf100.h"
+
+static void
+tu102_grctx_generate_r419c0c(struct gf100_gr *gr)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+       nvkm_mask(device, 0x419c0c, 0x80000000, 0x80000000);
+       nvkm_mask(device, 0x40584c, 0x00000008, 0x00000000);
+       nvkm_mask(device, 0x400080, 0x00000000, 0x00000000);
+}
+
+static void
+tu102_grctx_generate_sm_id(struct gf100_gr *gr, int gpc, int tpc, int sm)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+       nvkm_wr32(device, TPC_UNIT(gpc, tpc, 0x608), sm);
+       nvkm_wr32(device, TPC_UNIT(gpc, tpc, 0x088), sm);
+}
+
+static const struct gf100_gr_init
+tu102_grctx_init_unknown_bundle_init_0[] = {
+       { 0x00001000,  1, 0x00000001, 0x00000004 },
+       { 0x00002020, 64, 0x00000001, 0x00000000 },
+       { 0x0001e100,  1, 0x00000001, 0x00000001 },
+       {}
+};
+
+static const struct gf100_gr_pack
+tu102_grctx_pack_sw_veid_bundle_init[] = {
+       { gv100_grctx_init_sw_veid_bundle_init_0 },
+       { tu102_grctx_init_unknown_bundle_init_0 },
+       {}
+};
+
+static void
+tu102_grctx_generate_attrib(struct gf100_grctx *info)
+{
+       const u64 size = 0x80000; /*XXX: educated guess */
+       const int s = 8;
+       const int b = mmio_vram(info, size, (1 << s), true);
+
+       gv100_grctx_generate_attrib(info);
+
+       mmio_refn(info, 0x408070, 0x00000000, s, b);
+       mmio_wr32(info, 0x408074, size >> s); /*XXX: guess */
+       mmio_refn(info, 0x419034, 0x00000000, s, b);
+       mmio_wr32(info, 0x408078, 0x00000000);
+}
+
+const struct gf100_grctx_func
+tu102_grctx = {
+       .unkn88c = gv100_grctx_unkn88c,
+       .main = gf100_grctx_generate_main,
+       .unkn = gv100_grctx_generate_unkn,
+       .sw_veid_bundle_init = tu102_grctx_pack_sw_veid_bundle_init,
+       .bundle = gm107_grctx_generate_bundle,
+       .bundle_size = 0x3000,
+       .bundle_min_gpm_fifo_depth = 0x180,
+       .bundle_token_limit = 0xa80,
+       .pagepool = gp100_grctx_generate_pagepool,
+       .pagepool_size = 0x20000,
+       .attrib = tu102_grctx_generate_attrib,
+       .attrib_nr_max = 0x800,
+       .attrib_nr = 0x700,
+       .alpha_nr_max = 0xc00,
+       .alpha_nr = 0x800,
+       .gfxp_nr = 0xfa8,
+       .sm_id = tu102_grctx_generate_sm_id,
+       .skip_pd_num_tpc_per_gpc = true,
+       .rop_mapping = gv100_grctx_generate_rop_mapping,
+       .r406500 = gm200_grctx_generate_r406500,
+       .r400088 = gv100_grctx_generate_r400088,
+       .r419c0c = tu102_grctx_generate_r419c0c,
+};
index c24f35a..ae2d5b6 100644
@@ -441,7 +441,7 @@ static uint32_t gk208_grhub_code[] = {
        0x020014fe,
        0x12004002,
        0xbd0002f6,
-       0x05c94104,
+       0x05ca4104,
        0xbd0010fe,
        0x07004024,
        0xbd0002f6,
@@ -460,423 +460,423 @@ static uint32_t gk208_grhub_code[] = {
        0x01039204,
        0x03090080,
        0xbd0003f6,
-       0x87044204,
-       0xf6040040,
-       0x04bd0002,
-       0x00400402,
-       0x0002f603,
-       0x31f404bd,
-       0x96048e10,
-       0x00657e40,
-       0xc7feb200,
-       0x01b590f1,
-       0x1ff4f003,
-       0x01020fb5,
-       0x041fbb01,
-       0x800112b6,
-       0xf6010300,
-       0x04bd0001,
-       0x01040080,
+       0x87048204,
+       0x04004000,
+       0xbd0002f6,
+       0x40040204,
+       0x02f60300,
+       0xf404bd00,
+       0x048e1031,
+       0x657e4096,
+       0xfeb20000,
+       0xb590f1c7,
+       0xf4f00301,
+       0x020fb51f,
+       0x1fbb0101,
+       0x0112b604,
+       0x01030080,
        0xbd0001f6,
-       0x01004104,
-       0xac7e020f,
-       0xbb7e0006,
-       0x100f0006,
-       0x0006fd7e,
-       0x98000e98,
-       0x207e010f,
-       0x14950001,
-       0xc0008008,
-       0x0004f601,
-       0x008004bd,
-       0x04f601c1,
-       0xb704bd00,
-       0xbb130030,
-       0xf5b6001f,
-       0xd3008002,
-       0x000ff601,
-       0x15b604bd,
-       0x0110b608,
-       0xb20814b6,
-       0x02687e1f,
-       0x001fbb00,
-       0x84020398,
-/* 0x041f: init_gpc */
-       0xb8502000,
-       0x0008044e,
-       0x8f7e1fb2,
+       0x04008004,
+       0x0001f601,
+       0x004104bd,
+       0x7e020f01,
+       0x7e0006ad,
+       0x0f0006bc,
+       0x06fe7e10,
+       0x000e9800,
+       0x7e010f98,
+       0x95000120,
+       0x00800814,
+       0x04f601c0,
+       0x8004bd00,
+       0xf601c100,
+       0x04bd0004,
+       0x130030b7,
+       0xb6001fbb,
+       0x008002f5,
+       0x0ff601d3,
+       0xb604bd00,
+       0x10b60815,
+       0x0814b601,
+       0x687e1fb2,
+       0x1fbb0002,
+       0x02039800,
+       0x50200084,
+/* 0x0420: init_gpc */
+       0x08044eb8,
+       0x7e1fb200,
+       0xb800008f,
+       0x00010c4e,
+       0x8f7ef4bd,
        0x4eb80000,
-       0xbd00010c,
-       0x008f7ef4,
-       0x044eb800,
-       0x8f7e0001,
+       0x7e000104,
+       0xb800008f,
+       0x0001004e,
+       0x8f7e020f,
        0x4eb80000,
-       0x0f000100,
-       0x008f7e02,
-       0x004eb800,
-/* 0x044e: init_gpc_wait */
+/* 0x044f: init_gpc_wait */
+       0x7e000800,
+       0xc8000065,
+       0x0bf41fff,
+       0x044eb8f9,
        0x657e0008,
-       0xffc80000,
-       0xf90bf41f,
-       0x08044eb8,
-       0x00657e00,
-       0x001fbb00,
-       0x800040b7,
-       0xf40132b6,
-       0x000fb41b,
-       0x0006fd7e,
-       0xac7e000f,
-       0x00800006,
-       0x01f60201,
-       0xbd04bd00,
-       0x1f19f014,
-       0x02300080,
-       0xbd0001f6,
-/* 0x0491: wait */
-       0x0028f404,
-/* 0x0497: main */
-       0x0d0031f4,
-       0x00377e10,
-       0xf401f400,
-       0x4001e4b1,
-       0x00c71bf5,
-       0x99f094bd,
-       0x37008004,
-       0x0009f602,
-       0x008104bd,
-       0x11cf02c0,
-       0xc1008200,
-       0x0022cf02,
-       0xf41f13c8,
-       0x23c8770b,
-       0x550bf41f,
-       0x12b220f9,
-       0x99f094bd,
-       0x37008007,
-       0x0009f602,
-       0x32f404bd,
-       0x0231f401,
-       0x0008807e,
-       0x99f094bd,
-       0x17008007,
-       0x0009f602,
-       0x20fc04bd,
-       0x99f094bd,
-       0x37008006,
-       0x0009f602,
-       0x31f404bd,
-       0x08807e01,
+       0x1fbb0000,
+       0x0040b700,
+       0x0132b680,
+       0x0fb41bf4,
+       0x06fe7e00,
+       0x7e000f00,
+       0x800006ad,
+       0xf6020100,
+       0x04bd0001,
+       0x19f014bd,
+       0x3000801f,
+       0x0001f602,
+/* 0x0492: wait */
+       0x28f404bd,
+       0x0031f400,
+/* 0x0498: main */
+       0x377e100d,
+       0x01f40000,
+       0x01e4b1f4,
+       0xc71bf540,
        0xf094bd00,
-       0x00800699,
+       0x00800499,
+       0x09f60237,
+       0x8104bd00,
+       0xcf02c000,
+       0x00820011,
+       0x22cf02c1,
+       0x1f13c800,
+       0xc8770bf4,
+       0x0bf41f23,
+       0xb220f955,
+       0xf094bd12,
+       0x00800799,
+       0x09f60237,
+       0xf404bd00,
+       0x31f40132,
+       0x08817e02,
+       0xf094bd00,
+       0x00800799,
        0x09f60217,
+       0xfc04bd00,
+       0xf094bd20,
+       0x00800699,
+       0x09f60237,
        0xf404bd00,
-/* 0x0522: chsw_prev_no_next */
-       0x20f92f0e,
-       0x32f412b2,
-       0x0232f401,
-       0x0008807e,
-       0x008020fc,
-       0x02f602c0,
+       0x817e0131,
+       0x94bd0008,
+       0x800699f0,
+       0xf6021700,
+       0x04bd0009,
+/* 0x0523: chsw_prev_no_next */
+       0xf92f0ef4,
+       0xf412b220,
+       0x32f40132,
+       0x08817e02,
+       0x8020fc00,
+       0xf602c000,
+       0x04bd0002,
+/* 0x053f: chsw_no_prev */
+       0xc8130ef4,
+       0x0bf41f23,
+       0x0131f40d,
+       0x7e0232f4,
+/* 0x054f: chsw_done */
+       0x02000881,
+       0xc3008001,
+       0x0002f602,
+       0x94bd04bd,
+       0x800499f0,
+       0xf6021700,
+       0x04bd0009,
+       0xff300ef5,
+/* 0x056c: main_not_ctx_switch */
+       0xf401e4b0,
+       0xf2b20c1b,
+       0x0008217e,
+/* 0x057b: main_not_ctx_chan */
+       0xb0400ef4,
+       0x1bf402e4,
+       0xf094bd2c,
+       0x00800799,
+       0x09f60237,
        0xf404bd00,
-/* 0x053e: chsw_no_prev */
-       0x23c8130e,
-       0x0d0bf41f,
-       0xf40131f4,
-       0x807e0232,
-/* 0x054e: chsw_done */
-       0x01020008,
-       0x02c30080,
-       0xbd0002f6,
-       0xf094bd04,
-       0x00800499,
+       0x32f40132,
+       0x08817e02,
+       0xf094bd00,
+       0x00800799,
        0x09f60217,
-       0xf504bd00,
-/* 0x056b: main_not_ctx_switch */
-       0xb0ff300e,
-       0x1bf401e4,
-       0x7ef2b20c,
-       0xf4000820,
-/* 0x057a: main_not_ctx_chan */
-       0xe4b0400e,
-       0x2c1bf402,
-       0x99f094bd,
-       0x37008007,
-       0x0009f602,
-       0x32f404bd,
-       0x0232f401,
-       0x0008807e,
-       0x99f094bd,
-       0x17008007,
-       0x0009f602,
-       0x0ef404bd,
-/* 0x05a9: main_not_ctx_save */
-       0x10ef9411,
-       0x7e01f5f0,
-       0xf50002f8,
-/* 0x05b7: main_done */
-       0xbdfee40e,
-       0x1f29f024,
-       0x02300080,
-       0xbd0002f6,
-       0xd20ef504,
-/* 0x05c9: ih */
-       0xf900f9fe,
-       0x0188fe80,
-       0x90f980f9,
-       0xb0f9a0f9,
-       0xe0f9d0f9,
-       0x04bdf0f9,
-       0xcf02004a,
-       0xabc400aa,
-       0x230bf404,
-       0x004e100d,
-       0x00eecf1a,
-       0xcf19004f,
-       0x047e00ff,
-       0xb0b70000,
-       0x010e0400,
-       0xf61d0040,
-       0x04bd000e,
-/* 0x060c: ih_no_fifo */
-       0x0100abe4,
-       0x0d0c0bf4,
-       0x40014e10,
-       0x0000047e,
-/* 0x061c: ih_no_ctxsw */
-       0x0400abe4,
-       0x8e560bf4,
-       0x7e400708,
+       0xf404bd00,
+/* 0x05aa: main_not_ctx_save */
+       0xef94110e,
+       0x01f5f010,
+       0x0002f87e,
+       0xfee40ef5,
+/* 0x05b8: main_done */
+       0x29f024bd,
+       0x3000801f,
+       0x0002f602,
+       0x0ef504bd,
+/* 0x05ca: ih */
+       0x00f9fed2,
+       0x88fe80f9,
+       0xf980f901,
+       0xf9a0f990,
+       0xf9d0f9b0,
+       0xbdf0f9e0,
+       0x02004a04,
+       0xc400aacf,
+       0x0bf404ab,
+       0x4e100d23,
+       0xeecf1a00,
+       0x19004f00,
+       0x7e00ffcf,
+       0xb7000004,
+       0x0e0400b0,
+       0x1d004001,
+       0xbd000ef6,
+/* 0x060d: ih_no_fifo */
+       0x00abe404,
+       0x0c0bf401,
+       0x014e100d,
+       0x00047e40,
+/* 0x061d: ih_no_ctxsw */
+       0x00abe400,
+       0x560bf404,
+       0x4007088e,
+       0x0000657e,
+       0x0080ffb2,
+       0x0ff60204,
+       0x8e04bd00,
+       0x7e400704,
        0xb2000065,
-       0x040080ff,
+       0x030080ff,
        0x000ff602,
-       0x048e04bd,
-       0x657e4007,
-       0xffb20000,
-       0x02030080,
-       0xbd000ff6,
-       0x50fec704,
-       0x8f02ee94,
-       0xbb400700,
-       0x657e00ef,
-       0x00800000,
-       0x0ff60202,
+       0xfec704bd,
+       0x02ee9450,
+       0x4007008f,
+       0x7e00efbb,
+       0x80000065,
+       0xf6020200,
+       0x04bd000f,
+       0xf87e030f,
+       0x004b0002,
+       0x8ebfb201,
+       0x7e400144,
+/* 0x0677: ih_no_fwmthd */
+       0x4b00008f,
+       0xb0bd0504,
+       0xf4b4abff,
+       0x00800c0b,
+       0x0bf60307,
+/* 0x068b: ih_no_other */
+       0x4004bd00,
+       0x0af60100,
+       0xfc04bd00,
+       0xfce0fcf0,
+       0xfcb0fcd0,
+       0xfc90fca0,
+       0x0088fe80,
+       0x00fc80fc,
+       0xf80032f4,
+/* 0x06ad: ctx_4170s */
+       0x10f5f001,
+       0x708effb2,
+       0x8f7e4041,
+       0x00f80000,
+/* 0x06bc: ctx_4170w */
+       0x4041708e,
+       0x0000657e,
+       0xf4f0ffb2,
+       0xf31bf410,
+/* 0x06ce: ctx_redswitch */
+       0x004e00f8,
+       0x40e5f002,
+       0xf020e5f0,
+       0x008010e5,
+       0x0ef60185,
        0x0f04bd00,
-       0x02f87e03,
-       0x01004b00,
-       0x448ebfb2,
-       0x8f7e4001,
-/* 0x0676: ih_no_fwmthd */
-       0x044b0000,
-       0xffb0bd05,
-       0x0bf4b4ab,
-       0x0700800c,
-       0x000bf603,
-/* 0x068a: ih_no_other */
-       0x004004bd,
-       0x000af601,
-       0xf0fc04bd,
-       0xd0fce0fc,
-       0xa0fcb0fc,
-       0x80fc90fc,
-       0xfc0088fe,
-       0xf400fc80,
-       0x01f80032,
-/* 0x06ac: ctx_4170s */
-       0xb210f5f0,
-       0x41708eff,
+/* 0x06e5: ctx_redswitch_delay */
+       0x01f2b608,
+       0xf1fd1bf4,
+       0xf10400e5,
+       0x800100e5,
+       0xf6018500,
+       0x04bd000e,
+/* 0x06fe: ctx_86c */
+       0x008000f8,
+       0x0ff60223,
+       0xb204bd00,
+       0x8a148eff,
        0x008f7e40,
-/* 0x06bb: ctx_4170w */
-       0x8e00f800,
-       0x7e404170,
-       0xb2000065,
-       0x10f4f0ff,
-       0xf8f31bf4,
-/* 0x06cd: ctx_redswitch */
-       0x02004e00,
-       0xf040e5f0,
-       0xe5f020e5,
-       0x85008010,
-       0x000ef601,
-       0x080f04bd,
-/* 0x06e4: ctx_redswitch_delay */
-       0xf401f2b6,
-       0xe5f1fd1b,
-       0xe5f10400,
-       0x00800100,
-       0x0ef60185,
-       0xf804bd00,
-/* 0x06fd: ctx_86c */
-       0x23008000,
+       0x8effb200,
+       0x7e41a88c,
+       0xf800008f,
+/* 0x071d: ctx_mem */
+       0x84008000,
        0x000ff602,
-       0xffb204bd,
-       0x408a148e,
-       0x00008f7e,
-       0x8c8effb2,
-       0x8f7e41a8,
-       0x00f80000,
-/* 0x071c: ctx_mem */
-       0x02840080,
-       0xbd000ff6,
-/* 0x0725: ctx_mem_wait */
-       0x84008f04,
-       0x00ffcf02,
-       0xf405fffd,
-       0x00f8f61b,
-/* 0x0734: ctx_load */
-       0x99f094bd,
-       0x37008005,
-       0x0009f602,
-       0x0c0a04bd,
-       0x0000b87e,
-       0x0080f4bd,
-       0x0ff60289,
-       0x8004bd00,
-       0xf602c100,
-       0x04bd0002,
-       0x02830080,
+/* 0x0726: ctx_mem_wait */
+       0x008f04bd,
+       0xffcf0284,
+       0x05fffd00,
+       0xf8f61bf4,
+/* 0x0735: ctx_load */
+       0xf094bd00,
+       0x00800599,
+       0x09f60237,
+       0x0a04bd00,
+       0x00b87e0c,
+       0x80f4bd00,
+       0xf6028900,
+       0x04bd000f,
+       0x02c10080,
        0xbd0002f6,
-       0x7e070f04,
-       0x8000071c,
-       0xf602c000,
-       0x04bd0002,
-       0xf0000bfe,
-       0x24b61f2a,
-       0x0220b604,
-       0x99f094bd,
-       0x37008008,
-       0x0009f602,
-       0x008004bd,
-       0x02f60281,
-       0xd204bd00,
-       0x80000000,
-       0x800225f0,
-       0xf6028800,
-       0x04bd0002,
-       0x00421001,
-       0x0223f002,
-       0xf80512fa,
-       0xf094bd03,
+       0x83008004,
+       0x0002f602,
+       0x070f04bd,
+       0x00071d7e,
+       0x02c00080,
+       0xbd0002f6,
+       0x000bfe04,
+       0xb61f2af0,
+       0x20b60424,
+       0xf094bd02,
        0x00800899,
-       0x09f60217,
-       0x9804bd00,
-       0x14b68101,
-       0x80029818,
-       0xfd0825b6,
-       0x01b50512,
-       0xf094bd16,
-       0x00800999,
        0x09f60237,
        0x8004bd00,
        0xf6028100,
-       0x04bd0001,
-       0x00800102,
-       0x02f60288,
-       0x4104bd00,
-       0x13f00100,
-       0x0501fa06,
+       0x04bd0002,
+       0x000000d2,
+       0x0225f080,
+       0x02880080,
+       0xbd0002f6,
+       0x42100104,
+       0x23f00200,
+       0x0512fa02,
        0x94bd03f8,
-       0x800999f0,
+       0x800899f0,
        0xf6021700,
        0x04bd0009,
-       0x99f094bd,
-       0x17008005,
-       0x0009f602,
-       0x00f804bd,
-/* 0x0820: ctx_chan */
-       0x0007347e,
-       0xb87e0c0a,
-       0x050f0000,
-       0x00071c7e,
-/* 0x0832: ctx_mmio_exec */
-       0x039800f8,
-       0x81008041,
-       0x0003f602,
-       0x34bd04bd,
-/* 0x0840: ctx_mmio_loop */
-       0xf4ff34c4,
-       0x00450e1b,
-       0x0653f002,
-       0xf80535fa,
-/* 0x0851: ctx_mmio_pull */
-       0x804e9803,
-       0x7e814f98,
-       0xb600008f,
-       0x12b60830,
-       0xdf1bf401,
-/* 0x0864: ctx_mmio_done */
-       0x80160398,
-       0xf6028100,
-       0x04bd0003,
-       0x414000b5,
-       0x13f00100,
-       0x0601fa06,
-       0x00f803f8,
-/* 0x0880: ctx_xfer */
-       0x0080040e,
-       0x0ef60302,
-/* 0x088b: ctx_xfer_idle */
-       0x8e04bd00,
-       0xcf030000,
-       0xe4f100ee,
-       0x1bf42000,
-       0x0611f4f5,
-/* 0x089f: ctx_xfer_pre */
-       0x0f0c02f4,
-       0x06fd7e10,
-       0x1b11f400,
-/* 0x08a8: ctx_xfer_pre_load */
-       0xac7e020f,
-       0xbb7e0006,
-       0xcd7e0006,
-       0xf4bd0006,
-       0x0006ac7e,
-       0x0007347e,
-/* 0x08c0: ctx_xfer_exec */
-       0xbd160198,
-       0x05008024,
-       0x0002f601,
-       0x1fb204bd,
-       0x41a5008e,
-       0x00008f7e,
-       0xf001fcf0,
-       0x24b6022c,
-       0x05f2fd01,
-       0x048effb2,
-       0x8f7e41a5,
-       0x167e0000,
-       0x24bd0002,
-       0x0247fc80,
-       0xbd0002f6,
-       0x012cf004,
-       0x800320b6,
-       0xf6024afc,
+       0xb6810198,
+       0x02981814,
+       0x0825b680,
+       0xb50512fd,
+       0x94bd1601,
+       0x800999f0,
+       0xf6023700,
+       0x04bd0009,
+       0x02810080,
+       0xbd0001f6,
+       0x80010204,
+       0xf6028800,
        0x04bd0002,
-       0xf001acf0,
-       0x000b06a5,
-       0x98000c98,
-       0x000e010d,
-       0x00013d7e,
-       0xec7e080a,
-       0x0a7e0000,
-       0x01f40002,
-       0x7e0c0a12,
+       0xf0010041,
+       0x01fa0613,
+       0xbd03f805,
+       0x0999f094,
+       0x02170080,
+       0xbd0009f6,
+       0xf094bd04,
+       0x00800599,
+       0x09f60217,
+       0xf804bd00,
+/* 0x0821: ctx_chan */
+       0x07357e00,
+       0x7e0c0a00,
        0x0f0000b8,
-       0x071c7e05,
-       0x2d02f400,
-/* 0x093c: ctx_xfer_post */
-       0xac7e020f,
-       0xf4bd0006,
-       0x0006fd7e,
-       0x0002277e,
-       0x0006bb7e,
-       0xac7ef4bd,
+       0x071d7e05,
+/* 0x0833: ctx_mmio_exec */
+       0x9800f800,
+       0x00804103,
+       0x03f60281,
+       0xbd04bd00,
+/* 0x0841: ctx_mmio_loop */
+       0xff34c434,
+       0x450e1bf4,
+       0x53f00200,
+       0x0535fa06,
+/* 0x0852: ctx_mmio_pull */
+       0x4e9803f8,
+       0x814f9880,
+       0x00008f7e,
+       0xb60830b6,
+       0x1bf40112,
+/* 0x0865: ctx_mmio_done */
+       0x160398df,
+       0x02810080,
+       0xbd0003f6,
+       0x4000b504,
+       0xf0010041,
+       0x01fa0613,
+       0xf803f806,
+/* 0x0881: ctx_xfer */
+       0x80040e00,
+       0xf6030200,
+       0x04bd000e,
+/* 0x088c: ctx_xfer_idle */
+       0x0300008e,
+       0xf100eecf,
+       0xf42000e4,
+       0x11f4f51b,
+       0x0c02f406,
+/* 0x08a0: ctx_xfer_pre */
+       0xfe7e100f,
        0x11f40006,
-       0x40019810,
-       0xf40511fd,
-       0x327e070b,
-/* 0x0966: ctx_xfer_no_post_mmio */
-/* 0x0966: ctx_xfer_done */
-       0x00f80008,
+/* 0x08a9: ctx_xfer_pre_load */
+       0x7e020f1b,
+       0x7e0006ad,
+       0x7e0006bc,
+       0xbd0006ce,
+       0x06ad7ef4,
+       0x07357e00,
+/* 0x08c1: ctx_xfer_exec */
+       0x16019800,
+       0x008024bd,
+       0x02f60105,
+       0xb204bd00,
+       0xa5008e1f,
+       0x008f7e41,
+       0x01fcf000,
+       0xb6022cf0,
+       0xf2fd0124,
+       0x8effb205,
+       0x7e41a504,
+       0x7e00008f,
+       0xbd000216,
+       0x47fc8024,
+       0x0002f602,
+       0x2cf004bd,
+       0x0320b601,
+       0x024afc80,
+       0xbd0002f6,
+       0x01acf004,
+       0x0b06a5f0,
+       0x000c9800,
+       0x0e010d98,
+       0x013d7e00,
+       0x7e080a00,
+       0x7e0000ec,
+       0xf400020a,
+       0x0c0a1201,
+       0x0000b87e,
+       0x1d7e050f,
+       0x02f40007,
+/* 0x093d: ctx_xfer_post */
+       0x7e020f2d,
+       0xbd0006ad,
+       0x06fe7ef4,
+       0x02277e00,
+       0x06bc7e00,
+       0x7ef4bd00,
+       0xf40006ad,
+       0x01981011,
+       0x0511fd40,
+       0x7e070bf4,
+/* 0x0967: ctx_xfer_no_post_mmio */
+/* 0x0967: ctx_xfer_done */
+       0xf8000833,
        0x00000000,
        0x00000000,
        0x00000000,
index 649a442..449dae7 100644 (file)
@@ -441,7 +441,7 @@ static uint32_t gm107_grhub_code[] = {
        0x020014fe,
        0x12004002,
        0xbd0002f6,
-       0x05c94104,
+       0x05ca4104,
        0xbd0010fe,
        0x07004024,
        0xbd0002f6,
@@ -460,423 +460,423 @@ static uint32_t gm107_grhub_code[] = {
        0x01039204,
        0x03090080,
        0xbd0003f6,
-       0x87044204,
-       0xf6040040,
-       0x04bd0002,
-       0x00400402,
-       0x0002f603,
-       0x31f404bd,
-       0x96048e10,
-       0x00657e40,
-       0xc7feb200,
-       0x01b590f1,
-       0x1ff4f003,
-       0x01020fb5,
-       0x041fbb01,
-       0x800112b6,
-       0xf6010300,
-       0x04bd0001,
-       0x01040080,
+       0x87048204,
+       0x04004000,
+       0xbd0002f6,
+       0x40040204,
+       0x02f60300,
+       0xf404bd00,
+       0x048e1031,
+       0x657e4096,
+       0xfeb20000,
+       0xb590f1c7,
+       0xf4f00301,
+       0x020fb51f,
+       0x1fbb0101,
+       0x0112b604,
+       0x01030080,
        0xbd0001f6,
-       0x01004104,
-       0xac7e020f,
-       0xbb7e0006,
-       0x100f0006,
-       0x0006fd7e,
-       0x98000e98,
-       0x207e010f,
-       0x14950001,
-       0xc0008008,
-       0x0004f601,
-       0x008004bd,
-       0x04f601c1,
-       0xb704bd00,
-       0xbb130030,
-       0xf5b6001f,
-       0xd3008002,
-       0x000ff601,
-       0x15b604bd,
-       0x0110b608,
-       0xb20814b6,
-       0x02687e1f,
-       0x001fbb00,
-       0x84020398,
-/* 0x041f: init_gpc */
-       0xb8502000,
-       0x0008044e,
-       0x8f7e1fb2,
+       0x04008004,
+       0x0001f601,
+       0x004104bd,
+       0x7e020f01,
+       0x7e0006ad,
+       0x0f0006bc,
+       0x06fe7e10,
+       0x000e9800,
+       0x7e010f98,
+       0x95000120,
+       0x00800814,
+       0x04f601c0,
+       0x8004bd00,
+       0xf601c100,
+       0x04bd0004,
+       0x130030b7,
+       0xb6001fbb,
+       0x008002f5,
+       0x0ff601d3,
+       0xb604bd00,
+       0x10b60815,
+       0x0814b601,
+       0x687e1fb2,
+       0x1fbb0002,
+       0x02039800,
+       0x50200084,
+/* 0x0420: init_gpc */
+       0x08044eb8,
+       0x7e1fb200,
+       0xb800008f,
+       0x00010c4e,
+       0x8f7ef4bd,
        0x4eb80000,
-       0xbd00010c,
-       0x008f7ef4,
-       0x044eb800,
-       0x8f7e0001,
+       0x7e000104,
+       0xb800008f,
+       0x0001004e,
+       0x8f7e020f,
        0x4eb80000,
-       0x0f000100,
-       0x008f7e02,
-       0x004eb800,
-/* 0x044e: init_gpc_wait */
+/* 0x044f: init_gpc_wait */
+       0x7e000800,
+       0xc8000065,
+       0x0bf41fff,
+       0x044eb8f9,
        0x657e0008,
-       0xffc80000,
-       0xf90bf41f,
-       0x08044eb8,
-       0x00657e00,
-       0x001fbb00,
-       0x800040b7,
-       0xf40132b6,
-       0x000fb41b,
-       0x0006fd7e,
-       0xac7e000f,
-       0x00800006,
-       0x01f60201,
-       0xbd04bd00,
-       0x1f19f014,
-       0x02300080,
-       0xbd0001f6,
-/* 0x0491: wait */
-       0x0028f404,
-/* 0x0497: main */
-       0x0d0031f4,
-       0x00377e10,
-       0xf401f400,
-       0x4001e4b1,
-       0x00c71bf5,
-       0x99f094bd,
-       0x37008004,
-       0x0009f602,
-       0x008104bd,
-       0x11cf02c0,
-       0xc1008200,
-       0x0022cf02,
-       0xf41f13c8,
-       0x23c8770b,
-       0x550bf41f,
-       0x12b220f9,
-       0x99f094bd,
-       0x37008007,
-       0x0009f602,
-       0x32f404bd,
-       0x0231f401,
-       0x0008807e,
-       0x99f094bd,
-       0x17008007,
-       0x0009f602,
-       0x20fc04bd,
-       0x99f094bd,
-       0x37008006,
-       0x0009f602,
-       0x31f404bd,
-       0x08807e01,
+       0x1fbb0000,
+       0x0040b700,
+       0x0132b680,
+       0x0fb41bf4,
+       0x06fe7e00,
+       0x7e000f00,
+       0x800006ad,
+       0xf6020100,
+       0x04bd0001,
+       0x19f014bd,
+       0x3000801f,
+       0x0001f602,
+/* 0x0492: wait */
+       0x28f404bd,
+       0x0031f400,
+/* 0x0498: main */
+       0x377e100d,
+       0x01f40000,
+       0x01e4b1f4,
+       0xc71bf540,
        0xf094bd00,
-       0x00800699,
+       0x00800499,
+       0x09f60237,
+       0x8104bd00,
+       0xcf02c000,
+       0x00820011,
+       0x22cf02c1,
+       0x1f13c800,
+       0xc8770bf4,
+       0x0bf41f23,
+       0xb220f955,
+       0xf094bd12,
+       0x00800799,
+       0x09f60237,
+       0xf404bd00,
+       0x31f40132,
+       0x08817e02,
+       0xf094bd00,
+       0x00800799,
        0x09f60217,
+       0xfc04bd00,
+       0xf094bd20,
+       0x00800699,
+       0x09f60237,
        0xf404bd00,
-/* 0x0522: chsw_prev_no_next */
-       0x20f92f0e,
-       0x32f412b2,
-       0x0232f401,
-       0x0008807e,
-       0x008020fc,
-       0x02f602c0,
+       0x817e0131,
+       0x94bd0008,
+       0x800699f0,
+       0xf6021700,
+       0x04bd0009,
+/* 0x0523: chsw_prev_no_next */
+       0xf92f0ef4,
+       0xf412b220,
+       0x32f40132,
+       0x08817e02,
+       0x8020fc00,
+       0xf602c000,
+       0x04bd0002,
+/* 0x053f: chsw_no_prev */
+       0xc8130ef4,
+       0x0bf41f23,
+       0x0131f40d,
+       0x7e0232f4,
+/* 0x054f: chsw_done */
+       0x02000881,
+       0xc3008001,
+       0x0002f602,
+       0x94bd04bd,
+       0x800499f0,
+       0xf6021700,
+       0x04bd0009,
+       0xff300ef5,
+/* 0x056c: main_not_ctx_switch */
+       0xf401e4b0,
+       0xf2b20c1b,
+       0x0008217e,
+/* 0x057b: main_not_ctx_chan */
+       0xb0400ef4,
+       0x1bf402e4,
+       0xf094bd2c,
+       0x00800799,
+       0x09f60237,
        0xf404bd00,
-/* 0x053e: chsw_no_prev */
-       0x23c8130e,
-       0x0d0bf41f,
-       0xf40131f4,
-       0x807e0232,
-/* 0x054e: chsw_done */
-       0x01020008,
-       0x02c30080,
-       0xbd0002f6,
-       0xf094bd04,
-       0x00800499,
+       0x32f40132,
+       0x08817e02,
+       0xf094bd00,
+       0x00800799,
        0x09f60217,
-       0xf504bd00,
-/* 0x056b: main_not_ctx_switch */
-       0xb0ff300e,
-       0x1bf401e4,
-       0x7ef2b20c,
-       0xf4000820,
-/* 0x057a: main_not_ctx_chan */
-       0xe4b0400e,
-       0x2c1bf402,
-       0x99f094bd,
-       0x37008007,
-       0x0009f602,
-       0x32f404bd,
-       0x0232f401,
-       0x0008807e,
-       0x99f094bd,
-       0x17008007,
-       0x0009f602,
-       0x0ef404bd,
-/* 0x05a9: main_not_ctx_save */
-       0x10ef9411,
-       0x7e01f5f0,
-       0xf50002f8,
-/* 0x05b7: main_done */
-       0xbdfee40e,
-       0x1f29f024,
-       0x02300080,
-       0xbd0002f6,
-       0xd20ef504,
-/* 0x05c9: ih */
-       0xf900f9fe,
-       0x0188fe80,
-       0x90f980f9,
-       0xb0f9a0f9,
-       0xe0f9d0f9,
-       0x04bdf0f9,
-       0xcf02004a,
-       0xabc400aa,
-       0x230bf404,
-       0x004e100d,
-       0x00eecf1a,
-       0xcf19004f,
-       0x047e00ff,
-       0xb0b70000,
-       0x010e0400,
-       0xf61d0040,
-       0x04bd000e,
-/* 0x060c: ih_no_fifo */
-       0x0100abe4,
-       0x0d0c0bf4,
-       0x40014e10,
-       0x0000047e,
-/* 0x061c: ih_no_ctxsw */
-       0x0400abe4,
-       0x8e560bf4,
-       0x7e400708,
+       0xf404bd00,
+/* 0x05aa: main_not_ctx_save */
+       0xef94110e,
+       0x01f5f010,
+       0x0002f87e,
+       0xfee40ef5,
+/* 0x05b8: main_done */
+       0x29f024bd,
+       0x3000801f,
+       0x0002f602,
+       0x0ef504bd,
+/* 0x05ca: ih */
+       0x00f9fed2,
+       0x88fe80f9,
+       0xf980f901,
+       0xf9a0f990,
+       0xf9d0f9b0,
+       0xbdf0f9e0,
+       0x02004a04,
+       0xc400aacf,
+       0x0bf404ab,
+       0x4e100d23,
+       0xeecf1a00,
+       0x19004f00,
+       0x7e00ffcf,
+       0xb7000004,
+       0x0e0400b0,
+       0x1d004001,
+       0xbd000ef6,
+/* 0x060d: ih_no_fifo */
+       0x00abe404,
+       0x0c0bf401,
+       0x014e100d,
+       0x00047e40,
+/* 0x061d: ih_no_ctxsw */
+       0x00abe400,
+       0x560bf404,
+       0x4007088e,
+       0x0000657e,
+       0x0080ffb2,
+       0x0ff60204,
+       0x8e04bd00,
+       0x7e400704,
        0xb2000065,
-       0x040080ff,
+       0x030080ff,
        0x000ff602,
-       0x048e04bd,
-       0x657e4007,
-       0xffb20000,
-       0x02030080,
-       0xbd000ff6,
-       0x50fec704,
-       0x8f02ee94,
-       0xbb400700,
-       0x657e00ef,
-       0x00800000,
-       0x0ff60202,
+       0xfec704bd,
+       0x02ee9450,
+       0x4007008f,
+       0x7e00efbb,
+       0x80000065,
+       0xf6020200,
+       0x04bd000f,
+       0xf87e030f,
+       0x004b0002,
+       0x8ebfb201,
+       0x7e400144,
+/* 0x0677: ih_no_fwmthd */
+       0x4b00008f,
+       0xb0bd0504,
+       0xf4b4abff,
+       0x00800c0b,
+       0x0bf60307,
+/* 0x068b: ih_no_other */
+       0x4004bd00,
+       0x0af60100,
+       0xfc04bd00,
+       0xfce0fcf0,
+       0xfcb0fcd0,
+       0xfc90fca0,
+       0x0088fe80,
+       0x00fc80fc,
+       0xf80032f4,
+/* 0x06ad: ctx_4170s */
+       0x10f5f001,
+       0x708effb2,
+       0x8f7e4041,
+       0x00f80000,
+/* 0x06bc: ctx_4170w */
+       0x4041708e,
+       0x0000657e,
+       0xf4f0ffb2,
+       0xf31bf410,
+/* 0x06ce: ctx_redswitch */
+       0x004e00f8,
+       0x40e5f002,
+       0xf020e5f0,
+       0x008010e5,
+       0x0ef60185,
        0x0f04bd00,
-       0x02f87e03,
-       0x01004b00,
-       0x448ebfb2,
-       0x8f7e4001,
-/* 0x0676: ih_no_fwmthd */
-       0x044b0000,
-       0xffb0bd05,
-       0x0bf4b4ab,
-       0x0700800c,
-       0x000bf603,
-/* 0x068a: ih_no_other */
-       0x004004bd,
-       0x000af601,
-       0xf0fc04bd,
-       0xd0fce0fc,
-       0xa0fcb0fc,
-       0x80fc90fc,
-       0xfc0088fe,
-       0xf400fc80,
-       0x01f80032,
-/* 0x06ac: ctx_4170s */
-       0xb210f5f0,
-       0x41708eff,
+/* 0x06e5: ctx_redswitch_delay */
+       0x01f2b608,
+       0xf1fd1bf4,
+       0xf10400e5,
+       0x800100e5,
+       0xf6018500,
+       0x04bd000e,
+/* 0x06fe: ctx_86c */
+       0x008000f8,
+       0x0ff60223,
+       0xb204bd00,
+       0x8a148eff,
        0x008f7e40,
-/* 0x06bb: ctx_4170w */
-       0x8e00f800,
-       0x7e404170,
-       0xb2000065,
-       0x10f4f0ff,
-       0xf8f31bf4,
-/* 0x06cd: ctx_redswitch */
-       0x02004e00,
-       0xf040e5f0,
-       0xe5f020e5,
-       0x85008010,
-       0x000ef601,
-       0x080f04bd,
-/* 0x06e4: ctx_redswitch_delay */
-       0xf401f2b6,
-       0xe5f1fd1b,
-       0xe5f10400,
-       0x00800100,
-       0x0ef60185,
-       0xf804bd00,
-/* 0x06fd: ctx_86c */
-       0x23008000,
+       0x8effb200,
+       0x7e41a88c,
+       0xf800008f,
+/* 0x071d: ctx_mem */
+       0x84008000,
        0x000ff602,
-       0xffb204bd,
-       0x408a148e,
-       0x00008f7e,
-       0x8c8effb2,
-       0x8f7e41a8,
-       0x00f80000,
-/* 0x071c: ctx_mem */
-       0x02840080,
-       0xbd000ff6,
-/* 0x0725: ctx_mem_wait */
-       0x84008f04,
-       0x00ffcf02,
-       0xf405fffd,
-       0x00f8f61b,
-/* 0x0734: ctx_load */
-       0x99f094bd,
-       0x37008005,
-       0x0009f602,
-       0x0c0a04bd,
-       0x0000b87e,
-       0x0080f4bd,
-       0x0ff60289,
-       0x8004bd00,
-       0xf602c100,
-       0x04bd0002,
-       0x02830080,
+/* 0x0726: ctx_mem_wait */
+       0x008f04bd,
+       0xffcf0284,
+       0x05fffd00,
+       0xf8f61bf4,
+/* 0x0735: ctx_load */
+       0xf094bd00,
+       0x00800599,
+       0x09f60237,
+       0x0a04bd00,
+       0x00b87e0c,
+       0x80f4bd00,
+       0xf6028900,
+       0x04bd000f,
+       0x02c10080,
        0xbd0002f6,
-       0x7e070f04,
-       0x8000071c,
-       0xf602c000,
-       0x04bd0002,
-       0xf0000bfe,
-       0x24b61f2a,
-       0x0220b604,
-       0x99f094bd,
-       0x37008008,
-       0x0009f602,
-       0x008004bd,
-       0x02f60281,
-       0xd204bd00,
-       0x80000000,
-       0x800225f0,
-       0xf6028800,
-       0x04bd0002,
-       0x00421001,
-       0x0223f002,
-       0xf80512fa,
-       0xf094bd03,
+       0x83008004,
+       0x0002f602,
+       0x070f04bd,
+       0x00071d7e,
+       0x02c00080,
+       0xbd0002f6,
+       0x000bfe04,
+       0xb61f2af0,
+       0x20b60424,
+       0xf094bd02,
        0x00800899,
-       0x09f60217,
-       0x9804bd00,
-       0x14b68101,
-       0x80029818,
-       0xfd0825b6,
-       0x01b50512,
-       0xf094bd16,
-       0x00800999,
        0x09f60237,
        0x8004bd00,
        0xf6028100,
-       0x04bd0001,
-       0x00800102,
-       0x02f60288,
-       0x4104bd00,
-       0x13f00100,
-       0x0501fa06,
+       0x04bd0002,
+       0x000000d2,
+       0x0225f080,
+       0x02880080,
+       0xbd0002f6,
+       0x42100104,
+       0x23f00200,
+       0x0512fa02,
        0x94bd03f8,
-       0x800999f0,
+       0x800899f0,
        0xf6021700,
        0x04bd0009,
-       0x99f094bd,
-       0x17008005,
-       0x0009f602,
-       0x00f804bd,
-/* 0x0820: ctx_chan */
-       0x0007347e,
-       0xb87e0c0a,
-       0x050f0000,
-       0x00071c7e,
-/* 0x0832: ctx_mmio_exec */
-       0x039800f8,
-       0x81008041,
-       0x0003f602,
-       0x34bd04bd,
-/* 0x0840: ctx_mmio_loop */
-       0xf4ff34c4,
-       0x00450e1b,
-       0x0653f002,
-       0xf80535fa,
-/* 0x0851: ctx_mmio_pull */
-       0x804e9803,
-       0x7e814f98,
-       0xb600008f,
-       0x12b60830,
-       0xdf1bf401,
-/* 0x0864: ctx_mmio_done */
-       0x80160398,
-       0xf6028100,
-       0x04bd0003,
-       0x414000b5,
-       0x13f00100,
-       0x0601fa06,
-       0x00f803f8,
-/* 0x0880: ctx_xfer */
-       0x0080040e,
-       0x0ef60302,
-/* 0x088b: ctx_xfer_idle */
-       0x8e04bd00,
-       0xcf030000,
-       0xe4f100ee,
-       0x1bf42000,
-       0x0611f4f5,
-/* 0x089f: ctx_xfer_pre */
-       0x0f0c02f4,
-       0x06fd7e10,
-       0x1b11f400,
-/* 0x08a8: ctx_xfer_pre_load */
-       0xac7e020f,
-       0xbb7e0006,
-       0xcd7e0006,
-       0xf4bd0006,
-       0x0006ac7e,
-       0x0007347e,
-/* 0x08c0: ctx_xfer_exec */
-       0xbd160198,
-       0x05008024,
-       0x0002f601,
-       0x1fb204bd,
-       0x41a5008e,
-       0x00008f7e,
-       0xf001fcf0,
-       0x24b6022c,
-       0x05f2fd01,
-       0x048effb2,
-       0x8f7e41a5,
-       0x167e0000,
-       0x24bd0002,
-       0x0247fc80,
-       0xbd0002f6,
-       0x012cf004,
-       0x800320b6,
-       0xf6024afc,
+       0xb6810198,
+       0x02981814,
+       0x0825b680,
+       0xb50512fd,
+       0x94bd1601,
+       0x800999f0,
+       0xf6023700,
+       0x04bd0009,
+       0x02810080,
+       0xbd0001f6,
+       0x80010204,
+       0xf6028800,
        0x04bd0002,
-       0xf001acf0,
-       0x000b06a5,
-       0x98000c98,
-       0x000e010d,
-       0x00013d7e,
-       0xec7e080a,
-       0x0a7e0000,
-       0x01f40002,
-       0x7e0c0a12,
+       0xf0010041,
+       0x01fa0613,
+       0xbd03f805,
+       0x0999f094,
+       0x02170080,
+       0xbd0009f6,
+       0xf094bd04,
+       0x00800599,
+       0x09f60217,
+       0xf804bd00,
+/* 0x0821: ctx_chan */
+       0x07357e00,
+       0x7e0c0a00,
        0x0f0000b8,
-       0x071c7e05,
-       0x2d02f400,
-/* 0x093c: ctx_xfer_post */
-       0xac7e020f,
-       0xf4bd0006,
-       0x0006fd7e,
-       0x0002277e,
-       0x0006bb7e,
-       0xac7ef4bd,
+       0x071d7e05,
+/* 0x0833: ctx_mmio_exec */
+       0x9800f800,
+       0x00804103,
+       0x03f60281,
+       0xbd04bd00,
+/* 0x0841: ctx_mmio_loop */
+       0xff34c434,
+       0x450e1bf4,
+       0x53f00200,
+       0x0535fa06,
+/* 0x0852: ctx_mmio_pull */
+       0x4e9803f8,
+       0x814f9880,
+       0x00008f7e,
+       0xb60830b6,
+       0x1bf40112,
+/* 0x0865: ctx_mmio_done */
+       0x160398df,
+       0x02810080,
+       0xbd0003f6,
+       0x4000b504,
+       0xf0010041,
+       0x01fa0613,
+       0xf803f806,
+/* 0x0881: ctx_xfer */
+       0x80040e00,
+       0xf6030200,
+       0x04bd000e,
+/* 0x088c: ctx_xfer_idle */
+       0x0300008e,
+       0xf100eecf,
+       0xf42000e4,
+       0x11f4f51b,
+       0x0c02f406,
+/* 0x08a0: ctx_xfer_pre */
+       0xfe7e100f,
        0x11f40006,
-       0x40019810,
-       0xf40511fd,
-       0x327e070b,
-/* 0x0966: ctx_xfer_no_post_mmio */
-/* 0x0966: ctx_xfer_done */
-       0x00f80008,
+/* 0x08a9: ctx_xfer_pre_load */
+       0x7e020f1b,
+       0x7e0006ad,
+       0x7e0006bc,
+       0xbd0006ce,
+       0x06ad7ef4,
+       0x07357e00,
+/* 0x08c1: ctx_xfer_exec */
+       0x16019800,
+       0x008024bd,
+       0x02f60105,
+       0xb204bd00,
+       0xa5008e1f,
+       0x008f7e41,
+       0x01fcf000,
+       0xb6022cf0,
+       0xf2fd0124,
+       0x8effb205,
+       0x7e41a504,
+       0x7e00008f,
+       0xbd000216,
+       0x47fc8024,
+       0x0002f602,
+       0x2cf004bd,
+       0x0320b601,
+       0x024afc80,
+       0xbd0002f6,
+       0x01acf004,
+       0x0b06a5f0,
+       0x000c9800,
+       0x0e010d98,
+       0x013d7e00,
+       0x7e080a00,
+       0x7e0000ec,
+       0xf400020a,
+       0x0c0a1201,
+       0x0000b87e,
+       0x1d7e050f,
+       0x02f40007,
+/* 0x093d: ctx_xfer_post */
+       0x7e020f2d,
+       0xbd0006ad,
+       0x06fe7ef4,
+       0x02277e00,
+       0x06bc7e00,
+       0x7ef4bd00,
+       0xf40006ad,
+       0x01981011,
+       0x0511fd40,
+       0x7e070bf4,
+/* 0x0967: ctx_xfer_no_post_mmio */
+/* 0x0967: ctx_xfer_done */
+       0xf8000833,
        0x00000000,
        0x00000000,
        0x00000000,
index c578deb..dd8f85b 100644 (file)
@@ -26,9 +26,9 @@
 #include "fuc/os.h"
 
 #include <core/client.h>
-#include <core/option.h>
 #include <core/firmware.h>
-#include <subdev/secboot.h>
+#include <core/option.h>
+#include <subdev/acr.h>
 #include <subdev/fb.h>
 #include <subdev/mc.h>
 #include <subdev/pmu.h>
@@ -1636,7 +1636,7 @@ gf100_gr_intr(struct nvkm_gr *base)
 
 static void
 gf100_gr_init_fw(struct nvkm_falcon *falcon,
-                struct gf100_gr_fuc *code, struct gf100_gr_fuc *data)
+                struct nvkm_blob *code, struct nvkm_blob *data)
 {
        nvkm_falcon_load_dmem(falcon, data->data, 0x0, data->size, 0);
        nvkm_falcon_load_imem(falcon, code->data, 0x0, code->size, 0, 0, false);
@@ -1690,26 +1690,30 @@ gf100_gr_init_ctxctl_ext(struct gf100_gr *gr)
 {
        struct nvkm_subdev *subdev = &gr->base.engine.subdev;
        struct nvkm_device *device = subdev->device;
-       struct nvkm_secboot *sb = device->secboot;
-       u32 secboot_mask = 0;
+       u32 lsf_mask = 0;
        int ret;
 
        /* load fuc microcode */
        nvkm_mc_unk260(device, 0);
 
        /* securely-managed falcons must be reset using secure boot */
-       if (nvkm_secboot_is_managed(sb, NVKM_SECBOOT_FALCON_FECS))
-               secboot_mask |= BIT(NVKM_SECBOOT_FALCON_FECS);
-       else
-               gf100_gr_init_fw(gr->fecs.falcon, &gr->fuc409c, &gr->fuc409d);
 
-       if (nvkm_secboot_is_managed(sb, NVKM_SECBOOT_FALCON_GPCCS))
-               secboot_mask |= BIT(NVKM_SECBOOT_FALCON_GPCCS);
-       else
-               gf100_gr_init_fw(gr->gpccs.falcon, &gr->fuc41ac, &gr->fuc41ad);
+       if (!nvkm_acr_managed_falcon(device, NVKM_ACR_LSF_FECS)) {
+               gf100_gr_init_fw(&gr->fecs.falcon, &gr->fecs.inst,
+                                                  &gr->fecs.data);
+       } else {
+               lsf_mask |= BIT(NVKM_ACR_LSF_FECS);
+       }
 
-       if (secboot_mask != 0) {
-               int ret = nvkm_secboot_reset(sb, secboot_mask);
+       if (!nvkm_acr_managed_falcon(device, NVKM_ACR_LSF_GPCCS)) {
+               gf100_gr_init_fw(&gr->gpccs.falcon, &gr->gpccs.inst,
+                                                   &gr->gpccs.data);
+       } else {
+               lsf_mask |= BIT(NVKM_ACR_LSF_GPCCS);
+       }
+
+       if (lsf_mask) {
+               ret = nvkm_acr_bootstrap_falcons(device, lsf_mask);
                if (ret)
                        return ret;
        }
@@ -1721,8 +1725,8 @@ gf100_gr_init_ctxctl_ext(struct gf100_gr *gr)
        nvkm_wr32(device, 0x41a10c, 0x00000000);
        nvkm_wr32(device, 0x40910c, 0x00000000);
 
-       nvkm_falcon_start(gr->gpccs.falcon);
-       nvkm_falcon_start(gr->fecs.falcon);
+       nvkm_falcon_start(&gr->gpccs.falcon);
+       nvkm_falcon_start(&gr->fecs.falcon);
 
        if (nvkm_msec(device, 2000,
                if (nvkm_rd32(device, 0x409800) & 0x00000001)
@@ -1784,18 +1788,18 @@ gf100_gr_init_ctxctl_int(struct gf100_gr *gr)
 
        /* load HUB microcode */
        nvkm_mc_unk260(device, 0);
-       nvkm_falcon_load_dmem(gr->fecs.falcon,
+       nvkm_falcon_load_dmem(&gr->fecs.falcon,
                              gr->func->fecs.ucode->data.data, 0x0,
                              gr->func->fecs.ucode->data.size, 0);
-       nvkm_falcon_load_imem(gr->fecs.falcon,
+       nvkm_falcon_load_imem(&gr->fecs.falcon,
                              gr->func->fecs.ucode->code.data, 0x0,
                              gr->func->fecs.ucode->code.size, 0, 0, false);
 
        /* load GPC microcode */
-       nvkm_falcon_load_dmem(gr->gpccs.falcon,
+       nvkm_falcon_load_dmem(&gr->gpccs.falcon,
                              gr->func->gpccs.ucode->data.data, 0x0,
                              gr->func->gpccs.ucode->data.size, 0);
-       nvkm_falcon_load_imem(gr->gpccs.falcon,
+       nvkm_falcon_load_imem(&gr->gpccs.falcon,
                              gr->func->gpccs.ucode->code.data, 0x0,
                              gr->func->gpccs.ucode->code.size, 0, 0, false);
        nvkm_mc_unk260(device, 1);
@@ -1941,17 +1945,6 @@ gf100_gr_oneinit(struct nvkm_gr *base)
        struct nvkm_subdev *subdev = &gr->base.engine.subdev;
        struct nvkm_device *device = subdev->device;
        int i, j;
-       int ret;
-
-       ret = nvkm_falcon_v1_new(subdev, "FECS", 0x409000, &gr->fecs.falcon);
-       if (ret)
-               return ret;
-
-       mutex_init(&gr->fecs.mutex);
-
-       ret = nvkm_falcon_v1_new(subdev, "GPCCS", 0x41a000, &gr->gpccs.falcon);
-       if (ret)
-               return ret;
 
        nvkm_pmu_pgob(device->pmu, false);
 
@@ -1992,11 +1985,11 @@ gf100_gr_init_(struct nvkm_gr *base)
 
        nvkm_pmu_pgob(gr->base.engine.subdev.device->pmu, false);
 
-       ret = nvkm_falcon_get(gr->fecs.falcon, subdev);
+       ret = nvkm_falcon_get(&gr->fecs.falcon, subdev);
        if (ret)
                return ret;
 
-       ret = nvkm_falcon_get(gr->gpccs.falcon, subdev);
+       ret = nvkm_falcon_get(&gr->gpccs.falcon, subdev);
        if (ret)
                return ret;
 
@@ -2004,49 +1997,34 @@ gf100_gr_init_(struct nvkm_gr *base)
 }
 
 static int
-gf100_gr_fini_(struct nvkm_gr *base, bool suspend)
+gf100_gr_fini(struct nvkm_gr *base, bool suspend)
 {
        struct gf100_gr *gr = gf100_gr(base);
        struct nvkm_subdev *subdev = &gr->base.engine.subdev;
-       nvkm_falcon_put(gr->gpccs.falcon, subdev);
-       nvkm_falcon_put(gr->fecs.falcon, subdev);
+       nvkm_falcon_put(&gr->gpccs.falcon, subdev);
+       nvkm_falcon_put(&gr->fecs.falcon, subdev);
        return 0;
 }
 
-void
-gf100_gr_dtor_fw(struct gf100_gr_fuc *fuc)
-{
-       kfree(fuc->data);
-       fuc->data = NULL;
-}
-
-static void
-gf100_gr_dtor_init(struct gf100_gr_pack *pack)
-{
-       vfree(pack);
-}
-
 void *
 gf100_gr_dtor(struct nvkm_gr *base)
 {
        struct gf100_gr *gr = gf100_gr(base);
 
-       if (gr->func->dtor)
-               gr->func->dtor(gr);
        kfree(gr->data);
 
-       nvkm_falcon_del(&gr->gpccs.falcon);
-       nvkm_falcon_del(&gr->fecs.falcon);
+       nvkm_falcon_dtor(&gr->gpccs.falcon);
+       nvkm_falcon_dtor(&gr->fecs.falcon);
 
-       gf100_gr_dtor_fw(&gr->fuc409c);
-       gf100_gr_dtor_fw(&gr->fuc409d);
-       gf100_gr_dtor_fw(&gr->fuc41ac);
-       gf100_gr_dtor_fw(&gr->fuc41ad);
+       nvkm_blob_dtor(&gr->fecs.inst);
+       nvkm_blob_dtor(&gr->fecs.data);
+       nvkm_blob_dtor(&gr->gpccs.inst);
+       nvkm_blob_dtor(&gr->gpccs.data);
 
-       gf100_gr_dtor_init(gr->fuc_bundle);
-       gf100_gr_dtor_init(gr->fuc_method);
-       gf100_gr_dtor_init(gr->fuc_sw_ctx);
-       gf100_gr_dtor_init(gr->fuc_sw_nonctx);
+       vfree(gr->bundle);
+       vfree(gr->method);
+       vfree(gr->sw_ctx);
+       vfree(gr->sw_nonctx);
 
        return gr;
 }
@@ -2056,7 +2034,7 @@ gf100_gr_ = {
        .dtor = gf100_gr_dtor,
        .oneinit = gf100_gr_oneinit,
        .init = gf100_gr_init_,
-       .fini = gf100_gr_fini_,
+       .fini = gf100_gr_fini,
        .intr = gf100_gr_intr,
        .units = gf100_gr_units,
        .chan_new = gf100_gr_chan_new,
@@ -2067,87 +2045,24 @@ gf100_gr_ = {
        .ctxsw.inst = gf100_gr_ctxsw_inst,
 };
 
-int
-gf100_gr_ctor_fw_legacy(struct gf100_gr *gr, const char *fwname,
-                       struct gf100_gr_fuc *fuc, int ret)
-{
-       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
-       struct nvkm_device *device = subdev->device;
-       const struct firmware *fw;
-       char f[32];
-
-       /* see if this firmware has a legacy path */
-       if (!strcmp(fwname, "fecs_inst"))
-               fwname = "fuc409c";
-       else if (!strcmp(fwname, "fecs_data"))
-               fwname = "fuc409d";
-       else if (!strcmp(fwname, "gpccs_inst"))
-               fwname = "fuc41ac";
-       else if (!strcmp(fwname, "gpccs_data"))
-               fwname = "fuc41ad";
-       else {
-               /* nope, let's just return the error we got */
-               nvkm_error(subdev, "failed to load %s\n", fwname);
-               return ret;
-       }
-
-       /* yes, try to load from the legacy path */
-       nvkm_debug(subdev, "%s: falling back to legacy path\n", fwname);
-
-       snprintf(f, sizeof(f), "nouveau/nv%02x_%s", device->chipset, fwname);
-       ret = request_firmware(&fw, f, device->dev);
-       if (ret) {
-               snprintf(f, sizeof(f), "nouveau/%s", fwname);
-               ret = request_firmware(&fw, f, device->dev);
-               if (ret) {
-                       nvkm_error(subdev, "failed to load %s\n", fwname);
-                       return ret;
-               }
-       }
-
-       fuc->size = fw->size;
-       fuc->data = kmemdup(fw->data, fuc->size, GFP_KERNEL);
-       release_firmware(fw);
-       return (fuc->data != NULL) ? 0 : -ENOMEM;
-}
-
-int
-gf100_gr_ctor_fw(struct gf100_gr *gr, const char *fwname,
-                struct gf100_gr_fuc *fuc)
-{
-       const struct firmware *fw;
-       int ret;
-
-       ret = nvkm_firmware_get(&gr->base.engine.subdev, fwname, &fw);
-       if (ret) {
-               ret = gf100_gr_ctor_fw_legacy(gr, fwname, fuc, ret);
-               if (ret)
-                       return -ENODEV;
-               return 0;
-       }
-
-       fuc->size = fw->size;
-       fuc->data = kmemdup(fw->data, fuc->size, GFP_KERNEL);
-       nvkm_firmware_put(fw);
-       return (fuc->data != NULL) ? 0 : -ENOMEM;
-}
-
-int
-gf100_gr_ctor(const struct gf100_gr_func *func, struct nvkm_device *device,
-             int index, struct gf100_gr *gr)
-{
-       gr->func = func;
-       gr->firmware = nvkm_boolopt(device->cfgopt, "NvGrUseFW",
-                                   func->fecs.ucode == NULL);
-
-       return nvkm_gr_ctor(&gf100_gr_, device, index,
-                           gr->firmware || func->fecs.ucode != NULL,
-                           &gr->base);
-}
+static const struct nvkm_falcon_func
+gf100_gr_flcn = {
+       .fbif = 0x600,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .bind_context = nvkm_falcon_v1_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = nvkm_falcon_v1_enable,
+       .disable = nvkm_falcon_v1_disable,
+};
 
 int
-gf100_gr_new_(const struct gf100_gr_func *func, struct nvkm_device *device,
-             int index, struct nvkm_gr **pgr)
+gf100_gr_new_(const struct gf100_gr_fwif *fwif,
+             struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
        struct gf100_gr *gr;
        int ret;
@@ -2156,22 +2071,49 @@ gf100_gr_new_(const struct gf100_gr_func *func, struct nvkm_device *device,
                return -ENOMEM;
        *pgr = &gr->base;
 
-       ret = gf100_gr_ctor(func, device, index, gr);
+       ret = nvkm_gr_ctor(&gf100_gr_, device, index, true, &gr->base);
        if (ret)
                return ret;
 
-       if (gr->firmware) {
-               if (gf100_gr_ctor_fw(gr, "fecs_inst", &gr->fuc409c) ||
-                   gf100_gr_ctor_fw(gr, "fecs_data", &gr->fuc409d) ||
-                   gf100_gr_ctor_fw(gr, "gpccs_inst", &gr->fuc41ac) ||
-                   gf100_gr_ctor_fw(gr, "gpccs_data", &gr->fuc41ad))
-                       return -ENODEV;
-       }
+       fwif = nvkm_firmware_load(&gr->base.engine.subdev, fwif, "Gr", gr);
+       if (IS_ERR(fwif))
+               return -ENODEV;
+
+       gr->func = fwif->func;
+
+       ret = nvkm_falcon_ctor(&gf100_gr_flcn, &gr->base.engine.subdev,
+                              "fecs", 0x409000, &gr->fecs.falcon);
+       if (ret)
+               return ret;
+
+       mutex_init(&gr->fecs.mutex);
+
+       ret = nvkm_falcon_ctor(&gf100_gr_flcn, &gr->base.engine.subdev,
+                              "gpccs", 0x41a000, &gr->gpccs.falcon);
+       if (ret)
+               return ret;
 
        return 0;
 }
 
 void
+gf100_gr_init_num_tpc_per_gpc(struct gf100_gr *gr, bool pd, bool ds)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+       int gpc, i, j;
+       u32 data;
+
+       for (gpc = 0, i = 0; i < 4; i++) {
+               for (data = 0, j = 0; j < 8 && gpc < gr->gpc_nr; j++, gpc++)
+                       data |= gr->tpc_nr[gpc] << (j * 4);
+               if (pd)
+                       nvkm_wr32(device, 0x406028 + (i * 4), data);
+               if (ds)
+                       nvkm_wr32(device, 0x405870 + (i * 4), data);
+       }
+}
+
+void
 gf100_gr_init_400054(struct gf100_gr *gr)
 {
        nvkm_wr32(gr->base.engine.subdev.device, 0x400054, 0x34ce3464);
@@ -2295,8 +2237,8 @@ gf100_gr_init(struct gf100_gr *gr)
 
        gr->func->init_gpc_mmu(gr);
 
-       if (gr->fuc_sw_nonctx)
-               gf100_gr_mmio(gr, gr->fuc_sw_nonctx);
+       if (gr->sw_nonctx)
+               gf100_gr_mmio(gr, gr->sw_nonctx);
        else
                gf100_gr_mmio(gr, gr->func->mmio);
 
@@ -2320,6 +2262,8 @@ gf100_gr_init(struct gf100_gr *gr)
                gr->func->init_bios_2(gr);
        if (gr->func->init_swdx_pes_mask)
                gr->func->init_swdx_pes_mask(gr);
+       if (gr->func->init_fs)
+               gr->func->init_fs(gr);
 
        nvkm_wr32(device, 0x400500, 0x00010001);
 
@@ -2338,8 +2282,8 @@ gf100_gr_init(struct gf100_gr *gr)
        if (gr->func->init_40601c)
                gr->func->init_40601c(gr);
 
-       nvkm_wr32(device, 0x404490, 0xc0000000);
        nvkm_wr32(device, 0x406018, 0xc0000000);
+       nvkm_wr32(device, 0x404490, 0xc0000000);
 
        if (gr->func->init_sked_hww_esr)
                gr->func->init_sked_hww_esr(gr);
@@ -2454,7 +2398,66 @@ gf100_gr = {
 };
 
 int
+gf100_gr_nofw(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+{
+       gr->firmware = false;
+       return 0;
+}
+
+static int
+gf100_gr_load_fw(struct gf100_gr *gr, const char *name,
+                struct nvkm_blob *blob)
+{
+       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
+       struct nvkm_device *device = subdev->device;
+       const struct firmware *fw;
+       char f[32];
+       int ret;
+
+       snprintf(f, sizeof(f), "nouveau/nv%02x_%s", device->chipset, name);
+       ret = request_firmware(&fw, f, device->dev);
+       if (ret) {
+               snprintf(f, sizeof(f), "nouveau/%s", name);
+               ret = request_firmware(&fw, f, device->dev);
+               if (ret) {
+                       nvkm_error(subdev, "failed to load %s\n", name);
+                       return ret;
+               }
+       }
+
+       blob->size = fw->size;
+       blob->data = kmemdup(fw->data, blob->size, GFP_KERNEL);
+       release_firmware(fw);
+       return (blob->data != NULL) ? 0 : -ENOMEM;
+}
+
+int
+gf100_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+
+       if (!nvkm_boolopt(device->cfgopt, "NvGrUseFW", false))
+               return -EINVAL;
+
+       if (gf100_gr_load_fw(gr, "fuc409c", &gr->fecs.inst) ||
+           gf100_gr_load_fw(gr, "fuc409d", &gr->fecs.data) ||
+           gf100_gr_load_fw(gr, "fuc41ac", &gr->gpccs.inst) ||
+           gf100_gr_load_fw(gr, "fuc41ad", &gr->gpccs.data))
+               return -ENOENT;
+
+       gr->firmware = true;
+       return 0;
+}
+
+static const struct gf100_gr_fwif
+gf100_gr_fwif[] = {
+       { -1, gf100_gr_load, &gf100_gr },
+       { -1, gf100_gr_nofw, &gf100_gr },
+       {}
+};
+
+int
 gf100_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gf100_gr, device, index, pgr);
+       return gf100_gr_new_(gf100_gr_fwif, device, index, pgr);
 }
index fafdd0b..4c67b25 100644
@@ -31,6 +31,8 @@
 #include <subdev/mmu.h>
 #include <engine/falcon.h>
 
+struct nvkm_acr_lsfw;
+
 #define GPC_MAX 32
 #define TPC_MAX_PER_GPC 8
 #define TPC_MAX (GPC_MAX * TPC_MAX_PER_GPC)
@@ -55,11 +57,6 @@ struct gf100_gr_mmio {
        int buffer;
 };
 
-struct gf100_gr_fuc {
-       u32 *data;
-       u32  size;
-};
-
 struct gf100_gr_zbc_color {
        u32 format;
        u32 ds[4];
@@ -83,29 +80,30 @@ struct gf100_gr {
        struct nvkm_gr base;
 
        struct {
-               struct nvkm_falcon *falcon;
+               struct nvkm_falcon falcon;
+               struct nvkm_blob inst;
+               struct nvkm_blob data;
+
                struct mutex mutex;
                u32 disable;
        } fecs;
 
        struct {
-               struct nvkm_falcon *falcon;
+               struct nvkm_falcon falcon;
+               struct nvkm_blob inst;
+               struct nvkm_blob data;
        } gpccs;
 
-       struct gf100_gr_fuc fuc409c;
-       struct gf100_gr_fuc fuc409d;
-       struct gf100_gr_fuc fuc41ac;
-       struct gf100_gr_fuc fuc41ad;
        bool firmware;
 
        /*
         * Used if the register packs are loaded from NVIDIA fw instead of
         * using hardcoded arrays. To be allocated with vzalloc().
         */
-       struct gf100_gr_pack *fuc_sw_nonctx;
-       struct gf100_gr_pack *fuc_sw_ctx;
-       struct gf100_gr_pack *fuc_bundle;
-       struct gf100_gr_pack *fuc_method;
+       struct gf100_gr_pack *sw_nonctx;
+       struct gf100_gr_pack *sw_ctx;
+       struct gf100_gr_pack *bundle;
+       struct gf100_gr_pack *method;
 
        struct gf100_gr_zbc_color zbc_color[NVKM_LTC_MAX_ZBC_CNT];
        struct gf100_gr_zbc_depth zbc_depth[NVKM_LTC_MAX_ZBC_CNT];
@@ -140,12 +138,6 @@ struct gf100_gr {
        u32 size_pm;
 };
 
-int gf100_gr_ctor(const struct gf100_gr_func *, struct nvkm_device *,
-                 int, struct gf100_gr *);
-int gf100_gr_new_(const struct gf100_gr_func *, struct nvkm_device *,
-                 int, struct nvkm_gr **);
-void *gf100_gr_dtor(struct nvkm_gr *);
-
 int gf100_gr_fecs_bind_pointer(struct gf100_gr *, u32 inst);
 
 struct gf100_gr_func_zbc {
@@ -157,7 +149,6 @@ struct gf100_gr_func_zbc {
 };
 
 struct gf100_gr_func {
-       void (*dtor)(struct gf100_gr *);
        void (*oneinit_tiles)(struct gf100_gr *);
        void (*oneinit_sm_id)(struct gf100_gr *);
        int (*init)(struct gf100_gr *);
@@ -171,6 +162,7 @@ struct gf100_gr_func {
        void (*init_rop_active_fbps)(struct gf100_gr *);
        void (*init_bios_2)(struct gf100_gr *);
        void (*init_swdx_pes_mask)(struct gf100_gr *);
+       void (*init_fs)(struct gf100_gr *);
        void (*init_fecs_exceptions)(struct gf100_gr *);
        void (*init_ds_hww_esr_2)(struct gf100_gr *);
        void (*init_40601c)(struct gf100_gr *);
@@ -217,6 +209,7 @@ void gf100_gr_init_419eb4(struct gf100_gr *);
 void gf100_gr_init_tex_hww_esr(struct gf100_gr *, int, int);
 void gf100_gr_init_shader_exceptions(struct gf100_gr *, int, int);
 void gf100_gr_init_400054(struct gf100_gr *);
+void gf100_gr_init_num_tpc_per_gpc(struct gf100_gr *, bool, bool);
 extern const struct gf100_gr_func_zbc gf100_gr_zbc;
 
 void gf117_gr_init_zcull(struct gf100_gr *);
@@ -249,6 +242,13 @@ void gp100_gr_zbc_clear_depth(struct gf100_gr *, int);
 void gp102_gr_init_swdx_pes_mask(struct gf100_gr *);
 extern const struct gf100_gr_func_zbc gp102_gr_zbc;
 
+extern const struct gf100_gr_func gp107_gr;
+
+void gv100_gr_init_419bd8(struct gf100_gr *);
+void gv100_gr_init_504430(struct gf100_gr *, int, int);
+void gv100_gr_init_shader_exceptions(struct gf100_gr *, int, int);
+void gv100_gr_trap_mp(struct gf100_gr *, int, int);
+
 #define gf100_gr_chan(p) container_of((p), struct gf100_gr_chan, object)
 #include <core/object.h>
 
@@ -269,9 +269,6 @@ struct gf100_gr_chan {
 
 void gf100_gr_ctxctl_debug(struct gf100_gr *);
 
-void gf100_gr_dtor_fw(struct gf100_gr_fuc *);
-int  gf100_gr_ctor_fw(struct gf100_gr *, const char *,
-                     struct gf100_gr_fuc *);
 u64  gf100_gr_units(struct nvkm_gr *);
 void gf100_gr_zbc_init(struct gf100_gr *);
 
@@ -294,8 +291,8 @@ struct gf100_gr_pack {
                  for (init = pack->init; init && init->count; init++)
 
 struct gf100_gr_ucode {
-       struct gf100_gr_fuc code;
-       struct gf100_gr_fuc data;
+       struct nvkm_blob code;
+       struct nvkm_blob data;
 };
 
 extern struct gf100_gr_ucode gf100_gr_fecs_ucode;
@@ -310,17 +307,6 @@ void gf100_gr_icmd(struct gf100_gr *, const struct gf100_gr_pack *);
 void gf100_gr_mthd(struct gf100_gr *, const struct gf100_gr_pack *);
 int  gf100_gr_init_ctxctl(struct gf100_gr *);
 
-/* external bundles loading functions */
-int gk20a_gr_av_to_init(struct gf100_gr *, const char *,
-                       struct gf100_gr_pack **);
-int gk20a_gr_aiv_to_init(struct gf100_gr *, const char *,
-                        struct gf100_gr_pack **);
-int gk20a_gr_av_to_method(struct gf100_gr *, const char *,
-                         struct gf100_gr_pack **);
-
-int gm200_gr_new_(const struct gf100_gr_func *, struct nvkm_device *, int,
-                 struct nvkm_gr **);
-
 /* register init value lists */
 
 extern const struct gf100_gr_init gf100_gr_init_main_0[];
@@ -403,4 +389,31 @@ extern const struct gf100_gr_init gm107_gr_init_cbm_0[];
 void gm107_gr_init_bios(struct gf100_gr *);
 
 void gm200_gr_init_gpc_mmu(struct gf100_gr *);
+
+struct gf100_gr_fwif {
+       int version;
+       int (*load)(struct gf100_gr *, int ver, const struct gf100_gr_fwif *);
+       const struct gf100_gr_func *func;
+       const struct nvkm_acr_lsf_func *fecs;
+       const struct nvkm_acr_lsf_func *gpccs;
+};
+
+int gf100_gr_load(struct gf100_gr *, int, const struct gf100_gr_fwif *);
+int gf100_gr_nofw(struct gf100_gr *, int, const struct gf100_gr_fwif *);
+
+int gk20a_gr_load_sw(struct gf100_gr *, const char *path, int ver);
+
+int gm200_gr_load(struct gf100_gr *, int, const struct gf100_gr_fwif *);
+extern const struct nvkm_acr_lsf_func gm200_gr_gpccs_acr;
+extern const struct nvkm_acr_lsf_func gm200_gr_fecs_acr;
+
+extern const struct nvkm_acr_lsf_func gm20b_gr_fecs_acr;
+void gm20b_gr_acr_bld_write(struct nvkm_acr *, u32, struct nvkm_acr_lsfw *);
+void gm20b_gr_acr_bld_patch(struct nvkm_acr *, u32, s64);
+
+extern const struct nvkm_acr_lsf_func gp108_gr_gpccs_acr;
+extern const struct nvkm_acr_lsf_func gp108_gr_fecs_acr;
+
+int gf100_gr_new_(const struct gf100_gr_fwif *, struct nvkm_device *, int,
+                 struct nvkm_gr **);
 #endif
index 42c2fd9..0536fe8 100644
@@ -144,8 +144,15 @@ gf104_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gf104_gr_fwif[] = {
+       { -1, gf100_gr_load, &gf104_gr },
+       { -1, gf100_gr_nofw, &gf104_gr },
+       {}
+};
+
 int
 gf104_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gf104_gr, device, index, pgr);
+       return gf100_gr_new_(gf104_gr_fwif, device, index, pgr);
 }
index 4731a46..14284b0 100644
@@ -143,8 +143,15 @@ gf108_gr = {
        }
 };
 
+const struct gf100_gr_fwif
+gf108_gr_fwif[] = {
+       { -1, gf100_gr_load, &gf108_gr },
+       { -1, gf100_gr_nofw, &gf108_gr },
+       {}
+};
+
 int
 gf108_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gf108_gr, device, index, pgr);
+       return gf100_gr_new_(gf108_gr_fwif, device, index, pgr);
 }
index cdf759c..2807525 100644
@@ -119,8 +119,15 @@ gf110_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gf110_gr_fwif[] = {
+       { -1, gf100_gr_load, &gf110_gr },
+       { -1, gf100_gr_nofw, &gf110_gr },
+       {}
+};
+
 int
 gf110_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gf110_gr, device, index, pgr);
+       return gf100_gr_new_(gf110_gr_fwif, device, index, pgr);
 }
index a4158f8..235c3fb 100644
@@ -184,8 +184,15 @@ gf117_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gf117_gr_fwif[] = {
+       { -1, gf100_gr_load, &gf117_gr },
+       { -1, gf100_gr_nofw, &gf117_gr },
+       {}
+};
+
 int
 gf117_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gf117_gr, device, index, pgr);
+       return gf100_gr_new_(gf117_gr_fwif, device, index, pgr);
 }
index 4197844..7eac385 100644
@@ -210,8 +210,15 @@ gf119_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gf119_gr_fwif[] = {
+       { -1, gf100_gr_load, &gf119_gr },
+       { -1, gf100_gr_nofw, &gf119_gr },
+       {}
+};
+
 int
 gf119_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gf119_gr, device, index, pgr);
+       return gf100_gr_new_(gf119_gr_fwif, device, index, pgr);
 }
index 477fee3..89f51d7 100644
@@ -489,8 +489,15 @@ gk104_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gk104_gr_fwif[] = {
+       { -1, gf100_gr_load, &gk104_gr },
+       { -1, gf100_gr_nofw, &gk104_gr },
+       {}
+};
+
 int
 gk104_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gk104_gr, device, index, pgr);
+       return gf100_gr_new_(gk104_gr_fwif, device, index, pgr);
 }
index 7cd628c..735f05e 100644
@@ -385,8 +385,15 @@ gk110_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gk110_gr_fwif[] = {
+       { -1, gf100_gr_load, &gk110_gr },
+       { -1, gf100_gr_nofw, &gk110_gr },
+       {}
+};
+
 int
 gk110_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gk110_gr, device, index, pgr);
+       return gf100_gr_new_(gk110_gr_fwif, device, index, pgr);
 }
index a38faa2..adc971b 100644
@@ -136,8 +136,15 @@ gk110b_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gk110b_gr_fwif[] = {
+       { -1, gf100_gr_load, &gk110b_gr },
+       { -1, gf100_gr_nofw, &gk110b_gr },
+       {}
+};
+
 int
 gk110b_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gk110b_gr, device, index, pgr);
+       return gf100_gr_new_(gk110b_gr_fwif, device, index, pgr);
 }
index 5845666..aa0eff6 100644
@@ -194,8 +194,15 @@ gk208_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gk208_gr_fwif[] = {
+       { -1, gf100_gr_load, &gk208_gr },
+       { -1, gf100_gr_nofw, &gk208_gr },
+       {}
+};
+
 int
 gk208_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gk208_gr, device, index, pgr);
+       return gf100_gr_new_(gk208_gr_fwif, device, index, pgr);
 }
index 500cb08..4209b24 100644
@@ -22,6 +22,7 @@
 #include "gf100.h"
 #include "ctxgf100.h"
 
+#include <core/firmware.h>
 #include <subdev/timer.h>
 
 #include <nvif/class.h>
@@ -33,21 +34,22 @@ struct gk20a_fw_av
 };
 
 int
-gk20a_gr_av_to_init(struct gf100_gr *gr, const char *fw_name,
-                   struct gf100_gr_pack **ppack)
+gk20a_gr_av_to_init(struct gf100_gr *gr, const char *path, const char *name,
+                   int ver, struct gf100_gr_pack **ppack)
 {
-       struct gf100_gr_fuc fuc;
+       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
+       struct nvkm_blob blob;
        struct gf100_gr_init *init;
        struct gf100_gr_pack *pack;
        int nent;
        int ret;
        int i;
 
-       ret = gf100_gr_ctor_fw(gr, fw_name, &fuc);
+       ret = nvkm_firmware_load_blob(subdev, path, name, ver, &blob);
        if (ret)
                return ret;
 
-       nent = (fuc.size / sizeof(struct gk20a_fw_av));
+       nent = (blob.size / sizeof(struct gk20a_fw_av));
        pack = vzalloc((sizeof(*pack) * 2) + (sizeof(*init) * (nent + 1)));
        if (!pack) {
                ret = -ENOMEM;
@@ -59,7 +61,7 @@ gk20a_gr_av_to_init(struct gf100_gr *gr, const char *fw_name,
 
        for (i = 0; i < nent; i++) {
                struct gf100_gr_init *ent = &init[i];
-               struct gk20a_fw_av *av = &((struct gk20a_fw_av *)fuc.data)[i];
+               struct gk20a_fw_av *av = &((struct gk20a_fw_av *)blob.data)[i];
 
                ent->addr = av->addr;
                ent->data = av->data;
@@ -70,7 +72,7 @@ gk20a_gr_av_to_init(struct gf100_gr *gr, const char *fw_name,
        *ppack = pack;
 
 end:
-       gf100_gr_dtor_fw(&fuc);
+       nvkm_blob_dtor(&blob);
        return ret;
 }
 
@@ -82,21 +84,22 @@ struct gk20a_fw_aiv
 };
 
 int
-gk20a_gr_aiv_to_init(struct gf100_gr *gr, const char *fw_name,
-                    struct gf100_gr_pack **ppack)
+gk20a_gr_aiv_to_init(struct gf100_gr *gr, const char *path, const char *name,
+                    int ver, struct gf100_gr_pack **ppack)
 {
-       struct gf100_gr_fuc fuc;
+       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
+       struct nvkm_blob blob;
        struct gf100_gr_init *init;
        struct gf100_gr_pack *pack;
        int nent;
        int ret;
        int i;
 
-       ret = gf100_gr_ctor_fw(gr, fw_name, &fuc);
+       ret = nvkm_firmware_load_blob(subdev, path, name, ver, &blob);
        if (ret)
                return ret;
 
-       nent = (fuc.size / sizeof(struct gk20a_fw_aiv));
+       nent = (blob.size / sizeof(struct gk20a_fw_aiv));
        pack = vzalloc((sizeof(*pack) * 2) + (sizeof(*init) * (nent + 1)));
        if (!pack) {
                ret = -ENOMEM;
@@ -108,7 +111,7 @@ gk20a_gr_aiv_to_init(struct gf100_gr *gr, const char *fw_name,
 
        for (i = 0; i < nent; i++) {
                struct gf100_gr_init *ent = &init[i];
-               struct gk20a_fw_aiv *av = &((struct gk20a_fw_aiv *)fuc.data)[i];
+               struct gk20a_fw_aiv *av = &((struct gk20a_fw_aiv *)blob.data)[i];
 
                ent->addr = av->addr;
                ent->data = av->data;
@@ -119,15 +122,16 @@ gk20a_gr_aiv_to_init(struct gf100_gr *gr, const char *fw_name,
        *ppack = pack;
 
 end:
-       gf100_gr_dtor_fw(&fuc);
+       nvkm_blob_dtor(&blob);
        return ret;
 }
 
 int
-gk20a_gr_av_to_method(struct gf100_gr *gr, const char *fw_name,
-                     struct gf100_gr_pack **ppack)
+gk20a_gr_av_to_method(struct gf100_gr *gr, const char *path, const char *name,
+                     int ver, struct gf100_gr_pack **ppack)
 {
-       struct gf100_gr_fuc fuc;
+       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
+       struct nvkm_blob blob;
        struct gf100_gr_init *init;
        struct gf100_gr_pack *pack;
        /* We don't suppose we will initialize more than 16 classes here... */
@@ -137,29 +141,30 @@ gk20a_gr_av_to_method(struct gf100_gr *gr, const char *fw_name,
        int ret;
        int i;
 
-       ret = gf100_gr_ctor_fw(gr, fw_name, &fuc);
+       ret = nvkm_firmware_load_blob(subdev, path, name, ver, &blob);
        if (ret)
                return ret;
 
-       nent = (fuc.size / sizeof(struct gk20a_fw_av));
+       nent = (blob.size / sizeof(struct gk20a_fw_av));
 
-       pack = vzalloc((sizeof(*pack) * max_classes) +
-                      (sizeof(*init) * (nent + 1)));
+       pack = vzalloc((sizeof(*pack) * (max_classes + 1)) +
+                      (sizeof(*init) * (nent + max_classes + 1)));
        if (!pack) {
                ret = -ENOMEM;
                goto end;
        }
 
-       init = (void *)(pack + max_classes);
+       init = (void *)(pack + max_classes + 1);
 
-       for (i = 0; i < nent; i++) {
-               struct gf100_gr_init *ent = &init[i];
-               struct gk20a_fw_av *av = &((struct gk20a_fw_av *)fuc.data)[i];
+       for (i = 0; i < nent; i++, init++) {
+               struct gk20a_fw_av *av = &((struct gk20a_fw_av *)blob.data)[i];
                u32 class = av->addr & 0xffff;
                u32 addr = (av->addr & 0xffff0000) >> 14;
 
                if (prevclass != class) {
-                       pack[classidx].init = ent;
+                       if (prevclass) /* Add terminator to the method list. */
+                               init++;
+                       pack[classidx].init = init;
                        pack[classidx].type = class;
                        prevclass = class;
                        if (++classidx >= max_classes) {
@@ -169,16 +174,16 @@ gk20a_gr_av_to_method(struct gf100_gr *gr, const char *fw_name,
                        }
                }
 
-               ent->addr = addr;
-               ent->data = av->data;
-               ent->count = 1;
-               ent->pitch = 1;
+               init->addr = addr;
+               init->data = av->data;
+               init->count = 1;
+               init->pitch = 1;
        }
 
        *ppack = pack;
 
 end:
-       gf100_gr_dtor_fw(&fuc);
+       nvkm_blob_dtor(&blob);
        return ret;
 }
 
@@ -224,7 +229,7 @@ gk20a_gr_init(struct gf100_gr *gr)
        /* Clear SCC RAM */
        nvkm_wr32(device, 0x40802c, 0x1);
 
-       gf100_gr_mmio(gr, gr->fuc_sw_nonctx);
+       gf100_gr_mmio(gr, gr->sw_nonctx);
 
        ret = gk20a_gr_wait_mem_scrubbing(gr);
        if (ret)
@@ -303,40 +308,45 @@ gk20a_gr = {
 };
 
 int
-gk20a_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
+gk20a_gr_load_sw(struct gf100_gr *gr, const char *path, int ver)
 {
-       struct gf100_gr *gr;
-       int ret;
+       if (gk20a_gr_av_to_init(gr, path, "sw_nonctx", ver, &gr->sw_nonctx) ||
+           gk20a_gr_aiv_to_init(gr, path, "sw_ctx", ver, &gr->sw_ctx) ||
+           gk20a_gr_av_to_init(gr, path, "sw_bundle_init", ver, &gr->bundle) ||
+           gk20a_gr_av_to_method(gr, path, "sw_method_init", ver, &gr->method))
+               return -ENOENT;
 
-       if (!(gr = kzalloc(sizeof(*gr), GFP_KERNEL)))
-               return -ENOMEM;
-       *pgr = &gr->base;
-
-       ret = gf100_gr_ctor(&gk20a_gr, device, index, gr);
-       if (ret)
-               return ret;
+       return 0;
+}
 
-       if (gf100_gr_ctor_fw(gr, "fecs_inst", &gr->fuc409c) ||
-           gf100_gr_ctor_fw(gr, "fecs_data", &gr->fuc409d) ||
-           gf100_gr_ctor_fw(gr, "gpccs_inst", &gr->fuc41ac) ||
-           gf100_gr_ctor_fw(gr, "gpccs_data", &gr->fuc41ad))
-               return -ENODEV;
+static int
+gk20a_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+{
+       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
 
-       ret = gk20a_gr_av_to_init(gr, "sw_nonctx", &gr->fuc_sw_nonctx);
-       if (ret)
-               return ret;
+       if (nvkm_firmware_load_blob(subdev, "", "fecs_inst", ver,
+                                   &gr->fecs.inst) ||
+           nvkm_firmware_load_blob(subdev, "", "fecs_data", ver,
+                                   &gr->fecs.data) ||
+           nvkm_firmware_load_blob(subdev, "", "gpccs_inst", ver,
+                                   &gr->gpccs.inst) ||
+           nvkm_firmware_load_blob(subdev, "", "gpccs_data", ver,
+                                   &gr->gpccs.data))
+               return -ENOENT;
 
-       ret = gk20a_gr_aiv_to_init(gr, "sw_ctx", &gr->fuc_sw_ctx);
-       if (ret)
-               return ret;
+       gr->firmware = true;
 
-       ret = gk20a_gr_av_to_init(gr, "sw_bundle_init", &gr->fuc_bundle);
-       if (ret)
-               return ret;
+       return gk20a_gr_load_sw(gr, "", ver);
+}
 
-       ret = gk20a_gr_av_to_method(gr, "sw_method_init", &gr->fuc_method);
-       if (ret)
-               return ret;
+static const struct gf100_gr_fwif
+gk20a_gr_fwif[] = {
+       { -1, gk20a_gr_load, &gk20a_gr },
+       {}
+};
 
-       return 0;
+int
+gk20a_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
+{
+       return gf100_gr_new_(gk20a_gr_fwif, device, index, pgr);
 }
index 92e31d3..09bb78b 100644
@@ -429,8 +429,15 @@ gm107_gr = {
        }
 };
 
+static const struct gf100_gr_fwif
+gm107_gr_fwif[] = {
+       { -1, gf100_gr_load, &gm107_gr },
+       { -1, gf100_gr_nofw, &gm107_gr },
+       {}
+};
+
 int
 gm107_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gf100_gr_new_(&gm107_gr, device, index, pgr);
+       return gf100_gr_new_(gm107_gr_fwif, device, index, pgr);
 }
index eff3066..3d67cfb 100644
 #include "gf100.h"
 #include "ctxgf100.h"
 
+#include <core/firmware.h>
+#include <subdev/acr.h>
 #include <subdev/secboot.h>
 
+#include <nvfw/flcn.h>
+
 #include <nvif/class.h>
 
 /*******************************************************************************
  * PGRAPH engine/subdev functions
  ******************************************************************************/
 
+static void
+gm200_gr_acr_bld_patch(struct nvkm_acr *acr, u32 bld, s64 adjust)
+{
+       struct flcn_bl_dmem_desc_v1 hdr;
+       nvkm_robj(acr->wpr, bld, &hdr, sizeof(hdr));
+       hdr.code_dma_base = hdr.code_dma_base + adjust;
+       hdr.data_dma_base = hdr.data_dma_base + adjust;
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+       flcn_bl_dmem_desc_v1_dump(&acr->subdev, &hdr);
+}
+
+static void
+gm200_gr_acr_bld_write(struct nvkm_acr *acr, u32 bld,
+                      struct nvkm_acr_lsfw *lsfw)
+{
+       const u64 base = lsfw->offset.img + lsfw->app_start_offset;
+       const u64 code = base + lsfw->app_resident_code_offset;
+       const u64 data = base + lsfw->app_resident_data_offset;
+       const struct flcn_bl_dmem_desc_v1 hdr = {
+               .ctx_dma = FALCON_DMAIDX_UCODE,
+               .code_dma_base = code,
+               .non_sec_code_off = lsfw->app_resident_code_offset,
+               .non_sec_code_size = lsfw->app_resident_code_size,
+               .code_entry_point = lsfw->app_imem_entry,
+               .data_dma_base = data,
+               .data_size = lsfw->app_resident_data_size,
+       };
+
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+}
+
+const struct nvkm_acr_lsf_func
+gm200_gr_gpccs_acr = {
+       .flags = NVKM_ACR_LSF_FORCE_PRIV_LOAD,
+       .bld_size = sizeof(struct flcn_bl_dmem_desc_v1),
+       .bld_write = gm200_gr_acr_bld_write,
+       .bld_patch = gm200_gr_acr_bld_patch,
+};
+
+const struct nvkm_acr_lsf_func
+gm200_gr_fecs_acr = {
+       .bld_size = sizeof(struct flcn_bl_dmem_desc_v1),
+       .bld_write = gm200_gr_acr_bld_write,
+       .bld_patch = gm200_gr_acr_bld_patch,
+};
+
 int
 gm200_gr_rops(struct gf100_gr *gr)
 {
@@ -124,44 +174,6 @@ gm200_gr_oneinit_tiles(struct gf100_gr *gr)
        }
 }
 
-int
-gm200_gr_new_(const struct gf100_gr_func *func, struct nvkm_device *device,
-             int index, struct nvkm_gr **pgr)
-{
-       struct gf100_gr *gr;
-       int ret;
-
-       if (!(gr = kzalloc(sizeof(*gr), GFP_KERNEL)))
-               return -ENOMEM;
-       *pgr = &gr->base;
-
-       ret = gf100_gr_ctor(func, device, index, gr);
-       if (ret)
-               return ret;
-
-       /* Load firmwares for non-secure falcons */
-       if (!nvkm_secboot_is_managed(device->secboot,
-                                    NVKM_SECBOOT_FALCON_FECS)) {
-               if ((ret = gf100_gr_ctor_fw(gr, "gr/fecs_inst", &gr->fuc409c)) ||
-                   (ret = gf100_gr_ctor_fw(gr, "gr/fecs_data", &gr->fuc409d)))
-                       return ret;
-       }
-       if (!nvkm_secboot_is_managed(device->secboot,
-                                    NVKM_SECBOOT_FALCON_GPCCS)) {
-               if ((ret = gf100_gr_ctor_fw(gr, "gr/gpccs_inst", &gr->fuc41ac)) ||
-                   (ret = gf100_gr_ctor_fw(gr, "gr/gpccs_data", &gr->fuc41ad)))
-                       return ret;
-       }
-
-       if ((ret = gk20a_gr_av_to_init(gr, "gr/sw_nonctx", &gr->fuc_sw_nonctx)) ||
-           (ret = gk20a_gr_aiv_to_init(gr, "gr/sw_ctx", &gr->fuc_sw_ctx)) ||
-           (ret = gk20a_gr_av_to_init(gr, "gr/sw_bundle_init", &gr->fuc_bundle)) ||
-           (ret = gk20a_gr_av_to_method(gr, "gr/sw_method_init", &gr->fuc_method)))
-               return ret;
-
-       return 0;
-}
-
 static const struct gf100_gr_func
 gm200_gr = {
        .oneinit_tiles = gm200_gr_oneinit_tiles,
@@ -198,7 +210,77 @@ gm200_gr = {
 };
 
 int
+gm200_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+{
+       int ret;
+
+       ret = nvkm_acr_lsfw_load_bl_inst_data_sig(&gr->base.engine.subdev,
+                                                 &gr->fecs.falcon,
+                                                 NVKM_ACR_LSF_FECS,
+                                                 "gr/fecs_", ver, fwif->fecs);
+       if (ret)
+               return ret;
+
+       ret = nvkm_acr_lsfw_load_bl_inst_data_sig(&gr->base.engine.subdev,
+                                                 &gr->gpccs.falcon,
+                                                 NVKM_ACR_LSF_GPCCS,
+                                                 "gr/gpccs_", ver,
+                                                 fwif->gpccs);
+       if (ret)
+               return ret;
+
+       gr->firmware = true;
+
+       return gk20a_gr_load_sw(gr, "gr/", ver);
+}
+
+MODULE_FIRMWARE("nvidia/gm200/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gm200/gr/sw_method_init.bin");
+
+MODULE_FIRMWARE("nvidia/gm204/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gm204/gr/sw_method_init.bin");
+
+MODULE_FIRMWARE("nvidia/gm206/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gm206/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gm200_gr_fwif[] = {
+       { 0, gm200_gr_load, &gm200_gr, &gm200_gr_fecs_acr, &gm200_gr_gpccs_acr },
+       {}
+};
+
+int
 gm200_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gm200_gr, device, index, pgr);
+       return gf100_gr_new_(gm200_gr_fwif, device, index, pgr);
 }
index a667770..09d8c5d 100644
 #include "gf100.h"
 #include "ctxgf100.h"
 
+#include <core/firmware.h>
+#include <subdev/acr.h>
 #include <subdev/timer.h>
 
+#include <nvfw/flcn.h>
+
 #include <nvif/class.h>
 
+void
+gm20b_gr_acr_bld_patch(struct nvkm_acr *acr, u32 bld, s64 adjust)
+{
+       struct flcn_bl_dmem_desc hdr;
+       u64 addr;
+
+       nvkm_robj(acr->wpr, bld, &hdr, sizeof(hdr));
+       addr = ((u64)hdr.code_dma_base1 << 40 | hdr.code_dma_base << 8);
+       hdr.code_dma_base  = lower_32_bits((addr + adjust) >> 8);
+       hdr.code_dma_base1 = upper_32_bits((addr + adjust) >> 8);
+       addr = ((u64)hdr.data_dma_base1 << 40 | hdr.data_dma_base << 8);
+       hdr.data_dma_base  = lower_32_bits((addr + adjust) >> 8);
+       hdr.data_dma_base1 = upper_32_bits((addr + adjust) >> 8);
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+
+       flcn_bl_dmem_desc_dump(&acr->subdev, &hdr);
+}
+
+void
+gm20b_gr_acr_bld_write(struct nvkm_acr *acr, u32 bld,
+                      struct nvkm_acr_lsfw *lsfw)
+{
+       const u64 base = lsfw->offset.img + lsfw->app_start_offset;
+       const u64 code = (base + lsfw->app_resident_code_offset) >> 8;
+       const u64 data = (base + lsfw->app_resident_data_offset) >> 8;
+       const struct flcn_bl_dmem_desc hdr = {
+               .ctx_dma = FALCON_DMAIDX_UCODE,
+               .code_dma_base = lower_32_bits(code),
+               .non_sec_code_off = lsfw->app_resident_code_offset,
+               .non_sec_code_size = lsfw->app_resident_code_size,
+               .code_entry_point = lsfw->app_imem_entry,
+               .data_dma_base = lower_32_bits(data),
+               .data_size = lsfw->app_resident_data_size,
+               .code_dma_base1 = upper_32_bits(code),
+               .data_dma_base1 = upper_32_bits(data),
+       };
+
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+}
+
+const struct nvkm_acr_lsf_func
+gm20b_gr_fecs_acr = {
+       .bld_size = sizeof(struct flcn_bl_dmem_desc),
+       .bld_write = gm20b_gr_acr_bld_write,
+       .bld_patch = gm20b_gr_acr_bld_patch,
+};
+
 static void
 gm20b_gr_init_gpc_mmu(struct gf100_gr *gr)
 {
@@ -33,7 +84,7 @@ gm20b_gr_init_gpc_mmu(struct gf100_gr *gr)
        u32 val;
 
        /* Bypass MMU check for non-secure boot */
-       if (!device->secboot) {
+       if (!device->acr) {
                nvkm_wr32(device, 0x100ce4, 0xffffffff);
 
                if (nvkm_rd32(device, 0x100ce4) != 0xffffffff)
@@ -85,8 +136,51 @@ gm20b_gr = {
        }
 };
 
+static int
+gm20b_gr_load(struct gf100_gr *gr, int ver, const struct gf100_gr_fwif *fwif)
+{
+       struct nvkm_subdev *subdev = &gr->base.engine.subdev;
+       int ret;
+
+       ret = nvkm_acr_lsfw_load_bl_inst_data_sig(subdev, &gr->fecs.falcon,
+                                                 NVKM_ACR_LSF_FECS,
+                                                 "gr/fecs_", ver, fwif->fecs);
+       if (ret)
+               return ret;
+
+
+       if (nvkm_firmware_load_blob(subdev, "gr/", "gpccs_inst", ver,
+                                   &gr->gpccs.inst) ||
+           nvkm_firmware_load_blob(subdev, "gr/", "gpccs_data", ver,
+                                   &gr->gpccs.data))
+               return -ENOENT;
+
+       gr->firmware = true;
+
+       return gk20a_gr_load_sw(gr, "gr/", ver);
+}
+
+#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gm20b/gr/sw_method_init.bin");
+#endif
+
+static const struct gf100_gr_fwif
+gm20b_gr_fwif[] = {
+       { 0, gm20b_gr_load, &gm20b_gr, &gm20b_gr_fecs_acr },
+       {}
+};
+
 int
 gm20b_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gm20b_gr, device, index, pgr);
+       return gf100_gr_new_(gm20b_gr_fwif, device, index, pgr);
 }
index 9d0521c..bd5d8cc 100644
@@ -135,8 +135,27 @@ gp100_gr = {
        }
 };
 
+MODULE_FIRMWARE("nvidia/gp100/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp100/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gp100_gr_fwif[] = {
+       { 0, gm200_gr_load, &gp100_gr, &gm200_gr_fecs_acr, &gm200_gr_gpccs_acr },
+       {}
+};
+
 int
 gp100_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gp100_gr, device, index, pgr);
+       return gf100_gr_new_(gp100_gr_fwif, device, index, pgr);
 }
index 37f7d73..7baf67f 100644
@@ -131,8 +131,27 @@ gp102_gr = {
        }
 };
 
+MODULE_FIRMWARE("nvidia/gp102/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp102/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gp102_gr_fwif[] = {
+       { 0, gm200_gr_load, &gp102_gr, &gm200_gr_fecs_acr, &gm200_gr_gpccs_acr },
+       {}
+};
+
 int
 gp102_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gp102_gr, device, index, pgr);
+       return gf100_gr_new_(gp102_gr_fwif, device, index, pgr);
 }
index 4573c91..d9b8ef8 100644
@@ -59,8 +59,40 @@ gp104_gr = {
        }
 };
 
+MODULE_FIRMWARE("nvidia/gp104/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp104/gr/sw_method_init.bin");
+
+MODULE_FIRMWARE("nvidia/gp106/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp106/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gp104_gr_fwif[] = {
+       { 0, gm200_gr_load, &gp104_gr, &gm200_gr_fecs_acr, &gm200_gr_gpccs_acr },
+       {}
+};
+
 int
 gp104_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gp104_gr, device, index, pgr);
+       return gf100_gr_new_(gp104_gr_fwif, device, index, pgr);
 }
index 812aba9..2b1ad55 100644
@@ -26,7 +26,7 @@
 
 #include <nvif/class.h>
 
-static const struct gf100_gr_func
+const struct gf100_gr_func
 gp107_gr = {
        .oneinit_tiles = gm200_gr_oneinit_tiles,
        .oneinit_sm_id = gm200_gr_oneinit_sm_id,
@@ -61,8 +61,27 @@ gp107_gr = {
        }
 };
 
+MODULE_FIRMWARE("nvidia/gp107/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp107/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gp107_gr_fwif[] = {
+       { 0, gm200_gr_load, &gp107_gr, &gm200_gr_fecs_acr, &gm200_gr_gpccs_acr },
+       {}
+};
+
 int
 gp107_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gp107_gr, device, index, pgr);
+       return gf100_gr_new_(gp107_gr_fwif, device, index, pgr);
 }
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gp108.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gp108.c
new file mode 100644
index 0000000..113e4c1
--- /dev/null
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "gf100.h"
+
+#include <subdev/acr.h>
+
+#include <nvfw/flcn.h>
+
+static void
+gp108_gr_acr_bld_patch(struct nvkm_acr *acr, u32 bld, s64 adjust)
+{
+       struct flcn_bl_dmem_desc_v2 hdr;
+       nvkm_robj(acr->wpr, bld, &hdr, sizeof(hdr));
+       hdr.code_dma_base = hdr.code_dma_base + adjust;
+       hdr.data_dma_base = hdr.data_dma_base + adjust;
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+       flcn_bl_dmem_desc_v2_dump(&acr->subdev, &hdr);
+}
+
+static void
+gp108_gr_acr_bld_write(struct nvkm_acr *acr, u32 bld,
+                      struct nvkm_acr_lsfw *lsfw)
+{
+       const u64 base = lsfw->offset.img + lsfw->app_start_offset;
+       const u64 code = base + lsfw->app_resident_code_offset;
+       const u64 data = base + lsfw->app_resident_data_offset;
+       const struct flcn_bl_dmem_desc_v2 hdr = {
+               .ctx_dma = FALCON_DMAIDX_UCODE,
+               .code_dma_base = code,
+               .non_sec_code_off = lsfw->app_resident_code_offset,
+               .non_sec_code_size = lsfw->app_resident_code_size,
+               .code_entry_point = lsfw->app_imem_entry,
+               .data_dma_base = data,
+               .data_size = lsfw->app_resident_data_size,
+       };
+
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+}
+
+const struct nvkm_acr_lsf_func
+gp108_gr_gpccs_acr = {
+       .flags = NVKM_ACR_LSF_FORCE_PRIV_LOAD,
+       .bld_size = sizeof(struct flcn_bl_dmem_desc_v2),
+       .bld_write = gp108_gr_acr_bld_write,
+       .bld_patch = gp108_gr_acr_bld_patch,
+};
+
+const struct nvkm_acr_lsf_func
+gp108_gr_fecs_acr = {
+       .bld_size = sizeof(struct flcn_bl_dmem_desc_v2),
+       .bld_write = gp108_gr_acr_bld_write,
+       .bld_patch = gp108_gr_acr_bld_patch,
+};
+
+MODULE_FIRMWARE("nvidia/gp108/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp108/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gp108_gr_fwif[] = {
+       { 0, gm200_gr_load, &gp107_gr, &gp108_gr_fecs_acr, &gp108_gr_gpccs_acr },
+       {}
+};
+
+int
+gp108_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
+{
+       return gf100_gr_new_(gp108_gr_fwif, device, index, pgr);
+}
index 303dced..a3db2a9 100644
 #include "gf100.h"
 #include "ctxgf100.h"
 
+#include <subdev/acr.h>
+
 #include <nvif/class.h>
 
+#include <nvfw/flcn.h>
+
+static const struct nvkm_acr_lsf_func
+gp10b_gr_gpccs_acr = {
+       .flags = NVKM_ACR_LSF_FORCE_PRIV_LOAD,
+       .bld_size = sizeof(struct flcn_bl_dmem_desc),
+       .bld_write = gm20b_gr_acr_bld_write,
+       .bld_patch = gm20b_gr_acr_bld_patch,
+};
+
 static const struct gf100_gr_func
 gp10b_gr = {
        .oneinit_tiles = gm200_gr_oneinit_tiles,
@@ -59,8 +71,29 @@ gp10b_gr = {
        }
 };
 
+#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC)
+MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gp10b/gr/sw_method_init.bin");
+#endif
+
+static const struct gf100_gr_fwif
+gp10b_gr_fwif[] = {
+       { 0, gm200_gr_load, &gp10b_gr, &gm20b_gr_fecs_acr, &gp10b_gr_gpccs_acr },
+       {}
+};
+
 int
 gp10b_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gp10b_gr, device, index, pgr);
+       return gf100_gr_new_(gp10b_gr_fwif, device, index, pgr);
 }
index 3b33277..70639d8 100644
@@ -45,7 +45,7 @@ gv100_gr_trap_sm(struct gf100_gr *gr, int gpc, int tpc, int sm)
        nvkm_wr32(device, TPC_UNIT(gpc, tpc, 0x734 + sm * 0x80), gerr);
 }
 
-static void
+void
 gv100_gr_trap_mp(struct gf100_gr *gr, int gpc, int tpc)
 {
        gv100_gr_trap_sm(gr, gpc, tpc, 0);
@@ -59,7 +59,7 @@ gv100_gr_init_4188a4(struct gf100_gr *gr)
        nvkm_mask(device, 0x4188a4, 0x03000000, 0x03000000);
 }
 
-static void
+void
 gv100_gr_init_shader_exceptions(struct gf100_gr *gr, int gpc, int tpc)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
@@ -71,14 +71,14 @@ gv100_gr_init_shader_exceptions(struct gf100_gr *gr, int gpc, int tpc)
        }
 }
 
-static void
+void
 gv100_gr_init_504430(struct gf100_gr *gr, int gpc, int tpc)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
        nvkm_wr32(device, TPC_UNIT(gpc, tpc, 0x430), 0x403f0000);
 }
 
-static void
+void
 gv100_gr_init_419bd8(struct gf100_gr *gr)
 {
        struct nvkm_device *device = gr->base.engine.subdev.device;
@@ -120,8 +120,27 @@ gv100_gr = {
        }
 };
 
+MODULE_FIRMWARE("nvidia/gv100/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/gv100/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+gv100_gr_fwif[] = {
+       { 0, gm200_gr_load, &gv100_gr, &gp108_gr_fecs_acr, &gp108_gr_gpccs_acr },
+       {}
+};
+
 int
 gv100_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
 {
-       return gm200_gr_new_(&gv100_gr, device, index, pgr);
+       return gf100_gr_new_(gv100_gr_fwif, device, index, pgr);
 }
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/tu102.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/tu102.c
new file mode 100644
index 0000000..454668b
--- /dev/null
@@ -0,0 +1,177 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "gf100.h"
+#include "ctxgf100.h"
+
+#include <nvif/class.h>
+
+static void
+tu102_gr_init_fecs_exceptions(struct gf100_gr *gr)
+{
+       nvkm_wr32(gr->base.engine.subdev.device, 0x409c24, 0x006f0002);
+}
+
+static void
+tu102_gr_init_fs(struct gf100_gr *gr)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+       int sm;
+
+       gp100_grctx_generate_smid_config(gr);
+       gk104_grctx_generate_gpc_tpc_nr(gr);
+
+       for (sm = 0; sm < gr->sm_nr; sm++) {
+               nvkm_wr32(device, GPC_UNIT(gr->sm[sm].gpc, 0x0c10 +
+                                          gr->sm[sm].tpc * 4), sm);
+       }
+
+       gm200_grctx_generate_dist_skip_table(gr);
+       gf100_gr_init_num_tpc_per_gpc(gr, true, true);
+}
+
+static void
+tu102_gr_init_zcull(struct gf100_gr *gr)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+       const u32 magicgpc918 = DIV_ROUND_UP(0x00800000, gr->tpc_total);
+       const u8 tile_nr = ALIGN(gr->tpc_total, 64);
+       u8 bank[GPC_MAX] = {}, gpc, i, j;
+       u32 data;
+
+       for (i = 0; i < tile_nr; i += 8) {
+               for (data = 0, j = 0; j < 8 && i + j < gr->tpc_total; j++) {
+                       data |= bank[gr->tile[i + j]] << (j * 4);
+                       bank[gr->tile[i + j]]++;
+               }
+               nvkm_wr32(device, GPC_BCAST(0x0980 + ((i / 8) * 4)), data);
+       }
+
+       for (gpc = 0; gpc < gr->gpc_nr; gpc++) {
+               nvkm_wr32(device, GPC_UNIT(gpc, 0x0914),
+                         gr->screen_tile_row_offset << 8 | gr->tpc_nr[gpc]);
+               nvkm_wr32(device, GPC_UNIT(gpc, 0x0910), 0x00040000 |
+                                                        gr->tpc_total);
+               nvkm_wr32(device, GPC_UNIT(gpc, 0x0918), magicgpc918);
+       }
+
+       nvkm_wr32(device, GPC_BCAST(0x3fd4), magicgpc918);
+}
+
+static void
+tu102_gr_init_gpc_mmu(struct gf100_gr *gr)
+{
+       struct nvkm_device *device = gr->base.engine.subdev.device;
+
+       nvkm_wr32(device, 0x418880, nvkm_rd32(device, 0x100c80) & 0xf8001fff);
+       nvkm_wr32(device, 0x418890, 0x00000000);
+       nvkm_wr32(device, 0x418894, 0x00000000);
+
+       nvkm_wr32(device, 0x4188b4, nvkm_rd32(device, 0x100cc8));
+       nvkm_wr32(device, 0x4188b8, nvkm_rd32(device, 0x100ccc));
+       nvkm_wr32(device, 0x4188b0, nvkm_rd32(device, 0x100cc4));
+}
+
+static const struct gf100_gr_func
+tu102_gr = {
+       .oneinit_tiles = gm200_gr_oneinit_tiles,
+       .oneinit_sm_id = gm200_gr_oneinit_sm_id,
+       .init = gf100_gr_init,
+       .init_419bd8 = gv100_gr_init_419bd8,
+       .init_gpc_mmu = tu102_gr_init_gpc_mmu,
+       .init_vsc_stream_master = gk104_gr_init_vsc_stream_master,
+       .init_zcull = tu102_gr_init_zcull,
+       .init_num_active_ltcs = gf100_gr_init_num_active_ltcs,
+       .init_rop_active_fbps = gp100_gr_init_rop_active_fbps,
+       .init_swdx_pes_mask = gp102_gr_init_swdx_pes_mask,
+       .init_fs = tu102_gr_init_fs,
+       .init_fecs_exceptions = tu102_gr_init_fecs_exceptions,
+       .init_ds_hww_esr_2 = gm200_gr_init_ds_hww_esr_2,
+       .init_sked_hww_esr = gk104_gr_init_sked_hww_esr,
+       .init_ppc_exceptions = gk104_gr_init_ppc_exceptions,
+       .init_504430 = gv100_gr_init_504430,
+       .init_shader_exceptions = gv100_gr_init_shader_exceptions,
+       .trap_mp = gv100_gr_trap_mp,
+       .rops = gm200_gr_rops,
+       .gpc_nr = 6,
+       .tpc_nr = 5,
+       .ppc_nr = 3,
+       .grctx = &tu102_grctx,
+       .zbc = &gp102_gr_zbc,
+       .sclass = {
+               { -1, -1, FERMI_TWOD_A },
+               { -1, -1, KEPLER_INLINE_TO_MEMORY_B },
+               { -1, -1, TURING_A, &gf100_fermi },
+               { -1, -1, TURING_COMPUTE_A },
+               {}
+       }
+};
+
+MODULE_FIRMWARE("nvidia/tu102/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/tu102/gr/sw_method_init.bin");
+
+MODULE_FIRMWARE("nvidia/tu104/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/tu104/gr/sw_method_init.bin");
+
+MODULE_FIRMWARE("nvidia/tu106/gr/fecs_bl.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/fecs_inst.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/fecs_data.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/fecs_sig.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/gpccs_bl.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/gpccs_inst.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/gpccs_data.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/gpccs_sig.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/sw_ctx.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/sw_nonctx.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/sw_bundle_init.bin");
+MODULE_FIRMWARE("nvidia/tu106/gr/sw_method_init.bin");
+
+static const struct gf100_gr_fwif
+tu102_gr_fwif[] = {
+       { 0, gm200_gr_load, &tu102_gr, &gp108_gr_fecs_acr, &gp108_gr_gpccs_acr },
+       {}
+};
+
+int
+tu102_gr_new(struct nvkm_device *device, int index, struct nvkm_gr **pgr)
+{
+       return gf100_gr_new_(tu102_gr_fwif, device, index, pgr);
+}
index cdf6318..9a0fd98 100644
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: MIT
 nvkm-y += nvkm/engine/nvdec/base.o
-nvkm-y += nvkm/engine/nvdec/gp102.o
+nvkm-y += nvkm/engine/nvdec/gm107.o
index 4a63581..9b23c1b 100644
  * DEALINGS IN THE SOFTWARE.
  */
 #include "priv.h"
-
-#include <subdev/top.h>
-#include <engine/falcon.h>
-
-static int
-nvkm_nvdec_oneinit(struct nvkm_engine *engine)
-{
-       struct nvkm_nvdec *nvdec = nvkm_nvdec(engine);
-       struct nvkm_subdev *subdev = &nvdec->engine.subdev;
-
-       nvdec->addr = nvkm_top_addr(subdev->device, subdev->index);
-       if (!nvdec->addr)
-               return -EINVAL;
-
-       /*XXX: fix naming of this when adding support for multiple-NVDEC */
-       return nvkm_falcon_v1_new(subdev, "NVDEC", nvdec->addr,
-                                 &nvdec->falcon);
-}
+#include <core/firmware.h>
 
 static void *
 nvkm_nvdec_dtor(struct nvkm_engine *engine)
 {
        struct nvkm_nvdec *nvdec = nvkm_nvdec(engine);
-       nvkm_falcon_del(&nvdec->falcon);
+       nvkm_falcon_dtor(&nvdec->falcon);
        return nvdec;
 }
 
 static const struct nvkm_engine_func
 nvkm_nvdec = {
        .dtor = nvkm_nvdec_dtor,
-       .oneinit = nvkm_nvdec_oneinit,
 };
 
 int
-nvkm_nvdec_new_(struct nvkm_device *device, int index,
-               struct nvkm_nvdec **pnvdec)
+nvkm_nvdec_new_(const struct nvkm_nvdec_fwif *fwif, struct nvkm_device *device,
+               int index, struct nvkm_nvdec **pnvdec)
 {
        struct nvkm_nvdec *nvdec;
+       int ret;
 
        if (!(nvdec = *pnvdec = kzalloc(sizeof(*nvdec), GFP_KERNEL)))
                return -ENOMEM;
 
-       return nvkm_engine_ctor(&nvkm_nvdec, device, index, true,
-                               &nvdec->engine);
+       ret = nvkm_engine_ctor(&nvkm_nvdec, device, index, true,
+                              &nvdec->engine);
+       if (ret)
+               return ret;
+
+       fwif = nvkm_firmware_load(&nvdec->engine.subdev, fwif, "Nvdec", nvdec);
+       if (IS_ERR(fwif))
+               return -ENODEV;
+
+       nvdec->func = fwif->func;
+
+       return nvkm_falcon_ctor(nvdec->func->flcn, &nvdec->engine.subdev,
+                               nvkm_subdev_name[index], 0, &nvdec->falcon);
 };
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  */
-
 #include "priv.h"
 
+static const struct nvkm_falcon_func
+gm107_nvdec_flcn = {
+       .debug = 0xd00,
+       .fbif = 0x600,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .bind_context = nvkm_falcon_v1_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = nvkm_falcon_v1_enable,
+       .disable = nvkm_falcon_v1_disable,
+};
+
+static const struct nvkm_nvdec_func
+gm107_nvdec = {
+       .flcn = &gm107_nvdec_flcn,
+};
+
+static int
+gm107_nvdec_nofw(struct nvkm_nvdec *nvdec, int ver,
+                const struct nvkm_nvdec_fwif *fwif)
+{
+       return 0;
+}
+
+static const struct nvkm_nvdec_fwif
+gm107_nvdec_fwif[] = {
+       { -1, gm107_nvdec_nofw, &gm107_nvdec },
+       {}
+};
+
 int
-gp102_nvdec_new(struct nvkm_device *device, int index,
+gm107_nvdec_new(struct nvkm_device *device, int index,
                struct nvkm_nvdec **pnvdec)
 {
-       return nvkm_nvdec_new_(device, index, pnvdec);
+       return nvkm_nvdec_new_(gm107_nvdec_fwif, device, index, pnvdec);
 }
index 57bfa3a..e14da8b 100644
@@ -3,5 +3,17 @@
 #define __NVKM_NVDEC_PRIV_H__
 #include <engine/nvdec.h>
 
-int nvkm_nvdec_new_(struct nvkm_device *, int, struct nvkm_nvdec **);
+struct nvkm_nvdec_func {
+       const struct nvkm_falcon_func *flcn;
+};
+
+struct nvkm_nvdec_fwif {
+       int version;
+       int (*load)(struct nvkm_nvdec *, int ver,
+                   const struct nvkm_nvdec_fwif *);
+       const struct nvkm_nvdec_func *func;
+};
+
+int nvkm_nvdec_new_(const struct nvkm_nvdec_fwif *fwif,
+                   struct nvkm_device *, int, struct nvkm_nvdec **);
 #endif
index f316de8..75bf443 100644
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: MIT
-#nvkm-y += nvkm/engine/nvenc/base.o
+nvkm-y += nvkm/engine/nvenc/base.o
+nvkm-y += nvkm/engine/nvenc/gm107.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/nvenc/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/nvenc/base.c
new file mode 100644
index 0000000..484100e
--- /dev/null
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/firmware.h>
+
+static void *
+nvkm_nvenc_dtor(struct nvkm_engine *engine)
+{
+       struct nvkm_nvenc *nvenc = nvkm_nvenc(engine);
+       nvkm_falcon_dtor(&nvenc->falcon);
+       return nvenc;
+}
+
+static const struct nvkm_engine_func
+nvkm_nvenc = {
+       .dtor = nvkm_nvenc_dtor,
+};
+
+int
+nvkm_nvenc_new_(const struct nvkm_nvenc_fwif *fwif, struct nvkm_device *device,
+               int index, struct nvkm_nvenc **pnvenc)
+{
+       struct nvkm_nvenc *nvenc;
+       int ret;
+
+       if (!(nvenc = *pnvenc = kzalloc(sizeof(*nvenc), GFP_KERNEL)))
+               return -ENOMEM;
+
+       ret = nvkm_engine_ctor(&nvkm_nvenc, device, index, true,
+                              &nvenc->engine);
+       if (ret)
+               return ret;
+
+       fwif = nvkm_firmware_load(&nvenc->engine.subdev, fwif, "Nvenc", nvenc);
+       if (IS_ERR(fwif))
+               return -ENODEV;
+
+       nvenc->func = fwif->func;
+
+       return nvkm_falcon_ctor(nvenc->func->flcn, &nvenc->engine.subdev,
+                               nvkm_subdev_name[index], 0, &nvenc->falcon);
+};
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/nvenc/gm107.c b/drivers/gpu/drm/nouveau/nvkm/engine/nvenc/gm107.c
new file mode 100644
index 0000000..d249c8f
--- /dev/null
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "priv.h"
+
+static const struct nvkm_falcon_func
+gm107_nvenc_flcn = {
+       .fbif = 0x800,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .bind_context = nvkm_falcon_v1_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = nvkm_falcon_v1_enable,
+       .disable = nvkm_falcon_v1_disable,
+};
+
+static const struct nvkm_nvenc_func
+gm107_nvenc = {
+       .flcn = &gm107_nvenc_flcn,
+};
+
+static int
+gm107_nvenc_nofw(struct nvkm_nvenc *nvenc, int ver,
+                const struct nvkm_nvenc_fwif *fwif)
+{
+       return 0;
+}
+
+static const struct nvkm_nvenc_fwif
+gm107_nvenc_fwif[] = {
+       { -1, gm107_nvenc_nofw, &gm107_nvenc },
+       {}
+};
+
+int
+gm107_nvenc_new(struct nvkm_device *device, int index,
+               struct nvkm_nvenc **pnvenc)
+{
+       return nvkm_nvenc_new_(gm107_nvenc_fwif, device, index, pnvenc);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/nvenc/priv.h b/drivers/gpu/drm/nouveau/nvkm/engine/nvenc/priv.h
new file mode 100644
index 0000000..100fa5e
--- /dev/null
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVKM_NVENC_PRIV_H__
+#define __NVKM_NVENC_PRIV_H__
+#include <engine/nvenc.h>
+
+struct nvkm_nvenc_func {
+       const struct nvkm_falcon_func *flcn;
+};
+
+struct nvkm_nvenc_fwif {
+       int version;
+       int (*load)(struct nvkm_nvenc *, int ver,
+                   const struct nvkm_nvenc_fwif *);
+       const struct nvkm_nvenc_func *func;
+};
+
+int nvkm_nvenc_new_(const struct nvkm_nvenc_fwif *, struct nvkm_device *,
+                   int, struct nvkm_nvenc **pnvenc);
+#endif
index 97c4696..63cd2be 100644
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: MIT
 nvkm-y += nvkm/engine/sec2/base.o
 nvkm-y += nvkm/engine/sec2/gp102.o
+nvkm-y += nvkm/engine/sec2/gp108.o
 nvkm-y += nvkm/engine/sec2/tu102.o
index 1b49e5b..41318aa 100644
  */
 #include "priv.h"
 
-#include <core/msgqueue.h>
+#include <core/firmware.h>
 #include <subdev/top.h>
-#include <engine/falcon.h>
-
-static void *
-nvkm_sec2_dtor(struct nvkm_engine *engine)
-{
-       struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
-       nvkm_msgqueue_del(&sec2->queue);
-       nvkm_falcon_del(&sec2->falcon);
-       return sec2;
-}
 
 static void
-nvkm_sec2_intr(struct nvkm_engine *engine)
+nvkm_sec2_recv(struct work_struct *work)
 {
-       struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
-       struct nvkm_subdev *subdev = &engine->subdev;
-       struct nvkm_device *device = subdev->device;
-       u32 disp = nvkm_rd32(device, sec2->addr + 0x01c);
-       u32 intr = nvkm_rd32(device, sec2->addr + 0x008) & disp & ~(disp >> 16);
-
-       if (intr & 0x00000040) {
-               schedule_work(&sec2->work);
-               nvkm_wr32(device, sec2->addr + 0x004, 0x00000040);
-               intr &= ~0x00000040;
-       }
+       struct nvkm_sec2 *sec2 = container_of(work, typeof(*sec2), work);
 
-       if (intr) {
-               nvkm_error(subdev, "unhandled intr %08x\n", intr);
-               nvkm_wr32(device, sec2->addr + 0x004, intr);
+       if (!sec2->initmsg_received) {
+               int ret = sec2->func->initmsg(sec2);
+               if (ret) {
+                       nvkm_error(&sec2->engine.subdev,
+                                  "error parsing init message: %d\n", ret);
+                       return;
+               }
 
+               sec2->initmsg_received = true;
        }
+
+       nvkm_falcon_msgq_recv(sec2->msgq);
 }
 
 static void
-nvkm_sec2_recv(struct work_struct *work)
+nvkm_sec2_intr(struct nvkm_engine *engine)
 {
-       struct nvkm_sec2 *sec2 = container_of(work, typeof(*sec2), work);
-
-       if (!sec2->queue) {
-               nvkm_warn(&sec2->engine.subdev,
-                         "recv function called while no firmware set!\n");
-               return;
-       }
-
-       nvkm_msgqueue_recv(sec2->queue);
+       struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
+       sec2->func->intr(sec2);
 }
 
-
 static int
-nvkm_sec2_oneinit(struct nvkm_engine *engine)
+nvkm_sec2_fini(struct nvkm_engine *engine, bool suspend)
 {
        struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
-       struct nvkm_subdev *subdev = &sec2->engine.subdev;
 
-       if (!sec2->addr) {
-               sec2->addr = nvkm_top_addr(subdev->device, subdev->index);
-               if (WARN_ON(!sec2->addr))
-                       return -EINVAL;
+       flush_work(&sec2->work);
+
+       if (suspend) {
+               nvkm_falcon_cmdq_fini(sec2->cmdq);
+               sec2->initmsg_received = false;
        }
 
-       return nvkm_falcon_v1_new(subdev, "SEC2", sec2->addr, &sec2->falcon);
+       return 0;
 }
 
-static int
-nvkm_sec2_fini(struct nvkm_engine *engine, bool suspend)
+static void *
+nvkm_sec2_dtor(struct nvkm_engine *engine)
 {
        struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
-       flush_work(&sec2->work);
-       return 0;
+       nvkm_falcon_msgq_del(&sec2->msgq);
+       nvkm_falcon_cmdq_del(&sec2->cmdq);
+       nvkm_falcon_qmgr_del(&sec2->qmgr);
+       nvkm_falcon_dtor(&sec2->falcon);
+       return sec2;
 }
 
 static const struct nvkm_engine_func
 nvkm_sec2 = {
        .dtor = nvkm_sec2_dtor,
-       .oneinit = nvkm_sec2_oneinit,
        .fini = nvkm_sec2_fini,
        .intr = nvkm_sec2_intr,
 };
 
 int
-nvkm_sec2_new_(struct nvkm_device *device, int index, u32 addr,
-              struct nvkm_sec2 **psec2)
+nvkm_sec2_new_(const struct nvkm_sec2_fwif *fwif, struct nvkm_device *device,
+              int index, u32 addr, struct nvkm_sec2 **psec2)
 {
        struct nvkm_sec2 *sec2;
+       int ret;
 
        if (!(sec2 = *psec2 = kzalloc(sizeof(*sec2), GFP_KERNEL)))
                return -ENOMEM;
-       sec2->addr = addr;
-       INIT_WORK(&sec2->work, nvkm_sec2_recv);
 
-       return nvkm_engine_ctor(&nvkm_sec2, device, index, true, &sec2->engine);
+       ret = nvkm_engine_ctor(&nvkm_sec2, device, index, true, &sec2->engine);
+       if (ret)
+               return ret;
+
+       fwif = nvkm_firmware_load(&sec2->engine.subdev, fwif, "Sec2", sec2);
+       if (IS_ERR(fwif))
+               return PTR_ERR(fwif);
+
+       sec2->func = fwif->func;
+
+       ret = nvkm_falcon_ctor(sec2->func->flcn, &sec2->engine.subdev,
+                              nvkm_subdev_name[index], addr, &sec2->falcon);
+       if (ret)
+               return ret;
+
+       if ((ret = nvkm_falcon_qmgr_new(&sec2->falcon, &sec2->qmgr)) ||
+           (ret = nvkm_falcon_cmdq_new(sec2->qmgr, "cmdq", &sec2->cmdq)) ||
+           (ret = nvkm_falcon_msgq_new(sec2->qmgr, "msgq", &sec2->msgq)))
+               return ret;
+
+       INIT_WORK(&sec2->work, nvkm_sec2_recv);
+       return 0;
 };
index 858cf27..368f2a0 100644 (file)
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  */
-
 #include "priv.h"
 
+#include <core/memory.h>
+#include <subdev/acr.h>
+#include <subdev/timer.h>
+
+#include <nvfw/flcn.h>
+#include <nvfw/sec2.h>
+
+static int
+gp102_sec2_acr_bootstrap_falcon_callback(void *priv, struct nv_falcon_msg *hdr)
+{
+       struct nv_sec2_acr_bootstrap_falcon_msg *msg =
+               container_of(hdr, typeof(*msg), msg.hdr);
+       struct nvkm_subdev *subdev = priv;
+       const char *name = nvkm_acr_lsf_id(msg->falcon_id);
+
+       if (msg->error_code) {
+               nvkm_error(subdev, "ACR_BOOTSTRAP_FALCON failed for "
+                                  "falcon %d [%s]: %08x\n",
+                          msg->falcon_id, name, msg->error_code);
+               return -EINVAL;
+       }
+
+       nvkm_debug(subdev, "%s booted\n", name);
+       return 0;
+}
+
+static int
+gp102_sec2_acr_bootstrap_falcon(struct nvkm_falcon *falcon,
+                               enum nvkm_acr_lsf_id id)
+{
+       struct nvkm_sec2 *sec2 = container_of(falcon, typeof(*sec2), falcon);
+       struct nv_sec2_acr_bootstrap_falcon_cmd cmd = {
+               .cmd.hdr.unit_id = sec2->func->unit_acr,
+               .cmd.hdr.size = sizeof(cmd),
+               .cmd.cmd_type = NV_SEC2_ACR_CMD_BOOTSTRAP_FALCON,
+               .flags = NV_SEC2_ACR_BOOTSTRAP_FALCON_FLAGS_RESET_YES,
+               .falcon_id = id,
+       };
+
+       return nvkm_falcon_cmdq_send(sec2->cmdq, &cmd.cmd.hdr,
+                                    gp102_sec2_acr_bootstrap_falcon_callback,
+                                    &sec2->engine.subdev,
+                                    msecs_to_jiffies(1000));
+}
+
+static int
+gp102_sec2_acr_boot(struct nvkm_falcon *falcon)
+{
+       struct nv_sec2_args args = {};
+       nvkm_falcon_load_dmem(falcon, &args,
+                             falcon->func->emem_addr, sizeof(args), 0);
+       nvkm_falcon_start(falcon);
+       return 0;
+}
+
+static void
+gp102_sec2_acr_bld_patch(struct nvkm_acr *acr, u32 bld, s64 adjust)
+{
+       struct loader_config_v1 hdr;
+       nvkm_robj(acr->wpr, bld, &hdr, sizeof(hdr));
+       hdr.code_dma_base = hdr.code_dma_base + adjust;
+       hdr.data_dma_base = hdr.data_dma_base + adjust;
+       hdr.overlay_dma_base = hdr.overlay_dma_base + adjust;
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+       loader_config_v1_dump(&acr->subdev, &hdr);
+}
+
+static void
+gp102_sec2_acr_bld_write(struct nvkm_acr *acr, u32 bld,
+                        struct nvkm_acr_lsfw *lsfw)
+{
+       const struct loader_config_v1 hdr = {
+               .dma_idx = FALCON_SEC2_DMAIDX_UCODE,
+               .code_dma_base = lsfw->offset.img + lsfw->app_start_offset,
+               .code_size_total = lsfw->app_size,
+               .code_size_to_load = lsfw->app_resident_code_size,
+               .code_entry_point = lsfw->app_imem_entry,
+               .data_dma_base = lsfw->offset.img + lsfw->app_start_offset +
+                                lsfw->app_resident_data_offset,
+               .data_size = lsfw->app_resident_data_size,
+               .overlay_dma_base = lsfw->offset.img + lsfw->app_start_offset,
+               .argc = 1,
+               .argv = lsfw->falcon->func->emem_addr,
+       };
+
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+}
+
+static const struct nvkm_acr_lsf_func
+gp102_sec2_acr_0 = {
+       .bld_size = sizeof(struct loader_config_v1),
+       .bld_write = gp102_sec2_acr_bld_write,
+       .bld_patch = gp102_sec2_acr_bld_patch,
+       .boot = gp102_sec2_acr_boot,
+       .bootstrap_falcon = gp102_sec2_acr_bootstrap_falcon,
+};
+
+int
+gp102_sec2_initmsg(struct nvkm_sec2 *sec2)
+{
+       struct nv_sec2_init_msg msg;
+       int ret, i;
+
+       ret = nvkm_falcon_msgq_recv_initmsg(sec2->msgq, &msg, sizeof(msg));
+       if (ret)
+               return ret;
+
+       if (msg.hdr.unit_id != NV_SEC2_UNIT_INIT ||
+           msg.msg_type != NV_SEC2_INIT_MSG_INIT)
+               return -EINVAL;
+
+       for (i = 0; i < ARRAY_SIZE(msg.queue_info); i++) {
+               if (msg.queue_info[i].id == NV_SEC2_INIT_MSG_QUEUE_ID_MSGQ) {
+                       nvkm_falcon_msgq_init(sec2->msgq,
+                                             msg.queue_info[i].index,
+                                             msg.queue_info[i].offset,
+                                             msg.queue_info[i].size);
+               } else {
+                       nvkm_falcon_cmdq_init(sec2->cmdq,
+                                             msg.queue_info[i].index,
+                                             msg.queue_info[i].offset,
+                                             msg.queue_info[i].size);
+               }
+       }
+
+       return 0;
+}
+
+void
+gp102_sec2_intr(struct nvkm_sec2 *sec2)
+{
+       struct nvkm_subdev *subdev = &sec2->engine.subdev;
+       struct nvkm_falcon *falcon = &sec2->falcon;
+       u32 disp = nvkm_falcon_rd32(falcon, 0x01c);
+       u32 intr = nvkm_falcon_rd32(falcon, 0x008) & disp & ~(disp >> 16);
+
+       if (intr & 0x00000040) {
+               schedule_work(&sec2->work);
+               nvkm_falcon_wr32(falcon, 0x004, 0x00000040);
+               intr &= ~0x00000040;
+       }
+
+       if (intr) {
+               nvkm_error(subdev, "unhandled intr %08x\n", intr);
+               nvkm_falcon_wr32(falcon, 0x004, intr);
+       }
+}
+
+int
+gp102_sec2_flcn_enable(struct nvkm_falcon *falcon)
+{
+       nvkm_falcon_mask(falcon, 0x3c0, 0x00000001, 0x00000001);
+       udelay(10);
+       nvkm_falcon_mask(falcon, 0x3c0, 0x00000001, 0x00000000);
+       return nvkm_falcon_v1_enable(falcon);
+}
+
+void
+gp102_sec2_flcn_bind_context(struct nvkm_falcon *falcon,
+                            struct nvkm_memory *ctx)
+{
+       struct nvkm_device *device = falcon->owner->device;
+
+       nvkm_falcon_v1_bind_context(falcon, ctx);
+       if (!ctx)
+               return;
+
+       /* Not sure if this is a WAR for a HW issue, or some additional
+        * programming sequence that's needed to properly complete the
+        * context switch we trigger above.
+        *
+        * Fixes unreliability of booting the SEC2 RTOS on Quadro P620,
+        * particularly when resuming from suspend.
+        *
+        * Also removes the need for an odd workaround where we needed
+        * to program SEC2's FALCON_CPUCTL_ALIAS_STARTCPU twice before
+        * the SEC2 RTOS would begin executing.
+        */
+       nvkm_msec(device, 10,
+               u32 irqstat = nvkm_falcon_rd32(falcon, 0x008);
+               u32 flcn0dc = nvkm_falcon_rd32(falcon, 0x0dc);
+               if ((irqstat & 0x00000008) &&
+                   (flcn0dc & 0x00007000) == 0x00005000)
+                       break;
+       );
+
+       nvkm_falcon_mask(falcon, 0x004, 0x00000008, 0x00000008);
+       nvkm_falcon_mask(falcon, 0x058, 0x00000002, 0x00000002);
+
+       nvkm_msec(device, 10,
+               u32 flcn0dc = nvkm_falcon_rd32(falcon, 0x0dc);
+               if ((flcn0dc & 0x00007000) == 0x00000000)
+                       break;
+       );
+}
+
+static const struct nvkm_falcon_func
+gp102_sec2_flcn = {
+       .debug = 0x408,
+       .fbif = 0x600,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .emem_addr = 0x01000000,
+       .bind_context = gp102_sec2_flcn_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = gp102_sec2_flcn_enable,
+       .disable = nvkm_falcon_v1_disable,
+       .cmdq = { 0xa00, 0xa04, 8 },
+       .msgq = { 0xa30, 0xa34, 8 },
+};
+
+const struct nvkm_sec2_func
+gp102_sec2 = {
+       .flcn = &gp102_sec2_flcn,
+       .unit_acr = NV_SEC2_UNIT_ACR,
+       .intr = gp102_sec2_intr,
+       .initmsg = gp102_sec2_initmsg,
+};
+
+MODULE_FIRMWARE("nvidia/gp102/sec2/desc.bin");
+MODULE_FIRMWARE("nvidia/gp102/sec2/image.bin");
+MODULE_FIRMWARE("nvidia/gp102/sec2/sig.bin");
+MODULE_FIRMWARE("nvidia/gp104/sec2/desc.bin");
+MODULE_FIRMWARE("nvidia/gp104/sec2/image.bin");
+MODULE_FIRMWARE("nvidia/gp104/sec2/sig.bin");
+MODULE_FIRMWARE("nvidia/gp106/sec2/desc.bin");
+MODULE_FIRMWARE("nvidia/gp106/sec2/image.bin");
+MODULE_FIRMWARE("nvidia/gp106/sec2/sig.bin");
+MODULE_FIRMWARE("nvidia/gp107/sec2/desc.bin");
+MODULE_FIRMWARE("nvidia/gp107/sec2/image.bin");
+MODULE_FIRMWARE("nvidia/gp107/sec2/sig.bin");
+
+static void
+gp102_sec2_acr_bld_patch_1(struct nvkm_acr *acr, u32 bld, s64 adjust)
+{
+       struct flcn_bl_dmem_desc_v2 hdr;
+       nvkm_robj(acr->wpr, bld, &hdr, sizeof(hdr));
+       hdr.code_dma_base = hdr.code_dma_base + adjust;
+       hdr.data_dma_base = hdr.data_dma_base + adjust;
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+       flcn_bl_dmem_desc_v2_dump(&acr->subdev, &hdr);
+}
+
+static void
+gp102_sec2_acr_bld_write_1(struct nvkm_acr *acr, u32 bld,
+                          struct nvkm_acr_lsfw *lsfw)
+{
+       const struct flcn_bl_dmem_desc_v2 hdr = {
+               .ctx_dma = FALCON_SEC2_DMAIDX_UCODE,
+               .code_dma_base = lsfw->offset.img + lsfw->app_start_offset,
+               .non_sec_code_off = lsfw->app_resident_code_offset,
+               .non_sec_code_size = lsfw->app_resident_code_size,
+               .code_entry_point = lsfw->app_imem_entry,
+               .data_dma_base = lsfw->offset.img + lsfw->app_start_offset +
+                                lsfw->app_resident_data_offset,
+               .data_size = lsfw->app_resident_data_size,
+               .argc = 1,
+               .argv = lsfw->falcon->func->emem_addr,
+       };
+
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+}
+
+const struct nvkm_acr_lsf_func
+gp102_sec2_acr_1 = {
+       .bld_size = sizeof(struct flcn_bl_dmem_desc_v2),
+       .bld_write = gp102_sec2_acr_bld_write_1,
+       .bld_patch = gp102_sec2_acr_bld_patch_1,
+       .boot = gp102_sec2_acr_boot,
+       .bootstrap_falcon = gp102_sec2_acr_bootstrap_falcon,
+};
+
+int
+gp102_sec2_load(struct nvkm_sec2 *sec2, int ver,
+               const struct nvkm_sec2_fwif *fwif)
+{
+       return nvkm_acr_lsfw_load_sig_image_desc_v1(&sec2->engine.subdev,
+                                                   &sec2->falcon,
+                                                   NVKM_ACR_LSF_SEC2, "sec2/",
+                                                   ver, fwif->acr);
+}
+
+MODULE_FIRMWARE("nvidia/gp102/sec2/desc-1.bin");
+MODULE_FIRMWARE("nvidia/gp102/sec2/image-1.bin");
+MODULE_FIRMWARE("nvidia/gp102/sec2/sig-1.bin");
+MODULE_FIRMWARE("nvidia/gp104/sec2/desc-1.bin");
+MODULE_FIRMWARE("nvidia/gp104/sec2/image-1.bin");
+MODULE_FIRMWARE("nvidia/gp104/sec2/sig-1.bin");
+MODULE_FIRMWARE("nvidia/gp106/sec2/desc-1.bin");
+MODULE_FIRMWARE("nvidia/gp106/sec2/image-1.bin");
+MODULE_FIRMWARE("nvidia/gp106/sec2/sig-1.bin");
+MODULE_FIRMWARE("nvidia/gp107/sec2/desc-1.bin");
+MODULE_FIRMWARE("nvidia/gp107/sec2/image-1.bin");
+MODULE_FIRMWARE("nvidia/gp107/sec2/sig-1.bin");
+
+static const struct nvkm_sec2_fwif
+gp102_sec2_fwif[] = {
+       { 1, gp102_sec2_load, &gp102_sec2, &gp102_sec2_acr_1 },
+       { 0, gp102_sec2_load, &gp102_sec2, &gp102_sec2_acr_0 },
+       {}
+};
+
 int
-gp102_sec2_new(struct nvkm_device *device, int index,
-              struct nvkm_sec2 **psec2)
+gp102_sec2_new(struct nvkm_device *device, int index, struct nvkm_sec2 **psec2)
 {
-       return nvkm_sec2_new_(device, index, 0, psec2);
+       return nvkm_sec2_new_(gp102_sec2_fwif, device, index, 0, psec2);
 }
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
+ * Copyright 2019 Red Hat Inc.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
  * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
  */
+#include "priv.h"
+#include <subdev/acr.h>
 
-#ifndef __NVKM_SECBOOT_ACR_R367_H__
-#define __NVKM_SECBOOT_ACR_R367_H__
+MODULE_FIRMWARE("nvidia/gp108/sec2/desc.bin");
+MODULE_FIRMWARE("nvidia/gp108/sec2/image.bin");
+MODULE_FIRMWARE("nvidia/gp108/sec2/sig.bin");
 
-#include "acr_r352.h"
+static const struct nvkm_sec2_fwif
+gp108_sec2_fwif[] = {
+       { 0, gp102_sec2_load, &gp102_sec2, &gp102_sec2_acr_1 },
+       {}
+};
 
-void acr_r367_fixup_hs_desc(struct acr_r352 *, struct nvkm_secboot *, void *);
-
-struct ls_ucode_img *acr_r367_ls_ucode_img_load(const struct acr_r352 *,
-                                               const struct nvkm_secboot *,
-                                               enum nvkm_secboot_falcon);
-int acr_r367_ls_fill_headers(struct acr_r352 *, struct list_head *);
-int acr_r367_ls_write_wpr(struct acr_r352 *, struct list_head *,
-                         struct nvkm_gpuobj *, u64);
-#endif
+int
+gp108_sec2_new(struct nvkm_device *device, int index, struct nvkm_sec2 **psec2)
+{
+       return nvkm_sec2_new_(gp108_sec2_fwif, device, index, 0, psec2);
+}
index b331b00..bb88117 100644 (file)
@@ -3,7 +3,27 @@
 #define __NVKM_SEC2_PRIV_H__
 #include <engine/sec2.h>
 
-#define nvkm_sec2(p) container_of((p), struct nvkm_sec2, engine)
+struct nvkm_sec2_func {
+       const struct nvkm_falcon_func *flcn;
+       u8 unit_acr;
+       void (*intr)(struct nvkm_sec2 *);
+       int (*initmsg)(struct nvkm_sec2 *);
+};
 
-int nvkm_sec2_new_(struct nvkm_device *, int, u32 addr, struct nvkm_sec2 **);
+void gp102_sec2_intr(struct nvkm_sec2 *);
+int gp102_sec2_initmsg(struct nvkm_sec2 *);
+
+struct nvkm_sec2_fwif {
+       int version;
+       int (*load)(struct nvkm_sec2 *, int ver, const struct nvkm_sec2_fwif *);
+       const struct nvkm_sec2_func *func;
+       const struct nvkm_acr_lsf_func *acr;
+};
+
+int gp102_sec2_load(struct nvkm_sec2 *, int, const struct nvkm_sec2_fwif *);
+extern const struct nvkm_sec2_func gp102_sec2;
+extern const struct nvkm_acr_lsf_func gp102_sec2_acr_1;
+
+int nvkm_sec2_new_(const struct nvkm_sec2_fwif *, struct nvkm_device *,
+                  int, u32 addr, struct nvkm_sec2 **);
 #endif
index d655576..b6ebd95 100644 (file)
  * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
  * OTHER DEALINGS IN THE SOFTWARE.
  */
-
 #include "priv.h"
+#include <subdev/acr.h>
+
+static const struct nvkm_falcon_func
+tu102_sec2_flcn = {
+       .debug = 0x408,
+       .fbif = 0x600,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .emem_addr = 0x01000000,
+       .bind_context = gp102_sec2_flcn_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = nvkm_falcon_v1_enable,
+       .disable = nvkm_falcon_v1_disable,
+       .cmdq = { 0xc00, 0xc04, 8 },
+       .msgq = { 0xc80, 0xc84, 8 },
+};
+
+static const struct nvkm_sec2_func
+tu102_sec2 = {
+       .flcn = &tu102_sec2_flcn,
+       .unit_acr = 0x07,
+       .intr = gp102_sec2_intr,
+       .initmsg = gp102_sec2_initmsg,
+};
+
+static int
+tu102_sec2_nofw(struct nvkm_sec2 *sec2, int ver,
+               const struct nvkm_sec2_fwif *fwif)
+{
+       return 0;
+}
+
+static const struct nvkm_sec2_fwif
+tu102_sec2_fwif[] = {
+       {  0, gp102_sec2_load, &tu102_sec2, &gp102_sec2_acr_1 },
+       { -1, tu102_sec2_nofw, &tu102_sec2 }
+};
 
 int
-tu102_sec2_new(struct nvkm_device *device, int index,
-              struct nvkm_sec2 **psec2)
+tu102_sec2_new(struct nvkm_device *device, int index, struct nvkm_sec2 **psec2)
 {
        /* TOP info wasn't updated on Turing to reflect the PRI
         * address change for some reason.  We override it here.
         */
-       return nvkm_sec2_new_(device, index, 0x840000, psec2);
+       return nvkm_sec2_new_(tu102_sec2_fwif, device, index, 0x840000, psec2);
 }
index b5665ad..d79d783 100644 (file)
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: MIT
 nvkm-y += nvkm/falcon/base.o
+nvkm-y += nvkm/falcon/cmdq.o
+nvkm-y += nvkm/falcon/msgq.o
+nvkm-y += nvkm/falcon/qmgr.o
 nvkm-y += nvkm/falcon/v1.o
-nvkm-y += nvkm/falcon/msgqueue.o
-nvkm-y += nvkm/falcon/msgqueue_0137c63d.o
-nvkm-y += nvkm/falcon/msgqueue_0148cdec.o
index 366c87d..c6a3448 100644 (file)
@@ -22,6 +22,7 @@
 #include "priv.h"
 
 #include <subdev/mc.h>
+#include <subdev/top.h>
 
 void
 nvkm_falcon_load_imem(struct nvkm_falcon *falcon, void *data, u32 start,
@@ -134,6 +135,37 @@ nvkm_falcon_clear_interrupt(struct nvkm_falcon *falcon, u32 mask)
        return falcon->func->clear_interrupt(falcon, mask);
 }
 
+static int
+nvkm_falcon_oneinit(struct nvkm_falcon *falcon)
+{
+       const struct nvkm_falcon_func *func = falcon->func;
+       const struct nvkm_subdev *subdev = falcon->owner;
+       u32 reg;
+
+       if (!falcon->addr) {
+               falcon->addr = nvkm_top_addr(subdev->device, subdev->index);
+               if (WARN_ON(!falcon->addr))
+                       return -ENODEV;
+       }
+
+       reg = nvkm_falcon_rd32(falcon, 0x12c);
+       falcon->version = reg & 0xf;
+       falcon->secret = (reg >> 4) & 0x3;
+       falcon->code.ports = (reg >> 8) & 0xf;
+       falcon->data.ports = (reg >> 12) & 0xf;
+
+       reg = nvkm_falcon_rd32(falcon, 0x108);
+       falcon->code.limit = (reg & 0x1ff) << 8;
+       falcon->data.limit = (reg & 0x3fe00) >> 1;
+
+       if (func->debug) {
+               u32 val = nvkm_falcon_rd32(falcon, func->debug);
+               falcon->debug = (val >> 20) & 0x1;
+       }
+
+       return 0;
+}
+
 void
 nvkm_falcon_put(struct nvkm_falcon *falcon, const struct nvkm_subdev *user)
 {
@@ -151,6 +183,8 @@ nvkm_falcon_put(struct nvkm_falcon *falcon, const struct nvkm_subdev *user)
 int
 nvkm_falcon_get(struct nvkm_falcon *falcon, const struct nvkm_subdev *user)
 {
+       int ret = 0;
+
        mutex_lock(&falcon->mutex);
        if (falcon->user) {
                nvkm_error(user, "%s falcon already acquired by %s!\n",
@@ -160,70 +194,37 @@ nvkm_falcon_get(struct nvkm_falcon *falcon, const struct nvkm_subdev *user)
        }
 
        nvkm_debug(user, "acquired %s falcon\n", falcon->name);
+       if (!falcon->oneinit)
+               ret = nvkm_falcon_oneinit(falcon);
        falcon->user = user;
        mutex_unlock(&falcon->mutex);
-       return 0;
+       return ret;
 }
 
 void
+nvkm_falcon_dtor(struct nvkm_falcon *falcon)
+{
+}
+
+int
 nvkm_falcon_ctor(const struct nvkm_falcon_func *func,
                 struct nvkm_subdev *subdev, const char *name, u32 addr,
                 struct nvkm_falcon *falcon)
 {
-       u32 debug_reg;
-       u32 reg;
-
        falcon->func = func;
        falcon->owner = subdev;
        falcon->name = name;
        falcon->addr = addr;
        mutex_init(&falcon->mutex);
        mutex_init(&falcon->dmem_mutex);
-
-       reg = nvkm_falcon_rd32(falcon, 0x12c);
-       falcon->version = reg & 0xf;
-       falcon->secret = (reg >> 4) & 0x3;
-       falcon->code.ports = (reg >> 8) & 0xf;
-       falcon->data.ports = (reg >> 12) & 0xf;
-
-       reg = nvkm_falcon_rd32(falcon, 0x108);
-       falcon->code.limit = (reg & 0x1ff) << 8;
-       falcon->data.limit = (reg & 0x3fe00) >> 1;
-
-       switch (subdev->index) {
-       case NVKM_ENGINE_GR:
-               debug_reg = 0x0;
-               break;
-       case NVKM_SUBDEV_PMU:
-               debug_reg = 0xc08;
-               break;
-       case NVKM_ENGINE_NVDEC0:
-               debug_reg = 0xd00;
-               break;
-       case NVKM_ENGINE_SEC2:
-               debug_reg = 0x408;
-               falcon->has_emem = true;
-               break;
-       case NVKM_SUBDEV_GSP:
-               debug_reg = 0x0; /*XXX*/
-               break;
-       default:
-               nvkm_warn(subdev, "unsupported falcon %s!\n",
-                         nvkm_subdev_name[subdev->index]);
-               debug_reg = 0;
-               break;
-       }
-
-       if (debug_reg) {
-               u32 val = nvkm_falcon_rd32(falcon, debug_reg);
-               falcon->debug = (val >> 20) & 0x1;
-       }
+       return 0;
 }
 
 void
 nvkm_falcon_del(struct nvkm_falcon **pfalcon)
 {
        if (*pfalcon) {
+               nvkm_falcon_dtor(*pfalcon);
                kfree(*pfalcon);
                *pfalcon = NULL;
        }
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/cmdq.c b/drivers/gpu/drm/nouveau/nvkm/falcon/cmdq.c
new file mode 100644 (file)
index 0000000..40e3f3f
--- /dev/null
@@ -0,0 +1,214 @@
+/*
+ * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "qmgr.h"
+
+static bool
+nvkm_falcon_cmdq_has_room(struct nvkm_falcon_cmdq *cmdq, u32 size, bool *rewind)
+{
+       u32 head = nvkm_falcon_rd32(cmdq->qmgr->falcon, cmdq->head_reg);
+       u32 tail = nvkm_falcon_rd32(cmdq->qmgr->falcon, cmdq->tail_reg);
+       u32 free;
+
+       size = ALIGN(size, QUEUE_ALIGNMENT);
+
+       if (head >= tail) {
+               free = cmdq->offset + cmdq->size - head;
+               free -= HDR_SIZE;
+
+               if (size > free) {
+                       *rewind = true;
+                       head = cmdq->offset;
+               }
+       }
+
+       if (head < tail)
+               free = tail - head - 1;
+
+       return size <= free;
+}
+
+static void
+nvkm_falcon_cmdq_push(struct nvkm_falcon_cmdq *cmdq, void *data, u32 size)
+{
+       struct nvkm_falcon *falcon = cmdq->qmgr->falcon;
+       nvkm_falcon_load_dmem(falcon, data, cmdq->position, size, 0);
+       cmdq->position += ALIGN(size, QUEUE_ALIGNMENT);
+}
+
+static void
+nvkm_falcon_cmdq_rewind(struct nvkm_falcon_cmdq *cmdq)
+{
+       struct nv_falcon_cmd cmd;
+
+       cmd.unit_id = NV_FALCON_CMD_UNIT_ID_REWIND;
+       cmd.size = sizeof(cmd);
+       nvkm_falcon_cmdq_push(cmdq, &cmd, cmd.size);
+
+       cmdq->position = cmdq->offset;
+}
+
+static int
+nvkm_falcon_cmdq_open(struct nvkm_falcon_cmdq *cmdq, u32 size)
+{
+       struct nvkm_falcon *falcon = cmdq->qmgr->falcon;
+       bool rewind = false;
+
+       mutex_lock(&cmdq->mutex);
+
+       if (!nvkm_falcon_cmdq_has_room(cmdq, size, &rewind)) {
+               FLCNQ_DBG(cmdq, "queue full");
+               mutex_unlock(&cmdq->mutex);
+               return -EAGAIN;
+       }
+
+       cmdq->position = nvkm_falcon_rd32(falcon, cmdq->head_reg);
+
+       if (rewind)
+               nvkm_falcon_cmdq_rewind(cmdq);
+
+       return 0;
+}
+
+static void
+nvkm_falcon_cmdq_close(struct nvkm_falcon_cmdq *cmdq)
+{
+       nvkm_falcon_wr32(cmdq->qmgr->falcon, cmdq->head_reg, cmdq->position);
+       mutex_unlock(&cmdq->mutex);
+}
+
+static int
+nvkm_falcon_cmdq_write(struct nvkm_falcon_cmdq *cmdq, struct nv_falcon_cmd *cmd)
+{
+       static unsigned timeout = 2000;
+       unsigned long end_jiffies = jiffies + msecs_to_jiffies(timeout);
+       int ret = -EAGAIN;
+
+       while (ret == -EAGAIN && time_before(jiffies, end_jiffies))
+               ret = nvkm_falcon_cmdq_open(cmdq, cmd->size);
+       if (ret) {
+               FLCNQ_ERR(cmdq, "timeout waiting for queue space");
+               return ret;
+       }
+
+       nvkm_falcon_cmdq_push(cmdq, cmd, cmd->size);
+       nvkm_falcon_cmdq_close(cmdq);
+       return ret;
+}
+
+/* specifies that we want to know the command status in the answer message */
+#define CMD_FLAGS_STATUS BIT(0)
+/* specifies that we want an interrupt when the answer message is queued */
+#define CMD_FLAGS_INTR BIT(1)
+
+int
+nvkm_falcon_cmdq_send(struct nvkm_falcon_cmdq *cmdq, struct nv_falcon_cmd *cmd,
+                     nvkm_falcon_qmgr_callback cb, void *priv,
+                     unsigned long timeout)
+{
+       struct nvkm_falcon_qmgr_seq *seq;
+       int ret;
+
+       if (!wait_for_completion_timeout(&cmdq->ready,
+                                        msecs_to_jiffies(1000))) {
+               FLCNQ_ERR(cmdq, "timeout waiting for queue ready");
+               return -ETIMEDOUT;
+       }
+
+       seq = nvkm_falcon_qmgr_seq_acquire(cmdq->qmgr);
+       if (IS_ERR(seq))
+               return PTR_ERR(seq);
+
+       cmd->seq_id = seq->id;
+       cmd->ctrl_flags = CMD_FLAGS_STATUS | CMD_FLAGS_INTR;
+
+       seq->state = SEQ_STATE_USED;
+       seq->async = !timeout;
+       seq->callback = cb;
+       seq->priv = priv;
+
+       ret = nvkm_falcon_cmdq_write(cmdq, cmd);
+       if (ret) {
+               seq->state = SEQ_STATE_PENDING;
+               nvkm_falcon_qmgr_seq_release(cmdq->qmgr, seq);
+               return ret;
+       }
+
+       if (!seq->async) {
+               if (!wait_for_completion_timeout(&seq->done, timeout)) {
+                       FLCNQ_ERR(cmdq, "timeout waiting for reply");
+                       return -ETIMEDOUT;
+               }
+               ret = seq->result;
+               nvkm_falcon_qmgr_seq_release(cmdq->qmgr, seq);
+       }
+
+       return ret;
+}
+
+void
+nvkm_falcon_cmdq_fini(struct nvkm_falcon_cmdq *cmdq)
+{
+       reinit_completion(&cmdq->ready);
+}
+
+void
+nvkm_falcon_cmdq_init(struct nvkm_falcon_cmdq *cmdq,
+                     u32 index, u32 offset, u32 size)
+{
+       const struct nvkm_falcon_func *func = cmdq->qmgr->falcon->func;
+
+       cmdq->head_reg = func->cmdq.head + index * func->cmdq.stride;
+       cmdq->tail_reg = func->cmdq.tail + index * func->cmdq.stride;
+       cmdq->offset = offset;
+       cmdq->size = size;
+       complete_all(&cmdq->ready);
+
+       FLCNQ_DBG(cmdq, "initialised @ index %d offset 0x%08x size 0x%08x",
+                 index, cmdq->offset, cmdq->size);
+}
+
+void
+nvkm_falcon_cmdq_del(struct nvkm_falcon_cmdq **pcmdq)
+{
+       struct nvkm_falcon_cmdq *cmdq = *pcmdq;
+       if (cmdq) {
+               kfree(*pcmdq);
+               *pcmdq = NULL;
+       }
+}
+
+int
+nvkm_falcon_cmdq_new(struct nvkm_falcon_qmgr *qmgr, const char *name,
+                    struct nvkm_falcon_cmdq **pcmdq)
+{
+       struct nvkm_falcon_cmdq *cmdq = *pcmdq;
+
+       if (!(cmdq = *pcmdq = kzalloc(sizeof(*cmdq), GFP_KERNEL)))
+               return -ENOMEM;
+
+       cmdq->qmgr = qmgr;
+       cmdq->name = name;
+       mutex_init(&cmdq->mutex);
+       init_completion(&cmdq->ready);
+       return 0;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/msgq.c b/drivers/gpu/drm/nouveau/nvkm/falcon/msgq.c
new file mode 100644 (file)
index 0000000..cbfe09a
--- /dev/null
@@ -0,0 +1,213 @@
+/*
+ * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "qmgr.h"
+
+static void
+nvkm_falcon_msgq_open(struct nvkm_falcon_msgq *msgq)
+{
+       mutex_lock(&msgq->mutex);
+       msgq->position = nvkm_falcon_rd32(msgq->qmgr->falcon, msgq->tail_reg);
+}
+
+static void
+nvkm_falcon_msgq_close(struct nvkm_falcon_msgq *msgq, bool commit)
+{
+       struct nvkm_falcon *falcon = msgq->qmgr->falcon;
+
+       if (commit)
+               nvkm_falcon_wr32(falcon, msgq->tail_reg, msgq->position);
+
+       mutex_unlock(&msgq->mutex);
+}
+
+static bool
+nvkm_falcon_msgq_empty(struct nvkm_falcon_msgq *msgq)
+{
+       u32 head = nvkm_falcon_rd32(msgq->qmgr->falcon, msgq->head_reg);
+       u32 tail = nvkm_falcon_rd32(msgq->qmgr->falcon, msgq->tail_reg);
+       return head == tail;
+}
+
+static int
+nvkm_falcon_msgq_pop(struct nvkm_falcon_msgq *msgq, void *data, u32 size)
+{
+       struct nvkm_falcon *falcon = msgq->qmgr->falcon;
+       u32 head, tail, available;
+
+       head = nvkm_falcon_rd32(falcon, msgq->head_reg);
+       /* has the buffer looped? */
+       if (head < msgq->position)
+               msgq->position = msgq->offset;
+
+       tail = msgq->position;
+
+       available = head - tail;
+       if (size > available) {
+               FLCNQ_ERR(msgq, "requested %d bytes, but only %d available",
+                         size, available);
+               return -EINVAL;
+       }
+
+       nvkm_falcon_read_dmem(falcon, tail, size, 0, data);
+       msgq->position += ALIGN(size, QUEUE_ALIGNMENT);
+       return 0;
+}
+
+static int
+nvkm_falcon_msgq_read(struct nvkm_falcon_msgq *msgq, struct nv_falcon_msg *hdr)
+{
+       int ret = 0;
+
+       nvkm_falcon_msgq_open(msgq);
+
+       if (nvkm_falcon_msgq_empty(msgq))
+               goto close;
+
+       ret = nvkm_falcon_msgq_pop(msgq, hdr, HDR_SIZE);
+       if (ret) {
+               FLCNQ_ERR(msgq, "failed to read message header");
+               goto close;
+       }
+
+       if (hdr->size > MSG_BUF_SIZE) {
+               FLCNQ_ERR(msgq, "message too big, %d bytes", hdr->size);
+               ret = -ENOSPC;
+               goto close;
+       }
+
+       if (hdr->size > HDR_SIZE) {
+               u32 read_size = hdr->size - HDR_SIZE;
+
+               ret = nvkm_falcon_msgq_pop(msgq, (hdr + 1), read_size);
+               if (ret) {
+                       FLCNQ_ERR(msgq, "failed to read message data");
+                       goto close;
+               }
+       }
+
+       ret = 1;
+close:
+       nvkm_falcon_msgq_close(msgq, (ret >= 0));
+       return ret;
+}
+
+static int
+nvkm_falcon_msgq_exec(struct nvkm_falcon_msgq *msgq, struct nv_falcon_msg *hdr)
+{
+       struct nvkm_falcon_qmgr_seq *seq;
+
+       seq = &msgq->qmgr->seq.id[hdr->seq_id];
+       if (seq->state != SEQ_STATE_USED && seq->state != SEQ_STATE_CANCELLED) {
+               FLCNQ_ERR(msgq, "message for unknown sequence %08x", seq->id);
+               return -EINVAL;
+       }
+
+       if (seq->state == SEQ_STATE_USED) {
+               if (seq->callback)
+                       seq->result = seq->callback(seq->priv, hdr);
+       }
+
+       if (seq->async) {
+               nvkm_falcon_qmgr_seq_release(msgq->qmgr, seq);
+               return 0;
+       }
+
+       complete_all(&seq->done);
+       return 0;
+}
+
+void
+nvkm_falcon_msgq_recv(struct nvkm_falcon_msgq *msgq)
+{
+       /*
+        * We are invoked from a worker thread, so normally we have plenty of
+        * stack space to work with.
+        */
+       u8 msg_buffer[MSG_BUF_SIZE];
+       struct nv_falcon_msg *hdr = (void *)msg_buffer;
+
+       while (nvkm_falcon_msgq_read(msgq, hdr) > 0)
+               nvkm_falcon_msgq_exec(msgq, hdr);
+}
+
+int
+nvkm_falcon_msgq_recv_initmsg(struct nvkm_falcon_msgq *msgq,
+                             void *data, u32 size)
+{
+       struct nvkm_falcon *falcon = msgq->qmgr->falcon;
+       struct nv_falcon_msg *hdr = data;
+       int ret;
+
+       msgq->head_reg = falcon->func->msgq.head;
+       msgq->tail_reg = falcon->func->msgq.tail;
+       msgq->offset = nvkm_falcon_rd32(falcon, falcon->func->msgq.tail);
+
+       nvkm_falcon_msgq_open(msgq);
+       ret = nvkm_falcon_msgq_pop(msgq, data, size);
+       if (ret == 0 && hdr->size != size) {
+               FLCN_ERR(falcon, "unexpected init message size %d vs %d",
+                        hdr->size, size);
+               ret = -EINVAL;
+       }
+       nvkm_falcon_msgq_close(msgq, ret == 0);
+       return ret;
+}
+
+void
+nvkm_falcon_msgq_init(struct nvkm_falcon_msgq *msgq,
+                     u32 index, u32 offset, u32 size)
+{
+       const struct nvkm_falcon_func *func = msgq->qmgr->falcon->func;
+
+       msgq->head_reg = func->msgq.head + index * func->msgq.stride;
+       msgq->tail_reg = func->msgq.tail + index * func->msgq.stride;
+       msgq->offset = offset;
+
+       FLCNQ_DBG(msgq, "initialised @ index %d offset 0x%08x size 0x%08x",
+                 index, msgq->offset, size);
+}
+
+void
+nvkm_falcon_msgq_del(struct nvkm_falcon_msgq **pmsgq)
+{
+       struct nvkm_falcon_msgq *msgq = *pmsgq;
+       if (msgq) {
+               kfree(*pmsgq);
+               *pmsgq = NULL;
+       }
+}
+
+int
+nvkm_falcon_msgq_new(struct nvkm_falcon_qmgr *qmgr, const char *name,
+                    struct nvkm_falcon_msgq **pmsgq)
+{
+       struct nvkm_falcon_msgq *msgq = *pmsgq;
+
+       if (!(msgq = *pmsgq = kzalloc(sizeof(*msgq), GFP_KERNEL)))
+               return -ENOMEM;
+
+       msgq->qmgr = qmgr;
+       msgq->name = name;
+       mutex_init(&msgq->mutex);
+       return 0;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue.c b/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue.c
deleted file mode 100644 (file)
index a8bee1e..0000000
+++ /dev/null
@@ -1,577 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-
-#include "msgqueue.h"
-#include <engine/falcon.h>
-
-#include <subdev/secboot.h>
-
-
-#define HDR_SIZE sizeof(struct nvkm_msgqueue_hdr)
-#define QUEUE_ALIGNMENT 4
-/* max size of the messages we can receive */
-#define MSG_BUF_SIZE 128
-
-static int
-msg_queue_open(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-
-       mutex_lock(&queue->mutex);
-
-       queue->position = nvkm_falcon_rd32(falcon, queue->tail_reg);
-
-       return 0;
-}
-
-static void
-msg_queue_close(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-               bool commit)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-
-       if (commit)
-               nvkm_falcon_wr32(falcon, queue->tail_reg, queue->position);
-
-       mutex_unlock(&queue->mutex);
-}
-
-static bool
-msg_queue_empty(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-       u32 head, tail;
-
-       head = nvkm_falcon_rd32(falcon, queue->head_reg);
-       tail = nvkm_falcon_rd32(falcon, queue->tail_reg);
-
-       return head == tail;
-}
-
-static int
-msg_queue_pop(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-             void *data, u32 size)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       u32 head, tail, available;
-
-       head = nvkm_falcon_rd32(falcon, queue->head_reg);
-       /* has the buffer looped? */
-       if (head < queue->position)
-               queue->position = queue->offset;
-
-       tail = queue->position;
-
-       available = head - tail;
-
-       if (available == 0) {
-               nvkm_warn(subdev, "no message data available\n");
-               return 0;
-       }
-
-       if (size > available) {
-               nvkm_warn(subdev, "message data smaller than read request\n");
-               size = available;
-       }
-
-       nvkm_falcon_read_dmem(priv->falcon, tail, size, 0, data);
-       queue->position += ALIGN(size, QUEUE_ALIGNMENT);
-
-       return size;
-}
-
-static int
-msg_queue_read(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-              struct nvkm_msgqueue_hdr *hdr)
-{
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       int err;
-
-       err = msg_queue_open(priv, queue);
-       if (err) {
-               nvkm_error(subdev, "fail to open queue %d\n", queue->index);
-               return err;
-       }
-
-       if (msg_queue_empty(priv, queue)) {
-               err = 0;
-               goto close;
-       }
-
-       err = msg_queue_pop(priv, queue, hdr, HDR_SIZE);
-       if (err >= 0 && err != HDR_SIZE)
-               err = -EINVAL;
-       if (err < 0) {
-               nvkm_error(subdev, "failed to read message header: %d\n", err);
-               goto close;
-       }
-
-       if (hdr->size > MSG_BUF_SIZE) {
-               nvkm_error(subdev, "message too big (%d bytes)\n", hdr->size);
-               err = -ENOSPC;
-               goto close;
-       }
-
-       if (hdr->size > HDR_SIZE) {
-               u32 read_size = hdr->size - HDR_SIZE;
-
-               err = msg_queue_pop(priv, queue, (hdr + 1), read_size);
-               if (err >= 0 && err != read_size)
-                       err = -EINVAL;
-               if (err < 0) {
-                       nvkm_error(subdev, "failed to read message: %d\n", err);
-                       goto close;
-               }
-       }
-
-close:
-       msg_queue_close(priv, queue, (err >= 0));
-
-       return err;
-}
-
-static bool
-cmd_queue_has_room(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-                  u32 size, bool *rewind)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-       u32 head, tail, free;
-
-       size = ALIGN(size, QUEUE_ALIGNMENT);
-
-       head = nvkm_falcon_rd32(falcon, queue->head_reg);
-       tail = nvkm_falcon_rd32(falcon, queue->tail_reg);
-
-       if (head >= tail) {
-               free = queue->offset + queue->size - head;
-               free -= HDR_SIZE;
-
-               if (size > free) {
-                       *rewind = true;
-                       head = queue->offset;
-               }
-       }
-
-       if (head < tail)
-               free = tail - head - 1;
-
-       return size <= free;
-}
-
-static int
-cmd_queue_push(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-              void *data, u32 size)
-{
-       nvkm_falcon_load_dmem(priv->falcon, data, queue->position, size, 0);
-       queue->position += ALIGN(size, QUEUE_ALIGNMENT);
-
-       return 0;
-}
-
-/* REWIND unit is always 0x00 */
-#define MSGQUEUE_UNIT_REWIND 0x00
-
-static void
-cmd_queue_rewind(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue)
-{
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       struct nvkm_msgqueue_hdr cmd;
-       int err;
-
-       cmd.unit_id = MSGQUEUE_UNIT_REWIND;
-       cmd.size = sizeof(cmd);
-       err = cmd_queue_push(priv, queue, &cmd, cmd.size);
-       if (err)
-               nvkm_error(subdev, "queue %d rewind failed\n", queue->index);
-       else
-               nvkm_error(subdev, "queue %d rewinded\n", queue->index);
-
-       queue->position = queue->offset;
-}
-
-static int
-cmd_queue_open(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-              u32 size)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       bool rewind = false;
-
-       mutex_lock(&queue->mutex);
-
-       if (!cmd_queue_has_room(priv, queue, size, &rewind)) {
-               nvkm_error(subdev, "queue full\n");
-               mutex_unlock(&queue->mutex);
-               return -EAGAIN;
-       }
-
-       queue->position = nvkm_falcon_rd32(falcon, queue->head_reg);
-
-       if (rewind)
-               cmd_queue_rewind(priv, queue);
-
-       return 0;
-}
-
-static void
-cmd_queue_close(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_queue *queue,
-               bool commit)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-
-       if (commit)
-               nvkm_falcon_wr32(falcon, queue->head_reg, queue->position);
-
-       mutex_unlock(&queue->mutex);
-}
-
-static int
-cmd_write(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_hdr *cmd,
-         struct nvkm_msgqueue_queue *queue)
-{
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       static unsigned timeout = 2000;
-       unsigned long end_jiffies = jiffies + msecs_to_jiffies(timeout);
-       int ret = -EAGAIN;
-       bool commit = true;
-
-       while (ret == -EAGAIN && time_before(jiffies, end_jiffies))
-               ret = cmd_queue_open(priv, queue, cmd->size);
-       if (ret) {
-               nvkm_error(subdev, "pmu_queue_open_write failed\n");
-               return ret;
-       }
-
-       ret = cmd_queue_push(priv, queue, cmd, cmd->size);
-       if (ret) {
-               nvkm_error(subdev, "pmu_queue_push failed\n");
-               commit = false;
-       }
-
-       cmd_queue_close(priv, queue, commit);
-
-       return ret;
-}
-
-static struct nvkm_msgqueue_seq *
-msgqueue_seq_acquire(struct nvkm_msgqueue *priv)
-{
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       struct nvkm_msgqueue_seq *seq;
-       u32 index;
-
-       mutex_lock(&priv->seq_lock);
-
-       index = find_first_zero_bit(priv->seq_tbl, NVKM_MSGQUEUE_NUM_SEQUENCES);
-
-       if (index >= NVKM_MSGQUEUE_NUM_SEQUENCES) {
-               nvkm_error(subdev, "no free sequence available\n");
-               mutex_unlock(&priv->seq_lock);
-               return ERR_PTR(-EAGAIN);
-       }
-
-       set_bit(index, priv->seq_tbl);
-
-       mutex_unlock(&priv->seq_lock);
-
-       seq = &priv->seq[index];
-       seq->state = SEQ_STATE_PENDING;
-
-       return seq;
-}
-
-static void
-msgqueue_seq_release(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_seq *seq)
-{
-       /* no need to acquire seq_lock since clear_bit is atomic */
-       seq->state = SEQ_STATE_FREE;
-       seq->callback = NULL;
-       seq->completion = NULL;
-       clear_bit(seq->id, priv->seq_tbl);
-}
-
-/* specifies that we want to know the command status in the answer message */
-#define CMD_FLAGS_STATUS BIT(0)
-/* specifies that we want an interrupt when the answer message is queued */
-#define CMD_FLAGS_INTR BIT(1)
-
-int
-nvkm_msgqueue_post(struct nvkm_msgqueue *priv, enum msgqueue_msg_priority prio,
-                  struct nvkm_msgqueue_hdr *cmd, nvkm_msgqueue_callback cb,
-                  struct completion *completion, bool wait_init)
-{
-       struct nvkm_msgqueue_seq *seq;
-       struct nvkm_msgqueue_queue *queue;
-       int ret;
-
-       if (wait_init && !wait_for_completion_timeout(&priv->init_done,
-                                        msecs_to_jiffies(1000)))
-               return -ETIMEDOUT;
-
-       queue = priv->func->cmd_queue(priv, prio);
-       if (IS_ERR(queue))
-               return PTR_ERR(queue);
-
-       seq = msgqueue_seq_acquire(priv);
-       if (IS_ERR(seq))
-               return PTR_ERR(seq);
-
-       cmd->seq_id = seq->id;
-       cmd->ctrl_flags = CMD_FLAGS_STATUS | CMD_FLAGS_INTR;
-
-       seq->callback = cb;
-       seq->state = SEQ_STATE_USED;
-       seq->completion = completion;
-
-       ret = cmd_write(priv, cmd, queue);
-       if (ret) {
-               seq->state = SEQ_STATE_PENDING;
-               msgqueue_seq_release(priv, seq);
-       }
-
-       return ret;
-}
-
-static int
-msgqueue_msg_handle(struct nvkm_msgqueue *priv, struct nvkm_msgqueue_hdr *hdr)
-{
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       struct nvkm_msgqueue_seq *seq;
-
-       seq = &priv->seq[hdr->seq_id];
-       if (seq->state != SEQ_STATE_USED && seq->state != SEQ_STATE_CANCELLED) {
-               nvkm_error(subdev, "msg for unknown sequence %d", seq->id);
-               return -EINVAL;
-       }
-
-       if (seq->state == SEQ_STATE_USED) {
-               if (seq->callback)
-                       seq->callback(priv, hdr);
-       }
-
-       if (seq->completion)
-               complete(seq->completion);
-
-       msgqueue_seq_release(priv, seq);
-
-       return 0;
-}
-
-static int
-msgqueue_handle_init_msg(struct nvkm_msgqueue *priv,
-                        struct nvkm_msgqueue_hdr *hdr)
-{
-       struct nvkm_falcon *falcon = priv->falcon;
-       const struct nvkm_subdev *subdev = falcon->owner;
-       u32 tail;
-       u32 tail_reg;
-       int ret;
-
-       /*
-        * Of course the message queue registers vary depending on the falcon
-        * used...
-        */
-       switch (falcon->owner->index) {
-       case NVKM_SUBDEV_PMU:
-               tail_reg = 0x4cc;
-               break;
-       case NVKM_ENGINE_SEC2:
-               tail_reg = 0xa34;
-               break;
-       default:
-               nvkm_error(subdev, "falcon %s unsupported for msgqueue!\n",
-                          nvkm_subdev_name[falcon->owner->index]);
-               return -EINVAL;
-       }
-
-       /*
-        * Read the message - queues are not initialized yet so we cannot rely
-        * on msg_queue_read()
-        */
-       tail = nvkm_falcon_rd32(falcon, tail_reg);
-       nvkm_falcon_read_dmem(falcon, tail, HDR_SIZE, 0, hdr);
-
-       if (hdr->size > MSG_BUF_SIZE) {
-               nvkm_error(subdev, "message too big (%d bytes)\n", hdr->size);
-               return -ENOSPC;
-       }
-
-       nvkm_falcon_read_dmem(falcon, tail + HDR_SIZE, hdr->size - HDR_SIZE, 0,
-                             (hdr + 1));
-
-       tail += ALIGN(hdr->size, QUEUE_ALIGNMENT);
-       nvkm_falcon_wr32(falcon, tail_reg, tail);
-
-       ret = priv->func->init_func->init_callback(priv, hdr);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-void
-nvkm_msgqueue_process_msgs(struct nvkm_msgqueue *priv,
-                          struct nvkm_msgqueue_queue *queue)
-{
-       /*
-        * We are invoked from a worker thread, so normally we have plenty of
-        * stack space to work with.
-        */
-       u8 msg_buffer[MSG_BUF_SIZE];
-       struct nvkm_msgqueue_hdr *hdr = (void *)msg_buffer;
-       int ret;
-
-       /* the first message we receive must be the init message */
-       if ((!priv->init_msg_received)) {
-               ret = msgqueue_handle_init_msg(priv, hdr);
-               if (!ret)
-                       priv->init_msg_received = true;
-       } else {
-               while (msg_queue_read(priv, queue, hdr) > 0)
-                       msgqueue_msg_handle(priv, hdr);
-       }
-}
-
-void
-nvkm_msgqueue_write_cmdline(struct nvkm_msgqueue *queue, void *buf)
-{
-       if (!queue || !queue->func || !queue->func->init_func)
-               return;
-
-       queue->func->init_func->gen_cmdline(queue, buf);
-}
-
-int
-nvkm_msgqueue_acr_boot_falcons(struct nvkm_msgqueue *queue,
-                              unsigned long falcon_mask)
-{
-       unsigned long falcon;
-
-       if (!queue || !queue->func->acr_func)
-               return -ENODEV;
-
-       /* Does the firmware support booting multiple falcons? */
-       if (queue->func->acr_func->boot_multiple_falcons)
-               return queue->func->acr_func->boot_multiple_falcons(queue,
-                                                                  falcon_mask);
-
-       /* Else boot all requested falcons individually */
-       if (!queue->func->acr_func->boot_falcon)
-               return -ENODEV;
-
-       for_each_set_bit(falcon, &falcon_mask, NVKM_SECBOOT_FALCON_END) {
-               int ret = queue->func->acr_func->boot_falcon(queue, falcon);
-
-               if (ret)
-                       return ret;
-       }
-
-       return 0;
-}
-
-int
-nvkm_msgqueue_new(u32 version, struct nvkm_falcon *falcon,
-                 const struct nvkm_secboot *sb, struct nvkm_msgqueue **queue)
-{
-       const struct nvkm_subdev *subdev = falcon->owner;
-       int ret = -EINVAL;
-
-       switch (version) {
-       case 0x0137c63d:
-               ret = msgqueue_0137c63d_new(falcon, sb, queue);
-               break;
-       case 0x0137bca5:
-               ret = msgqueue_0137bca5_new(falcon, sb, queue);
-               break;
-       case 0x0148cdec:
-       case 0x015ccf3e:
-       case 0x0167d263:
-               ret = msgqueue_0148cdec_new(falcon, sb, queue);
-               break;
-       default:
-               nvkm_error(subdev, "unhandled firmware version 0x%08x\n",
-                          version);
-               break;
-       }
-
-       if (ret == 0) {
-               nvkm_debug(subdev, "firmware version: 0x%08x\n", version);
-               (*queue)->fw_version = version;
-       }
-
-       return ret;
-}
-
-void
-nvkm_msgqueue_del(struct nvkm_msgqueue **queue)
-{
-       if (*queue) {
-               (*queue)->func->dtor(*queue);
-               *queue = NULL;
-       }
-}
-
-void
-nvkm_msgqueue_recv(struct nvkm_msgqueue *queue)
-{
-       if (!queue->func || !queue->func->recv) {
-               const struct nvkm_subdev *subdev = queue->falcon->owner;
-
-               nvkm_warn(subdev, "missing msgqueue recv function\n");
-               return;
-       }
-
-       queue->func->recv(queue);
-}
-
-int
-nvkm_msgqueue_reinit(struct nvkm_msgqueue *queue)
-{
-       /* firmware not set yet... */
-       if (!queue)
-               return 0;
-
-       queue->init_msg_received = false;
-       reinit_completion(&queue->init_done);
-
-       return 0;
-}
-
-void
-nvkm_msgqueue_ctor(const struct nvkm_msgqueue_func *func,
-                  struct nvkm_falcon *falcon,
-                  struct nvkm_msgqueue *queue)
-{
-       int i;
-
-       queue->func = func;
-       queue->falcon = falcon;
-       mutex_init(&queue->seq_lock);
-       for (i = 0; i < NVKM_MSGQUEUE_NUM_SEQUENCES; i++)
-               queue->seq[i].id = i;
-
-       init_completion(&queue->init_done);
-
-
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue.h b/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue.h
deleted file mode 100644 (file)
index 13b54f8..0000000
+++ /dev/null
@@ -1,213 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-
-#ifndef __NVKM_CORE_FALCON_MSGQUEUE_H
-#define __NVKM_CORE_FALCON_MSGQUEUE_H
-
-#include <core/msgqueue.h>
-
-/*
- * The struct nvkm_msgqueue (named so for lack of better candidate) manages
- * a firmware (typically, NVIDIA signed firmware) running under a given falcon.
- *
- * Such firmwares expect to receive commands (through one or several command
- * queues) and will reply to such command by sending messages (using one
- * message queue).
- *
- * Each firmware can support one or several units - ACR for managing secure
- * falcons, PMU for power management, etc. A unit can be seen as a class to
- * which command can be sent.
- *
- * One usage example would be to send a command to the SEC falcon to ask it to
- * reset a secure falcon. The SEC falcon will receive the command, process it,
- * and send a message to signal success or failure. Only when the corresponding
- * message is received can the requester assume the request has been processed.
- *
- * Since we expect many variations between the firmwares NVIDIA will release
- * across GPU generations, this library is built in a very modular way. Message
- * formats and queues details (such as number of usage) are left to
- * specializations of struct nvkm_msgqueue, while the functions in msgqueue.c
- * take care of posting commands and processing messages in a fashion that is
- * universal.
- *
- */
-
-enum msgqueue_msg_priority {
-       MSGQUEUE_MSG_PRIORITY_HIGH,
-       MSGQUEUE_MSG_PRIORITY_LOW,
-};
-
-/**
- * struct nvkm_msgqueue_hdr - header for all commands/messages
- * @unit_id:   id of firmware using receiving the command/sending the message
- * @size:      total size of command/message
- * @ctrl_flags:        type of command/message
- * @seq_id:    used to match a message from its corresponding command
- */
-struct nvkm_msgqueue_hdr {
-       u8 unit_id;
-       u8 size;
-       u8 ctrl_flags;
-       u8 seq_id;
-};
-
-/**
- * struct nvkm_msgqueue_msg - base message.
- *
- * This is just a header and a message (or command) type. Useful when
- * building command-specific structures.
- */
-struct nvkm_msgqueue_msg {
-       struct nvkm_msgqueue_hdr hdr;
-       u8 msg_type;
-};
-
-struct nvkm_msgqueue;
-typedef void
-(*nvkm_msgqueue_callback)(struct nvkm_msgqueue *, struct nvkm_msgqueue_hdr *);
-
-/**
- * struct nvkm_msgqueue_init_func - msgqueue functions related to initialization
- *
- * @gen_cmdline:       build the commandline into a pre-allocated buffer
- * @init_callback:     called to process the init message
- */
-struct nvkm_msgqueue_init_func {
-       void (*gen_cmdline)(struct nvkm_msgqueue *, void *);
-       int (*init_callback)(struct nvkm_msgqueue *, struct nvkm_msgqueue_hdr *);
-};
-
-/**
- * struct nvkm_msgqueue_acr_func - msgqueue functions related to ACR
- *
- * @boot_falcon:       build and send the command to reset a given falcon
- * @boot_multiple_falcons: build and send the command to reset several falcons
- */
-struct nvkm_msgqueue_acr_func {
-       int (*boot_falcon)(struct nvkm_msgqueue *, enum nvkm_secboot_falcon);
-       int (*boot_multiple_falcons)(struct nvkm_msgqueue *, unsigned long);
-};
-
-struct nvkm_msgqueue_func {
-       const struct nvkm_msgqueue_init_func *init_func;
-       const struct nvkm_msgqueue_acr_func *acr_func;
-       void (*dtor)(struct nvkm_msgqueue *);
-       struct nvkm_msgqueue_queue *(*cmd_queue)(struct nvkm_msgqueue *,
-                                                enum msgqueue_msg_priority);
-       void (*recv)(struct nvkm_msgqueue *queue);
-};
-
-/**
- * struct nvkm_msgqueue_queue - information about a command or message queue
- *
- * The number of queues is firmware-dependent. All queues must have their
- * information filled by the init message handler.
- *
- * @mutex_lock:        to be acquired when the queue is being used
- * @index:     physical queue index
- * @offset:    DMEM offset where this queue begins
- * @size:      size allocated to this queue in DMEM (in bytes)
- * @position:  current write position
- * @head_reg:  address of the HEAD register for this queue
- * @tail_reg:  address of the TAIL register for this queue
- */
-struct nvkm_msgqueue_queue {
-       struct mutex mutex;
-       u32 index;
-       u32 offset;
-       u32 size;
-       u32 position;
-
-       u32 head_reg;
-       u32 tail_reg;
-};
-
-/**
- * struct nvkm_msgqueue_seq - keep track of ongoing commands
- *
- * Every time a command is sent, a sequence is assigned to it so the
- * corresponding message can be matched. Upon receiving the message, a callback
- * can be called and/or a completion signaled.
- *
- * @id:                sequence ID
- * @state:     current state
- * @callback:  callback to call upon receiving matching message
- * @completion:        completion to signal after callback is called
- */
-struct nvkm_msgqueue_seq {
-       u16 id;
-       enum {
-               SEQ_STATE_FREE = 0,
-               SEQ_STATE_PENDING,
-               SEQ_STATE_USED,
-               SEQ_STATE_CANCELLED
-       } state;
-       nvkm_msgqueue_callback callback;
-       struct completion *completion;
-};
-
-/*
- * We can have an arbitrary number of sequences, but realistically we will
- * probably not use that much simultaneously.
- */
-#define NVKM_MSGQUEUE_NUM_SEQUENCES 16
-
-/**
- * struct nvkm_msgqueue - manage a command/message based FW on a falcon
- *
- * @falcon:    falcon to be managed
- * @func:      implementation of the firmware to use
- * @init_msg_received: whether the init message has already been received
- * @init_done: whether all init is complete and commands can be processed
- * @seq_lock:  protects seq and seq_tbl
- * @seq:       sequences to match commands and messages
- * @seq_tbl:   bitmap of sequences currently in use
- */
-struct nvkm_msgqueue {
-       struct nvkm_falcon *falcon;
-       const struct nvkm_msgqueue_func *func;
-       u32 fw_version;
-       bool init_msg_received;
-       struct completion init_done;
-
-       struct mutex seq_lock;
-       struct nvkm_msgqueue_seq seq[NVKM_MSGQUEUE_NUM_SEQUENCES];
-       unsigned long seq_tbl[BITS_TO_LONGS(NVKM_MSGQUEUE_NUM_SEQUENCES)];
-};
-
-void nvkm_msgqueue_ctor(const struct nvkm_msgqueue_func *, struct nvkm_falcon *,
-                       struct nvkm_msgqueue *);
-int nvkm_msgqueue_post(struct nvkm_msgqueue *, enum msgqueue_msg_priority,
-                      struct nvkm_msgqueue_hdr *, nvkm_msgqueue_callback,
-                      struct completion *, bool);
-void nvkm_msgqueue_process_msgs(struct nvkm_msgqueue *,
-                               struct nvkm_msgqueue_queue *);
-
-int msgqueue_0137c63d_new(struct nvkm_falcon *, const struct nvkm_secboot *,
-                         struct nvkm_msgqueue **);
-int msgqueue_0137bca5_new(struct nvkm_falcon *, const struct nvkm_secboot *,
-                         struct nvkm_msgqueue **);
-int msgqueue_0148cdec_new(struct nvkm_falcon *, const struct nvkm_secboot *,
-                         struct nvkm_msgqueue **);
-
-#endif
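The `nvkm_msgqueue_queue` layout deleted above (a DMEM `offset` and `size`, a rolling write `position`, and HEAD/TAIL registers tracking the cursors) describes a plain ring buffer. A minimal user-space sketch of the bookkeeping, with hypothetical names standing in for the register accesses:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for nvkm_msgqueue_queue: a ring buffer inside a
 * flat backing buffer, with head (read) and tail (write) cursors given
 * as absolute offsets. */
struct ring_queue {
	uint32_t offset; /* start of the queue in the backing buffer */
	uint32_t size;   /* bytes allocated to the queue */
	uint32_t head;   /* consumer position */
	uint32_t tail;   /* producer position */
};

/* Bytes writable before the producer would overrun the consumer.  One
 * byte is kept free so head == tail always means "empty". */
static uint32_t ring_free(const struct ring_queue *q)
{
	if (q->tail >= q->head)
		return q->size - (q->tail - q->head) - 1;
	return q->head - q->tail - 1;
}

/* Append len bytes, wrapping the tail back to the queue's start when it
 * reaches the end of the allocated region. */
static int ring_push(struct ring_queue *q, const void *data, uint32_t len,
		     uint8_t *backing)
{
	if (len > ring_free(q))
		return -1;
	for (uint32_t i = 0; i < len; i++) {
		backing[q->tail++] = ((const uint8_t *)data)[i];
		if (q->tail == q->offset + q->size)
			q->tail = q->offset;
	}
	return 0;
}
```

In the firmware queues the two cursors live in the falcon's HEAD/TAIL registers rather than host memory, but the wrap and free-space arithmetic is the same.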
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue_0137c63d.c b/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue_0137c63d.c
deleted file mode 100644
index fec0273..0000000
+++ /dev/null
@@ -1,436 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-#include "msgqueue.h"
-#include <engine/falcon.h>
-#include <subdev/secboot.h>
-
-/* Queues identifiers */
-enum {
-       /* High Priority Command Queue for Host -> PMU communication */
-       MSGQUEUE_0137C63D_COMMAND_QUEUE_HPQ = 0,
-       /* Low Priority Command Queue for Host -> PMU communication */
-       MSGQUEUE_0137C63D_COMMAND_QUEUE_LPQ = 1,
-       /* Message queue for PMU -> Host communication */
-       MSGQUEUE_0137C63D_MESSAGE_QUEUE = 4,
-       MSGQUEUE_0137C63D_NUM_QUEUES = 5,
-};
-
-struct msgqueue_0137c63d {
-       struct nvkm_msgqueue base;
-
-       struct nvkm_msgqueue_queue queue[MSGQUEUE_0137C63D_NUM_QUEUES];
-};
-#define msgqueue_0137c63d(q) \
-       container_of(q, struct msgqueue_0137c63d, base)
-
-struct msgqueue_0137bca5 {
-       struct msgqueue_0137c63d base;
-
-       u64 wpr_addr;
-};
-#define msgqueue_0137bca5(q) \
-       container_of(container_of(q, struct msgqueue_0137c63d, base), \
-                    struct msgqueue_0137bca5, base);
-
-static struct nvkm_msgqueue_queue *
-msgqueue_0137c63d_cmd_queue(struct nvkm_msgqueue *queue,
-                           enum msgqueue_msg_priority priority)
-{
-       struct msgqueue_0137c63d *priv = msgqueue_0137c63d(queue);
-       const struct nvkm_subdev *subdev = priv->base.falcon->owner;
-
-       switch (priority) {
-       case MSGQUEUE_MSG_PRIORITY_HIGH:
-               return &priv->queue[MSGQUEUE_0137C63D_COMMAND_QUEUE_HPQ];
-       case MSGQUEUE_MSG_PRIORITY_LOW:
-               return &priv->queue[MSGQUEUE_0137C63D_COMMAND_QUEUE_LPQ];
-       default:
-               nvkm_error(subdev, "invalid command queue!\n");
-               return ERR_PTR(-EINVAL);
-       }
-}
-
-static void
-msgqueue_0137c63d_process_msgs(struct nvkm_msgqueue *queue)
-{
-       struct msgqueue_0137c63d *priv = msgqueue_0137c63d(queue);
-       struct nvkm_msgqueue_queue *q_queue =
-               &priv->queue[MSGQUEUE_0137C63D_MESSAGE_QUEUE];
-
-       nvkm_msgqueue_process_msgs(&priv->base, q_queue);
-}
-
-/* Init unit */
-#define MSGQUEUE_0137C63D_UNIT_INIT 0x07
-
-enum {
-       INIT_MSG_INIT = 0x0,
-};
-
-static void
-init_gen_cmdline(struct nvkm_msgqueue *queue, void *buf)
-{
-       struct {
-               u32 reserved;
-               u32 freq_hz;
-               u32 trace_size;
-               u32 trace_dma_base;
-               u16 trace_dma_base1;
-               u8 trace_dma_offset;
-               u32 trace_dma_idx;
-               bool secure_mode;
-               bool raise_priv_sec;
-               struct {
-                       u32 dma_base;
-                       u16 dma_base1;
-                       u8 dma_offset;
-                       u16 fb_size;
-                       u8 dma_idx;
-               } gc6_ctx;
-               u8 pad;
-       } *args = buf;
-
-       args->secure_mode = 1;
-}
-
-/* forward declaration */
-static int acr_init_wpr(struct nvkm_msgqueue *queue);
-
-static int
-init_callback(struct nvkm_msgqueue *_queue, struct nvkm_msgqueue_hdr *hdr)
-{
-       struct msgqueue_0137c63d *priv = msgqueue_0137c63d(_queue);
-       struct {
-               struct nvkm_msgqueue_msg base;
-
-               u8 pad;
-               u16 os_debug_entry_point;
-
-               struct {
-                       u16 size;
-                       u16 offset;
-                       u8 index;
-                       u8 pad;
-               } queue_info[MSGQUEUE_0137C63D_NUM_QUEUES];
-
-               u16 sw_managed_area_offset;
-               u16 sw_managed_area_size;
-       } *init = (void *)hdr;
-       const struct nvkm_subdev *subdev = _queue->falcon->owner;
-       int i;
-
-       if (init->base.hdr.unit_id != MSGQUEUE_0137C63D_UNIT_INIT) {
-               nvkm_error(subdev, "expected message from init unit\n");
-               return -EINVAL;
-       }
-
-       if (init->base.msg_type != INIT_MSG_INIT) {
-               nvkm_error(subdev, "expected PMU init msg\n");
-               return -EINVAL;
-       }
-
-       for (i = 0; i < MSGQUEUE_0137C63D_NUM_QUEUES; i++) {
-               struct nvkm_msgqueue_queue *queue = &priv->queue[i];
-
-               mutex_init(&queue->mutex);
-
-               queue->index = init->queue_info[i].index;
-               queue->offset = init->queue_info[i].offset;
-               queue->size = init->queue_info[i].size;
-
-               if (i != MSGQUEUE_0137C63D_MESSAGE_QUEUE) {
-                       queue->head_reg = 0x4a0 + (queue->index * 4);
-                       queue->tail_reg = 0x4b0 + (queue->index * 4);
-               } else {
-                       queue->head_reg = 0x4c8;
-                       queue->tail_reg = 0x4cc;
-               }
-
-               nvkm_debug(subdev,
-                          "queue %d: index %d, offset 0x%08x, size 0x%08x\n",
-                          i, queue->index, queue->offset, queue->size);
-       }
-
-       /* Complete initialization by initializing WPR region */
-       return acr_init_wpr(&priv->base);
-}
-
-static const struct nvkm_msgqueue_init_func
-msgqueue_0137c63d_init_func = {
-       .gen_cmdline = init_gen_cmdline,
-       .init_callback = init_callback,
-};
-
-
-
-/* ACR unit */
-#define MSGQUEUE_0137C63D_UNIT_ACR 0x0a
-
-enum {
-       ACR_CMD_INIT_WPR_REGION = 0x00,
-       ACR_CMD_BOOTSTRAP_FALCON = 0x01,
-       ACR_CMD_BOOTSTRAP_MULTIPLE_FALCONS = 0x03,
-};
-
-static void
-acr_init_wpr_callback(struct nvkm_msgqueue *queue,
-                     struct nvkm_msgqueue_hdr *hdr)
-{
-       struct {
-               struct nvkm_msgqueue_msg base;
-               u32 error_code;
-       } *msg = (void *)hdr;
-       const struct nvkm_subdev *subdev = queue->falcon->owner;
-
-       if (msg->error_code) {
-               nvkm_error(subdev, "ACR WPR init failure: %d\n",
-                          msg->error_code);
-               return;
-       }
-
-       nvkm_debug(subdev, "ACR WPR init complete\n");
-       complete_all(&queue->init_done);
-}
-
-static int
-acr_init_wpr(struct nvkm_msgqueue *queue)
-{
-       /*
-        * region_id:   region ID in WPR region
-        * wpr_offset:  offset in WPR region
-        */
-       struct {
-               struct nvkm_msgqueue_hdr hdr;
-               u8 cmd_type;
-               u32 region_id;
-               u32 wpr_offset;
-       } cmd;
-       memset(&cmd, 0, sizeof(cmd));
-
-       cmd.hdr.unit_id = MSGQUEUE_0137C63D_UNIT_ACR;
-       cmd.hdr.size = sizeof(cmd);
-       cmd.cmd_type = ACR_CMD_INIT_WPR_REGION;
-       cmd.region_id = 0x01;
-       cmd.wpr_offset = 0x00;
-
-       nvkm_msgqueue_post(queue, MSGQUEUE_MSG_PRIORITY_HIGH, &cmd.hdr,
-                          acr_init_wpr_callback, NULL, false);
-
-       return 0;
-}
-
-
-static void
-acr_boot_falcon_callback(struct nvkm_msgqueue *priv,
-                        struct nvkm_msgqueue_hdr *hdr)
-{
-       struct acr_bootstrap_falcon_msg {
-               struct nvkm_msgqueue_msg base;
-
-               u32 falcon_id;
-       } *msg = (void *)hdr;
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       u32 falcon_id = msg->falcon_id;
-
-       if (falcon_id >= NVKM_SECBOOT_FALCON_END) {
-               nvkm_error(subdev, "in bootstrap falcon callback:\n");
-               nvkm_error(subdev, "invalid falcon ID 0x%x\n", falcon_id);
-               return;
-       }
-       nvkm_debug(subdev, "%s booted\n", nvkm_secboot_falcon_name[falcon_id]);
-}
-
-enum {
-       ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES = 0,
-       ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_NO = 1,
-};
-
-static int
-acr_boot_falcon(struct nvkm_msgqueue *priv, enum nvkm_secboot_falcon falcon)
-{
-       DECLARE_COMPLETION_ONSTACK(completed);
-       /*
-        * flags      - Flag specifying RESET or no RESET.
-        * falcon id  - Falcon id specifying falcon to bootstrap.
-        */
-       struct {
-               struct nvkm_msgqueue_hdr hdr;
-               u8 cmd_type;
-               u32 flags;
-               u32 falcon_id;
-       } cmd;
-
-       memset(&cmd, 0, sizeof(cmd));
-
-       cmd.hdr.unit_id = MSGQUEUE_0137C63D_UNIT_ACR;
-       cmd.hdr.size = sizeof(cmd);
-       cmd.cmd_type = ACR_CMD_BOOTSTRAP_FALCON;
-       cmd.flags = ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES;
-       cmd.falcon_id = falcon;
-       nvkm_msgqueue_post(priv, MSGQUEUE_MSG_PRIORITY_HIGH, &cmd.hdr,
-                       acr_boot_falcon_callback, &completed, true);
-
-       if (!wait_for_completion_timeout(&completed, msecs_to_jiffies(1000)))
-               return -ETIMEDOUT;
-
-       return 0;
-}
-
-static void
-acr_boot_multiple_falcons_callback(struct nvkm_msgqueue *priv,
-                                  struct nvkm_msgqueue_hdr *hdr)
-{
-       struct acr_bootstrap_falcon_msg {
-               struct nvkm_msgqueue_msg base;
-
-               u32 falcon_mask;
-       } *msg = (void *)hdr;
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       unsigned long falcon_mask = msg->falcon_mask;
-       u32 falcon_id, falcon_treated = 0;
-
-       for_each_set_bit(falcon_id, &falcon_mask, NVKM_SECBOOT_FALCON_END) {
-               nvkm_debug(subdev, "%s booted\n",
-                          nvkm_secboot_falcon_name[falcon_id]);
-               falcon_treated |= BIT(falcon_id);
-       }
-
-       if (falcon_treated != msg->falcon_mask) {
-               nvkm_error(subdev, "in bootstrap falcon callback:\n");
-               nvkm_error(subdev, "invalid falcon mask 0x%x\n",
-                          msg->falcon_mask);
-               return;
-       }
-}
-
-static int
-acr_boot_multiple_falcons(struct nvkm_msgqueue *priv, unsigned long falcon_mask)
-{
-       DECLARE_COMPLETION_ONSTACK(completed);
-       /*
-        * flags      - Flag specifying RESET or no RESET.
-        * falcon id  - Falcon id specifying falcon to bootstrap.
-        */
-       struct {
-               struct nvkm_msgqueue_hdr hdr;
-               u8 cmd_type;
-               u32 flags;
-               u32 falcon_mask;
-               u32 use_va_mask;
-               u32 wpr_lo;
-               u32 wpr_hi;
-       } cmd;
-       struct msgqueue_0137bca5 *queue = msgqueue_0137bca5(priv);
-
-       memset(&cmd, 0, sizeof(cmd));
-
-       cmd.hdr.unit_id = MSGQUEUE_0137C63D_UNIT_ACR;
-       cmd.hdr.size = sizeof(cmd);
-       cmd.cmd_type = ACR_CMD_BOOTSTRAP_MULTIPLE_FALCONS;
-       cmd.flags = ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES;
-       cmd.falcon_mask = falcon_mask;
-       cmd.wpr_lo = lower_32_bits(queue->wpr_addr);
-       cmd.wpr_hi = upper_32_bits(queue->wpr_addr);
-       nvkm_msgqueue_post(priv, MSGQUEUE_MSG_PRIORITY_HIGH, &cmd.hdr,
-                       acr_boot_multiple_falcons_callback, &completed, true);
-
-       if (!wait_for_completion_timeout(&completed, msecs_to_jiffies(1000)))
-               return -ETIMEDOUT;
-
-       return 0;
-}
-
-static const struct nvkm_msgqueue_acr_func
-msgqueue_0137c63d_acr_func = {
-       .boot_falcon = acr_boot_falcon,
-};
-
-static const struct nvkm_msgqueue_acr_func
-msgqueue_0137bca5_acr_func = {
-       .boot_falcon = acr_boot_falcon,
-       .boot_multiple_falcons = acr_boot_multiple_falcons,
-};
-
-static void
-msgqueue_0137c63d_dtor(struct nvkm_msgqueue *queue)
-{
-       kfree(msgqueue_0137c63d(queue));
-}
-
-static const struct nvkm_msgqueue_func
-msgqueue_0137c63d_func = {
-       .init_func = &msgqueue_0137c63d_init_func,
-       .acr_func = &msgqueue_0137c63d_acr_func,
-       .cmd_queue = msgqueue_0137c63d_cmd_queue,
-       .recv = msgqueue_0137c63d_process_msgs,
-       .dtor = msgqueue_0137c63d_dtor,
-};
-
-int
-msgqueue_0137c63d_new(struct nvkm_falcon *falcon, const struct nvkm_secboot *sb,
-                     struct nvkm_msgqueue **queue)
-{
-       struct msgqueue_0137c63d *ret;
-
-       ret = kzalloc(sizeof(*ret), GFP_KERNEL);
-       if (!ret)
-               return -ENOMEM;
-
-       *queue = &ret->base;
-
-       nvkm_msgqueue_ctor(&msgqueue_0137c63d_func, falcon, &ret->base);
-
-       return 0;
-}
-
-static const struct nvkm_msgqueue_func
-msgqueue_0137bca5_func = {
-       .init_func = &msgqueue_0137c63d_init_func,
-       .acr_func = &msgqueue_0137bca5_acr_func,
-       .cmd_queue = msgqueue_0137c63d_cmd_queue,
-       .recv = msgqueue_0137c63d_process_msgs,
-       .dtor = msgqueue_0137c63d_dtor,
-};
-
-int
-msgqueue_0137bca5_new(struct nvkm_falcon *falcon, const struct nvkm_secboot *sb,
-                     struct nvkm_msgqueue **queue)
-{
-       struct msgqueue_0137bca5 *ret;
-
-       ret = kzalloc(sizeof(*ret), GFP_KERNEL);
-       if (!ret)
-               return -ENOMEM;
-
-       *queue = &ret->base.base;
-
-       /*
-        * FIXME this must be set to the address of a *GPU* mapping within the
-        * ACR address space!
-        */
-       /* ret->wpr_addr = sb->wpr_addr; */
-
-       nvkm_msgqueue_ctor(&msgqueue_0137bca5_func, falcon, &ret->base.base);
-
-       return 0;
-}
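The `msgqueue_0137c63d(q)` and `msgqueue_0137bca5(q)` macros in the file deleted above are the kernel's usual `container_of` downcast: each implementation embeds the generic `struct nvkm_msgqueue` as a member and recovers the outer struct from a pointer to that member. A self-contained sketch of the idiom, using hypothetical struct names and a user-space clone of `container_of`:

```c
#include <assert.h>
#include <stddef.h>

/* User-space clone of the kernel's container_of(): recover a pointer to
 * the enclosing struct from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Generic "base class", as nvkm_msgqueue is to msgqueue_0137c63d. */
struct base_queue {
	int fw_version;
};

/* Implementation-specific wrapper embedding the base. */
struct impl_queue {
	struct base_queue base;
	int num_queues;
};

/* Given only the base pointer (what generic code passes around),
 * recover the implementation struct. */
static struct impl_queue *to_impl(struct base_queue *q)
{
	return container_of(q, struct impl_queue, base);
}
```

This is why the constructors above hand out `&ret->base` (or `&ret->base.base` for the doubly-nested 0137bca5 variant): callers only ever see the base type, and the ops functions downcast on entry.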
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue_0148cdec.c b/drivers/gpu/drm/nouveau/nvkm/falcon/msgqueue_0148cdec.c
deleted file mode 100644
index 9424803..0000000
+++ /dev/null
@@ -1,264 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-
-#include "msgqueue.h"
-#include <engine/falcon.h>
-#include <subdev/secboot.h>
-
-/*
- * This firmware runs on the SEC falcon. It only has one command and one
- * message queue, and uses a different command line and init message.
- */
-
-enum {
-       MSGQUEUE_0148CDEC_COMMAND_QUEUE = 0,
-       MSGQUEUE_0148CDEC_MESSAGE_QUEUE = 1,
-       MSGQUEUE_0148CDEC_NUM_QUEUES,
-};
-
-struct msgqueue_0148cdec {
-       struct nvkm_msgqueue base;
-
-       struct nvkm_msgqueue_queue queue[MSGQUEUE_0148CDEC_NUM_QUEUES];
-};
-#define msgqueue_0148cdec(q) \
-       container_of(q, struct msgqueue_0148cdec, base)
-
-static struct nvkm_msgqueue_queue *
-msgqueue_0148cdec_cmd_queue(struct nvkm_msgqueue *queue,
-                           enum msgqueue_msg_priority priority)
-{
-       struct msgqueue_0148cdec *priv = msgqueue_0148cdec(queue);
-
-       return &priv->queue[MSGQUEUE_0148CDEC_COMMAND_QUEUE];
-}
-
-static void
-msgqueue_0148cdec_process_msgs(struct nvkm_msgqueue *queue)
-{
-       struct msgqueue_0148cdec *priv = msgqueue_0148cdec(queue);
-       struct nvkm_msgqueue_queue *q_queue =
-               &priv->queue[MSGQUEUE_0148CDEC_MESSAGE_QUEUE];
-
-       nvkm_msgqueue_process_msgs(&priv->base, q_queue);
-}
-
-
-/* Init unit */
-#define MSGQUEUE_0148CDEC_UNIT_INIT 0x01
-
-enum {
-       INIT_MSG_INIT = 0x0,
-};
-
-static void
-init_gen_cmdline(struct nvkm_msgqueue *queue, void *buf)
-{
-       struct {
-               u32 freq_hz;
-               u32 falc_trace_size;
-               u32 falc_trace_dma_base;
-               u32 falc_trace_dma_idx;
-               bool secure_mode;
-       } *args = buf;
-
-       args->secure_mode = false;
-}
-
-static int
-init_callback(struct nvkm_msgqueue *_queue, struct nvkm_msgqueue_hdr *hdr)
-{
-       struct msgqueue_0148cdec *priv = msgqueue_0148cdec(_queue);
-       struct {
-               struct nvkm_msgqueue_msg base;
-
-               u8 num_queues;
-               u16 os_debug_entry_point;
-
-               struct {
-                       u32 offset;
-                       u16 size;
-                       u8 index;
-                       u8 id;
-               } queue_info[MSGQUEUE_0148CDEC_NUM_QUEUES];
-
-               u16 sw_managed_area_offset;
-               u16 sw_managed_area_size;
-       } *init = (void *)hdr;
-       const struct nvkm_subdev *subdev = _queue->falcon->owner;
-       int i;
-
-       if (init->base.hdr.unit_id != MSGQUEUE_0148CDEC_UNIT_INIT) {
-               nvkm_error(subdev, "expected message from init unit\n");
-               return -EINVAL;
-       }
-
-       if (init->base.msg_type != INIT_MSG_INIT) {
-               nvkm_error(subdev, "expected SEC init msg\n");
-               return -EINVAL;
-       }
-
-       for (i = 0; i < MSGQUEUE_0148CDEC_NUM_QUEUES; i++) {
-               u8 id = init->queue_info[i].id;
-               struct nvkm_msgqueue_queue *queue = &priv->queue[id];
-
-               mutex_init(&queue->mutex);
-
-               queue->index = init->queue_info[i].index;
-               queue->offset = init->queue_info[i].offset;
-               queue->size = init->queue_info[i].size;
-
-               if (id == MSGQUEUE_0148CDEC_MESSAGE_QUEUE) {
-                       queue->head_reg = 0xa30 + (queue->index * 8);
-                       queue->tail_reg = 0xa34 + (queue->index * 8);
-               } else {
-                       queue->head_reg = 0xa00 + (queue->index * 8);
-                       queue->tail_reg = 0xa04 + (queue->index * 8);
-               }
-
-               nvkm_debug(subdev,
-                          "queue %d: index %d, offset 0x%08x, size 0x%08x\n",
-                          id, queue->index, queue->offset, queue->size);
-       }
-
-       complete_all(&_queue->init_done);
-
-       return 0;
-}
-
-static const struct nvkm_msgqueue_init_func
-msgqueue_0148cdec_init_func = {
-       .gen_cmdline = init_gen_cmdline,
-       .init_callback = init_callback,
-};
-
-
-
-/* ACR unit */
-#define MSGQUEUE_0148CDEC_UNIT_ACR 0x08
-
-enum {
-       ACR_CMD_BOOTSTRAP_FALCON = 0x00,
-};
-
-static void
-acr_boot_falcon_callback(struct nvkm_msgqueue *priv,
-                        struct nvkm_msgqueue_hdr *hdr)
-{
-       struct acr_bootstrap_falcon_msg {
-               struct nvkm_msgqueue_msg base;
-
-               u32 error_code;
-               u32 falcon_id;
-       } *msg = (void *)hdr;
-       const struct nvkm_subdev *subdev = priv->falcon->owner;
-       u32 falcon_id = msg->falcon_id;
-
-       if (msg->error_code) {
-               nvkm_error(subdev, "in bootstrap falcon callback:\n");
-               nvkm_error(subdev, "expected error code 0x%x\n",
-                          msg->error_code);
-               return;
-       }
-
-       if (falcon_id >= NVKM_SECBOOT_FALCON_END) {
-               nvkm_error(subdev, "in bootstrap falcon callback:\n");
-               nvkm_error(subdev, "invalid falcon ID 0x%x\n", falcon_id);
-               return;
-       }
-
-       nvkm_debug(subdev, "%s booted\n", nvkm_secboot_falcon_name[falcon_id]);
-}
-
-enum {
-       ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES = 0,
-       ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_NO = 1,
-};
-
-static int
-acr_boot_falcon(struct nvkm_msgqueue *priv, enum nvkm_secboot_falcon falcon)
-{
-       DECLARE_COMPLETION_ONSTACK(completed);
-       /*
-        * flags      - Flag specifying RESET or no RESET.
-        * falcon id  - Falcon id specifying falcon to bootstrap.
-        */
-       struct {
-               struct nvkm_msgqueue_hdr hdr;
-               u8 cmd_type;
-               u32 flags;
-               u32 falcon_id;
-       } cmd;
-
-       memset(&cmd, 0, sizeof(cmd));
-
-       cmd.hdr.unit_id = MSGQUEUE_0148CDEC_UNIT_ACR;
-       cmd.hdr.size = sizeof(cmd);
-       cmd.cmd_type = ACR_CMD_BOOTSTRAP_FALCON;
-       cmd.flags = ACR_CMD_BOOTSTRAP_FALCON_FLAGS_RESET_YES;
-       cmd.falcon_id = falcon;
-       nvkm_msgqueue_post(priv, MSGQUEUE_MSG_PRIORITY_HIGH, &cmd.hdr,
-                          acr_boot_falcon_callback, &completed, true);
-
-       if (!wait_for_completion_timeout(&completed, msecs_to_jiffies(1000)))
-               return -ETIMEDOUT;
-
-       return 0;
-}
-
-const struct nvkm_msgqueue_acr_func
-msgqueue_0148cdec_acr_func = {
-       .boot_falcon = acr_boot_falcon,
-};
-
-static void
-msgqueue_0148cdec_dtor(struct nvkm_msgqueue *queue)
-{
-       kfree(msgqueue_0148cdec(queue));
-}
-
-const struct nvkm_msgqueue_func
-msgqueue_0148cdec_func = {
-       .init_func = &msgqueue_0148cdec_init_func,
-       .acr_func = &msgqueue_0148cdec_acr_func,
-       .cmd_queue = msgqueue_0148cdec_cmd_queue,
-       .recv = msgqueue_0148cdec_process_msgs,
-       .dtor = msgqueue_0148cdec_dtor,
-};
-
-int
-msgqueue_0148cdec_new(struct nvkm_falcon *falcon, const struct nvkm_secboot *sb,
-                     struct nvkm_msgqueue **queue)
-{
-       struct msgqueue_0148cdec *ret;
-
-       ret = kzalloc(sizeof(*ret), GFP_KERNEL);
-       if (!ret)
-               return -ENOMEM;
-
-       *queue = &ret->base;
-
-       nvkm_msgqueue_ctor(&msgqueue_0148cdec_func, falcon, &ret->base);
-
-       return 0;
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/priv.h b/drivers/gpu/drm/nouveau/nvkm/falcon/priv.h
index 900fe1d..4661887 100644
@@ -1,9 +1,5 @@
 /* SPDX-License-Identifier: MIT */
 #ifndef __NVKM_FALCON_PRIV_H__
 #define __NVKM_FALCON_PRIV_H__
-#include <engine/falcon.h>
-
-void
-nvkm_falcon_ctor(const struct nvkm_falcon_func *, struct nvkm_subdev *,
-                const char *, u32, struct nvkm_falcon *);
+#include <core/falcon.h>
 #endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.c b/drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.c
new file mode 100644
index 0000000..a453de3
--- /dev/null
@@ -0,0 +1,87 @@
+/*
+ * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "qmgr.h"
+
+struct nvkm_falcon_qmgr_seq *
+nvkm_falcon_qmgr_seq_acquire(struct nvkm_falcon_qmgr *qmgr)
+{
+       const struct nvkm_subdev *subdev = qmgr->falcon->owner;
+       struct nvkm_falcon_qmgr_seq *seq;
+       u32 index;
+
+       mutex_lock(&qmgr->seq.mutex);
+       index = find_first_zero_bit(qmgr->seq.tbl, NVKM_FALCON_QMGR_SEQ_NUM);
+       if (index >= NVKM_FALCON_QMGR_SEQ_NUM) {
+               nvkm_error(subdev, "no free sequence available\n");
+               mutex_unlock(&qmgr->seq.mutex);
+               return ERR_PTR(-EAGAIN);
+       }
+
+       set_bit(index, qmgr->seq.tbl);
+       mutex_unlock(&qmgr->seq.mutex);
+
+       seq = &qmgr->seq.id[index];
+       seq->state = SEQ_STATE_PENDING;
+       return seq;
+}
+
+void
+nvkm_falcon_qmgr_seq_release(struct nvkm_falcon_qmgr *qmgr,
+                            struct nvkm_falcon_qmgr_seq *seq)
+{
+       /* no need to acquire seq.mutex since clear_bit is atomic */
+       seq->state = SEQ_STATE_FREE;
+       seq->callback = NULL;
+       reinit_completion(&seq->done);
+       clear_bit(seq->id, qmgr->seq.tbl);
+}
+
+void
+nvkm_falcon_qmgr_del(struct nvkm_falcon_qmgr **pqmgr)
+{
+       struct nvkm_falcon_qmgr *qmgr = *pqmgr;
+       if (qmgr) {
+               kfree(*pqmgr);
+               *pqmgr = NULL;
+       }
+}
+
+int
+nvkm_falcon_qmgr_new(struct nvkm_falcon *falcon,
+                    struct nvkm_falcon_qmgr **pqmgr)
+{
+       struct nvkm_falcon_qmgr *qmgr;
+       int i;
+
+       if (!(qmgr = *pqmgr = kzalloc(sizeof(*qmgr), GFP_KERNEL)))
+               return -ENOMEM;
+
+       qmgr->falcon = falcon;
+       mutex_init(&qmgr->seq.mutex);
+       for (i = 0; i < NVKM_FALCON_QMGR_SEQ_NUM; i++) {
+               qmgr->seq.id[i].id = i;
+               init_completion(&qmgr->seq.id[i].done);
+       }
+
+       return 0;
+}
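The acquire/release pair added above is a small fixed-pool slot allocator: a bitmap records which of the 16 sequence structures are in use, and only the search-and-set step takes the mutex, since clearing a bit on release is atomic. A single-threaded sketch of the same bookkeeping with a plain word as the bitmap (names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

#define SEQ_NUM 16 /* mirrors NVKM_FALCON_QMGR_SEQ_NUM */

struct seq_pool {
	uint32_t tbl; /* one bit per sequence slot, 1 = in use */
};

/* Find the first free slot, mark it used, and hand it out.
 * Returns the slot index, or -1 when the pool is exhausted
 * (the kernel version returns ERR_PTR(-EAGAIN) instead). */
static int seq_acquire(struct seq_pool *p)
{
	for (int i = 0; i < SEQ_NUM; i++) {
		if (!(p->tbl & (1u << i))) {
			p->tbl |= 1u << i;
			return i;
		}
	}
	return -1;
}

/* Return a slot to the pool; lowest-numbered free slots are reused
 * first on the next acquire. */
static void seq_release(struct seq_pool *p, int i)
{
	p->tbl &= ~(1u << i);
}
```

The kernel code does the same scan with `find_first_zero_bit()`/`set_bit()` over `seq.tbl`, which also works when the pool size exceeds one machine word.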
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.h b/drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.h
new file mode 100644
index 0000000..a45cd70
--- /dev/null
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVKM_FALCON_QMGR_H__
+#define __NVKM_FALCON_QMGR_H__
+#include <core/falcon.h>
+
+#define HDR_SIZE sizeof(struct nv_falcon_msg)
+#define QUEUE_ALIGNMENT 4
+/* max size of the messages we can receive */
+#define MSG_BUF_SIZE 128
+
+/**
+ * struct nvkm_falcon_qmgr_seq - keep track of ongoing commands
+ *
+ * Every time a command is sent, a sequence is assigned to it so the
+ * corresponding message can be matched. Upon receiving the message, a callback
+ * can be called and/or a completion signaled.
+ *
+ * @id:                sequence ID
+ * @state:     current state
+ * @callback:  callback to call upon receiving matching message
+ * @completion:        completion to signal after callback is called
+ */
+struct nvkm_falcon_qmgr_seq {
+       u16 id;
+       enum {
+               SEQ_STATE_FREE = 0,
+               SEQ_STATE_PENDING,
+               SEQ_STATE_USED,
+               SEQ_STATE_CANCELLED
+       } state;
+       bool async;
+       nvkm_falcon_qmgr_callback callback;
+       void *priv;
+       struct completion done;
+       int result;
+};
+
+/*
+ * We can have an arbitrary number of sequences, but realistically we will
+ * probably not use that much simultaneously.
+ */
+#define NVKM_FALCON_QMGR_SEQ_NUM 16
+
+struct nvkm_falcon_qmgr {
+       struct nvkm_falcon *falcon;
+
+       struct {
+               struct mutex mutex;
+               struct nvkm_falcon_qmgr_seq id[NVKM_FALCON_QMGR_SEQ_NUM];
+               unsigned long tbl[BITS_TO_LONGS(NVKM_FALCON_QMGR_SEQ_NUM)];
+       } seq;
+};
+
+struct nvkm_falcon_qmgr_seq *
+nvkm_falcon_qmgr_seq_acquire(struct nvkm_falcon_qmgr *);
+void nvkm_falcon_qmgr_seq_release(struct nvkm_falcon_qmgr *,
+                                 struct nvkm_falcon_qmgr_seq *);
+
+struct nvkm_falcon_cmdq {
+       struct nvkm_falcon_qmgr *qmgr;
+       const char *name;
+       struct mutex mutex;
+       struct completion ready;
+
+       u32 head_reg;
+       u32 tail_reg;
+       u32 offset;
+       u32 size;
+
+       u32 position;
+};
+
+struct nvkm_falcon_msgq {
+       struct nvkm_falcon_qmgr *qmgr;
+       const char *name;
+       struct mutex mutex;
+
+       u32 head_reg;
+       u32 tail_reg;
+       u32 offset;
+
+       u32 position;
+};
+
+#define FLCNQ_PRINTK(t,q,f,a...)                                               \
+       FLCN_PRINTK(t, (q)->qmgr->falcon, "%s: "f, (q)->name, ##a)
+#define FLCNQ_DBG(q,f,a...) FLCNQ_PRINTK(debug, (q), f, ##a)
+#define FLCNQ_ERR(q,f,a...) FLCNQ_PRINTK(error, (q), f, ##a)
+#endif
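The `FLCNQ_*` macros at the end of the new header prepend the queue's name to every message by pasting a `"%s: "` onto the caller's format string and forwarding the variadic arguments with `##a` (so a trailing comma disappears when no arguments are given). A hedged user-space sketch of the same trick, logging into a buffer instead of the kernel log; the GNU `##__VA_ARGS__` extension used here matches the kernel's dialect:

```c
#include <stdio.h>
#include <string.h>

static char logbuf[128];

/* Stand-in for the falcon printk backend: format into a buffer. */
#define LOG(fmt, ...) \
	snprintf(logbuf, sizeof(logbuf), fmt, ##__VA_ARGS__)

struct queue {
	const char *name;
};

/* Like FLCNQ_PRINTK: paste "%s: " in front of the caller's format and
 * pass the queue's name as the first argument. */
#define QLOG(q, fmt, ...) \
	LOG("%s: " fmt, (q)->name, ##__VA_ARGS__)
```

Because the prefix is added by string-literal concatenation at compile time, the wrapper costs nothing at runtime beyond the extra `%s` argument.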
diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/v1.c b/drivers/gpu/drm/nouveau/nvkm/falcon/v1.c
index 6d978fe..1ff9b9c 100644
@@ -25,7 +25,7 @@
 #include <core/memory.h>
 #include <subdev/timer.h>
 
-static void
+void
 nvkm_falcon_v1_load_imem(struct nvkm_falcon *falcon, void *data, u32 start,
                         u32 size, u16 tag, u8 port, bool secure)
 {
@@ -89,18 +89,17 @@ nvkm_falcon_v1_load_emem(struct nvkm_falcon *falcon, void *data, u32 start,
        }
 }
 
-static const u32 EMEM_START_ADDR = 0x1000000;
-
-static void
+void
 nvkm_falcon_v1_load_dmem(struct nvkm_falcon *falcon, void *data, u32 start,
-                     u32 size, u8 port)
+                        u32 size, u8 port)
 {
+       const struct nvkm_falcon_func *func = falcon->func;
        u8 rem = size % 4;
        int i;
 
-       if (start >= EMEM_START_ADDR && falcon->has_emem)
+       if (func->emem_addr && start >= func->emem_addr)
                return nvkm_falcon_v1_load_emem(falcon, data,
-                                               start - EMEM_START_ADDR, size,
+                                               start - func->emem_addr, size,
                                                port);
 
        size -= rem;
@@ -148,15 +147,16 @@ nvkm_falcon_v1_read_emem(struct nvkm_falcon *falcon, u32 start, u32 size,
        }
 }
 
-static void
+void
 nvkm_falcon_v1_read_dmem(struct nvkm_falcon *falcon, u32 start, u32 size,
                         u8 port, void *data)
 {
+       const struct nvkm_falcon_func *func = falcon->func;
        u8 rem = size % 4;
        int i;
 
-       if (start >= EMEM_START_ADDR && falcon->has_emem)
-               return nvkm_falcon_v1_read_emem(falcon, start - EMEM_START_ADDR,
+       if (func->emem_addr && start >= func->emem_addr)
+               return nvkm_falcon_v1_read_emem(falcon, start - func->emem_addr,
                                                size, port, data);
 
        size -= rem;
@@ -179,12 +179,11 @@ nvkm_falcon_v1_read_dmem(struct nvkm_falcon *falcon, u32 start, u32 size,
        }
 }
 
-static void
+void
 nvkm_falcon_v1_bind_context(struct nvkm_falcon *falcon, struct nvkm_memory *ctx)
 {
-       struct nvkm_device *device = falcon->owner->device;
+       const u32 fbif = falcon->func->fbif;
        u32 inst_loc;
-       u32 fbif;
 
        /* disable instance block binding */
        if (ctx == NULL) {
@@ -192,20 +191,6 @@ nvkm_falcon_v1_bind_context(struct nvkm_falcon *falcon, struct nvkm_memory *ctx)
                return;
        }
 
-       switch (falcon->owner->index) {
-       case NVKM_ENGINE_NVENC0:
-       case NVKM_ENGINE_NVENC1:
-       case NVKM_ENGINE_NVENC2:
-               fbif = 0x800;
-               break;
-       case NVKM_SUBDEV_PMU:
-               fbif = 0xe00;
-               break;
-       default:
-               fbif = 0x600;
-               break;
-       }
-
        nvkm_falcon_wr32(falcon, 0x10c, 0x1);
 
        /* setup apertures - virtual */
@@ -234,50 +219,15 @@ nvkm_falcon_v1_bind_context(struct nvkm_falcon *falcon, struct nvkm_memory *ctx)
 
        nvkm_falcon_mask(falcon, 0x090, 0x10000, 0x10000);
        nvkm_falcon_mask(falcon, 0x0a4, 0x8, 0x8);
-
-       /* Not sure if this is a WAR for a HW issue, or some additional
-        * programming sequence that's needed to properly complete the
-        * context switch we trigger above.
-        *
-        * Fixes unreliability of booting the SEC2 RTOS on Quadro P620,
-        * particularly when resuming from suspend.
-        *
-        * Also removes the need for an odd workaround where we needed
-        * to program SEC2's FALCON_CPUCTL_ALIAS_STARTCPU twice before
-        * the SEC2 RTOS would begin executing.
-        */
-       switch (falcon->owner->index) {
-       case NVKM_SUBDEV_GSP:
-       case NVKM_ENGINE_SEC2:
-               nvkm_msec(device, 10,
-                       u32 irqstat = nvkm_falcon_rd32(falcon, 0x008);
-                       u32 flcn0dc = nvkm_falcon_rd32(falcon, 0x0dc);
-                       if ((irqstat & 0x00000008) &&
-                           (flcn0dc & 0x00007000) == 0x00005000)
-                               break;
-               );
-
-               nvkm_falcon_mask(falcon, 0x004, 0x00000008, 0x00000008);
-               nvkm_falcon_mask(falcon, 0x058, 0x00000002, 0x00000002);
-
-               nvkm_msec(device, 10,
-                       u32 flcn0dc = nvkm_falcon_rd32(falcon, 0x0dc);
-                       if ((flcn0dc & 0x00007000) == 0x00000000)
-                               break;
-               );
-               break;
-       default:
-               break;
-       }
 }
 
-static void
+void
 nvkm_falcon_v1_set_start_addr(struct nvkm_falcon *falcon, u32 start_addr)
 {
        nvkm_falcon_wr32(falcon, 0x104, start_addr);
 }
 
-static void
+void
 nvkm_falcon_v1_start(struct nvkm_falcon *falcon)
 {
        u32 reg = nvkm_falcon_rd32(falcon, 0x100);
@@ -288,7 +238,7 @@ nvkm_falcon_v1_start(struct nvkm_falcon *falcon)
                nvkm_falcon_wr32(falcon, 0x100, 0x2);
 }
 
-static int
+int
 nvkm_falcon_v1_wait_for_halt(struct nvkm_falcon *falcon, u32 ms)
 {
        struct nvkm_device *device = falcon->owner->device;
@@ -301,7 +251,7 @@ nvkm_falcon_v1_wait_for_halt(struct nvkm_falcon *falcon, u32 ms)
        return 0;
 }
 
-static int
+int
 nvkm_falcon_v1_clear_interrupt(struct nvkm_falcon *falcon, u32 mask)
 {
        struct nvkm_device *device = falcon->owner->device;
@@ -330,7 +280,7 @@ falcon_v1_wait_idle(struct nvkm_falcon *falcon)
        return 0;
 }
 
-static int
+int
 nvkm_falcon_v1_enable(struct nvkm_falcon *falcon)
 {
        struct nvkm_device *device = falcon->owner->device;
@@ -352,7 +302,7 @@ nvkm_falcon_v1_enable(struct nvkm_falcon *falcon)
        return 0;
 }
 
-static void
+void
 nvkm_falcon_v1_disable(struct nvkm_falcon *falcon)
 {
        /* disable IRQs and wait for any previous code to complete */
diff --git a/drivers/gpu/drm/nouveau/nvkm/nvfw/Kbuild b/drivers/gpu/drm/nouveau/nvkm/nvfw/Kbuild
new file mode 100644 (file)
index 0000000..41d75f9
--- /dev/null
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: MIT
+nvkm-y += nvkm/nvfw/fw.o
+nvkm-y += nvkm/nvfw/hs.o
+nvkm-y += nvkm/nvfw/ls.o
+
+nvkm-y += nvkm/nvfw/acr.o
+nvkm-y += nvkm/nvfw/flcn.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/nvfw/acr.c b/drivers/gpu/drm/nouveau/nvkm/nvfw/acr.c
new file mode 100644 (file)
index 0000000..0d063b8
--- /dev/null
@@ -0,0 +1,165 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <core/subdev.h>
+#include <nvfw/acr.h>
+
+void
+wpr_header_dump(struct nvkm_subdev *subdev, const struct wpr_header *hdr)
+{
+       nvkm_debug(subdev, "wprHeader\n");
+       nvkm_debug(subdev, "\tfalconID      : %d\n", hdr->falcon_id);
+       nvkm_debug(subdev, "\tlsbOffset     : 0x%x\n", hdr->lsb_offset);
+       nvkm_debug(subdev, "\tbootstrapOwner: %d\n", hdr->bootstrap_owner);
+       nvkm_debug(subdev, "\tlazyBootstrap : %d\n", hdr->lazy_bootstrap);
+       nvkm_debug(subdev, "\tstatus        : %d\n", hdr->status);
+}
+
+void
+wpr_header_v1_dump(struct nvkm_subdev *subdev, const struct wpr_header_v1 *hdr)
+{
+       nvkm_debug(subdev, "wprHeader\n");
+       nvkm_debug(subdev, "\tfalconID      : %d\n", hdr->falcon_id);
+       nvkm_debug(subdev, "\tlsbOffset     : 0x%x\n", hdr->lsb_offset);
+       nvkm_debug(subdev, "\tbootstrapOwner: %d\n", hdr->bootstrap_owner);
+       nvkm_debug(subdev, "\tlazyBootstrap : %d\n", hdr->lazy_bootstrap);
+       nvkm_debug(subdev, "\tbinVersion    : %d\n", hdr->bin_version);
+       nvkm_debug(subdev, "\tstatus        : %d\n", hdr->status);
+}
+
+void
+lsb_header_tail_dump(struct nvkm_subdev *subdev,
+                       struct lsb_header_tail *hdr)
+{
+       nvkm_debug(subdev, "lsbHeader\n");
+       nvkm_debug(subdev, "\tucodeOff      : 0x%x\n", hdr->ucode_off);
+       nvkm_debug(subdev, "\tucodeSize     : 0x%x\n", hdr->ucode_size);
+       nvkm_debug(subdev, "\tdataSize      : 0x%x\n", hdr->data_size);
+       nvkm_debug(subdev, "\tblCodeSize    : 0x%x\n", hdr->bl_code_size);
+       nvkm_debug(subdev, "\tblImemOff     : 0x%x\n", hdr->bl_imem_off);
+       nvkm_debug(subdev, "\tblDataOff     : 0x%x\n", hdr->bl_data_off);
+       nvkm_debug(subdev, "\tblDataSize    : 0x%x\n", hdr->bl_data_size);
+       nvkm_debug(subdev, "\tappCodeOff    : 0x%x\n", hdr->app_code_off);
+       nvkm_debug(subdev, "\tappCodeSize   : 0x%x\n", hdr->app_code_size);
+       nvkm_debug(subdev, "\tappDataOff    : 0x%x\n", hdr->app_data_off);
+       nvkm_debug(subdev, "\tappDataSize   : 0x%x\n", hdr->app_data_size);
+       nvkm_debug(subdev, "\tflags         : 0x%x\n", hdr->flags);
+}
+
+void
+lsb_header_dump(struct nvkm_subdev *subdev, struct lsb_header *hdr)
+{
+       lsb_header_tail_dump(subdev, &hdr->tail);
+}
+
+void
+lsb_header_v1_dump(struct nvkm_subdev *subdev, struct lsb_header_v1 *hdr)
+{
+       lsb_header_tail_dump(subdev, &hdr->tail);
+}
+
+void
+flcn_acr_desc_dump(struct nvkm_subdev *subdev, struct flcn_acr_desc *hdr)
+{
+       int i;
+
+       nvkm_debug(subdev, "acrDesc\n");
+       nvkm_debug(subdev, "\twprRegionId  : %d\n", hdr->wpr_region_id);
+       nvkm_debug(subdev, "\twprOffset    : 0x%x\n", hdr->wpr_offset);
+       nvkm_debug(subdev, "\tmmuMemRange  : 0x%x\n",
+                  hdr->mmu_mem_range);
+       nvkm_debug(subdev, "\tnoRegions    : %d\n",
+                  hdr->regions.no_regions);
+
+       for (i = 0; i < ARRAY_SIZE(hdr->regions.region_props); i++) {
+               nvkm_debug(subdev, "\tregion[%d]    :\n", i);
+               nvkm_debug(subdev, "\t  startAddr  : 0x%x\n",
+                          hdr->regions.region_props[i].start_addr);
+               nvkm_debug(subdev, "\t  endAddr    : 0x%x\n",
+                          hdr->regions.region_props[i].end_addr);
+               nvkm_debug(subdev, "\t  regionId   : %d\n",
+                          hdr->regions.region_props[i].region_id);
+               nvkm_debug(subdev, "\t  readMask   : 0x%x\n",
+                          hdr->regions.region_props[i].read_mask);
+               nvkm_debug(subdev, "\t  writeMask  : 0x%x\n",
+                          hdr->regions.region_props[i].write_mask);
+               nvkm_debug(subdev, "\t  clientMask : 0x%x\n",
+                          hdr->regions.region_props[i].client_mask);
+       }
+
+       nvkm_debug(subdev, "\tucodeBlobSize: %d\n",
+                  hdr->ucode_blob_size);
+       nvkm_debug(subdev, "\tucodeBlobBase: 0x%llx\n",
+                  hdr->ucode_blob_base);
+       nvkm_debug(subdev, "\tvprEnabled   : %d\n",
+                  hdr->vpr_desc.vpr_enabled);
+       nvkm_debug(subdev, "\tvprStart     : 0x%x\n",
+                  hdr->vpr_desc.vpr_start);
+       nvkm_debug(subdev, "\tvprEnd       : 0x%x\n",
+                  hdr->vpr_desc.vpr_end);
+       nvkm_debug(subdev, "\thdcpPolicies : 0x%x\n",
+                  hdr->vpr_desc.hdcp_policies);
+}
+
+void
+flcn_acr_desc_v1_dump(struct nvkm_subdev *subdev, struct flcn_acr_desc_v1 *hdr)
+{
+       int i;
+
+       nvkm_debug(subdev, "acrDesc\n");
+       nvkm_debug(subdev, "\twprRegionId         : %d\n", hdr->wpr_region_id);
+       nvkm_debug(subdev, "\twprOffset           : 0x%x\n", hdr->wpr_offset);
+       nvkm_debug(subdev, "\tmmuMemoryRange      : 0x%x\n",
+                  hdr->mmu_memory_range);
+       nvkm_debug(subdev, "\tnoRegions           : %d\n",
+                  hdr->regions.no_regions);
+
+       for (i = 0; i < ARRAY_SIZE(hdr->regions.region_props); i++) {
+               nvkm_debug(subdev, "\tregion[%d]           :\n", i);
+               nvkm_debug(subdev, "\t  startAddr         : 0x%x\n",
+                          hdr->regions.region_props[i].start_addr);
+               nvkm_debug(subdev, "\t  endAddr           : 0x%x\n",
+                          hdr->regions.region_props[i].end_addr);
+               nvkm_debug(subdev, "\t  regionId          : %d\n",
+                          hdr->regions.region_props[i].region_id);
+               nvkm_debug(subdev, "\t  readMask          : 0x%x\n",
+                          hdr->regions.region_props[i].read_mask);
+               nvkm_debug(subdev, "\t  writeMask         : 0x%x\n",
+                          hdr->regions.region_props[i].write_mask);
+               nvkm_debug(subdev, "\t  clientMask        : 0x%x\n",
+                          hdr->regions.region_props[i].client_mask);
+               nvkm_debug(subdev, "\t  shadowMemStartAddr: 0x%x\n",
+                          hdr->regions.region_props[i].shadow_mem_start_addr);
+       }
+
+       nvkm_debug(subdev, "\tucodeBlobSize       : %d\n",
+                  hdr->ucode_blob_size);
+       nvkm_debug(subdev, "\tucodeBlobBase       : 0x%llx\n",
+                  hdr->ucode_blob_base);
+       nvkm_debug(subdev, "\tvprEnabled          : %d\n",
+                  hdr->vpr_desc.vpr_enabled);
+       nvkm_debug(subdev, "\tvprStart            : 0x%x\n",
+                  hdr->vpr_desc.vpr_start);
+       nvkm_debug(subdev, "\tvprEnd              : 0x%x\n",
+                  hdr->vpr_desc.vpr_end);
+       nvkm_debug(subdev, "\thdcpPolicies        : 0x%x\n",
+                  hdr->vpr_desc.hdcp_policies);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/nvfw/flcn.c b/drivers/gpu/drm/nouveau/nvkm/nvfw/flcn.c
new file mode 100644 (file)
index 0000000..00ec764
--- /dev/null
@@ -0,0 +1,115 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <core/subdev.h>
+#include <nvfw/flcn.h>
+
+void
+loader_config_dump(struct nvkm_subdev *subdev, const struct loader_config *hdr)
+{
+       nvkm_debug(subdev, "loaderConfig\n");
+       nvkm_debug(subdev, "\tdmaIdx        : %d\n", hdr->dma_idx);
+       nvkm_debug(subdev, "\tcodeDmaBase   : 0x%x\n", hdr->code_dma_base);
+       nvkm_debug(subdev, "\tcodeSizeTotal : 0x%x\n", hdr->code_size_total);
+       nvkm_debug(subdev, "\tcodeSizeToLoad: 0x%x\n", hdr->code_size_to_load);
+       nvkm_debug(subdev, "\tcodeEntryPoint: 0x%x\n", hdr->code_entry_point);
+       nvkm_debug(subdev, "\tdataDmaBase   : 0x%x\n", hdr->data_dma_base);
+       nvkm_debug(subdev, "\tdataSize      : 0x%x\n", hdr->data_size);
+       nvkm_debug(subdev, "\toverlayDmaBase: 0x%x\n", hdr->overlay_dma_base);
+       nvkm_debug(subdev, "\targc          : 0x%08x\n", hdr->argc);
+       nvkm_debug(subdev, "\targv          : 0x%08x\n", hdr->argv);
+       nvkm_debug(subdev, "\tcodeDmaBase1  : 0x%x\n", hdr->code_dma_base1);
+       nvkm_debug(subdev, "\tdataDmaBase1  : 0x%x\n", hdr->data_dma_base1);
+       nvkm_debug(subdev, "\tovlyDmaBase1  : 0x%x\n", hdr->overlay_dma_base1);
+}
+
+void
+loader_config_v1_dump(struct nvkm_subdev *subdev,
+                     const struct loader_config_v1 *hdr)
+{
+       nvkm_debug(subdev, "loaderConfig\n");
+       nvkm_debug(subdev, "\treserved      : 0x%08x\n", hdr->reserved);
+       nvkm_debug(subdev, "\tdmaIdx        : %d\n", hdr->dma_idx);
+       nvkm_debug(subdev, "\tcodeDmaBase   : 0x%llx\n", hdr->code_dma_base);
+       nvkm_debug(subdev, "\tcodeSizeTotal : 0x%x\n", hdr->code_size_total);
+       nvkm_debug(subdev, "\tcodeSizeToLoad: 0x%x\n", hdr->code_size_to_load);
+       nvkm_debug(subdev, "\tcodeEntryPoint: 0x%x\n", hdr->code_entry_point);
+       nvkm_debug(subdev, "\tdataDmaBase   : 0x%llx\n", hdr->data_dma_base);
+       nvkm_debug(subdev, "\tdataSize      : 0x%x\n", hdr->data_size);
+       nvkm_debug(subdev, "\toverlayDmaBase: 0x%llx\n", hdr->overlay_dma_base);
+       nvkm_debug(subdev, "\targc          : 0x%08x\n", hdr->argc);
+       nvkm_debug(subdev, "\targv          : 0x%08x\n", hdr->argv);
+}
+
+void
+flcn_bl_dmem_desc_dump(struct nvkm_subdev *subdev,
+                      const struct flcn_bl_dmem_desc *hdr)
+{
+       nvkm_debug(subdev, "flcnBlDmemDesc\n");
+       nvkm_debug(subdev, "\treserved      : 0x%08x 0x%08x 0x%08x 0x%08x\n",
+                  hdr->reserved[0], hdr->reserved[1], hdr->reserved[2],
+                  hdr->reserved[3]);
+       nvkm_debug(subdev, "\tsignature     : 0x%08x 0x%08x 0x%08x 0x%08x\n",
+                  hdr->signature[0], hdr->signature[1], hdr->signature[2],
+                  hdr->signature[3]);
+       nvkm_debug(subdev, "\tctxDma        : %d\n", hdr->ctx_dma);
+       nvkm_debug(subdev, "\tcodeDmaBase   : 0x%x\n", hdr->code_dma_base);
+       nvkm_debug(subdev, "\tnonSecCodeOff : 0x%x\n", hdr->non_sec_code_off);
+       nvkm_debug(subdev, "\tnonSecCodeSize: 0x%x\n", hdr->non_sec_code_size);
+       nvkm_debug(subdev, "\tsecCodeOff    : 0x%x\n", hdr->sec_code_off);
+       nvkm_debug(subdev, "\tsecCodeSize   : 0x%x\n", hdr->sec_code_size);
+       nvkm_debug(subdev, "\tcodeEntryPoint: 0x%x\n", hdr->code_entry_point);
+       nvkm_debug(subdev, "\tdataDmaBase   : 0x%x\n", hdr->data_dma_base);
+       nvkm_debug(subdev, "\tdataSize      : 0x%x\n", hdr->data_size);
+       nvkm_debug(subdev, "\tcodeDmaBase1  : 0x%x\n", hdr->code_dma_base1);
+       nvkm_debug(subdev, "\tdataDmaBase1  : 0x%x\n", hdr->data_dma_base1);
+}
+
+void
+flcn_bl_dmem_desc_v1_dump(struct nvkm_subdev *subdev,
+                         const struct flcn_bl_dmem_desc_v1 *hdr)
+{
+       nvkm_debug(subdev, "flcnBlDmemDesc\n");
+       nvkm_debug(subdev, "\treserved      : 0x%08x 0x%08x 0x%08x 0x%08x\n",
+                  hdr->reserved[0], hdr->reserved[1], hdr->reserved[2],
+                  hdr->reserved[3]);
+       nvkm_debug(subdev, "\tsignature     : 0x%08x 0x%08x 0x%08x 0x%08x\n",
+                  hdr->signature[0], hdr->signature[1], hdr->signature[2],
+                  hdr->signature[3]);
+       nvkm_debug(subdev, "\tctxDma        : %d\n", hdr->ctx_dma);
+       nvkm_debug(subdev, "\tcodeDmaBase   : 0x%llx\n", hdr->code_dma_base);
+       nvkm_debug(subdev, "\tnonSecCodeOff : 0x%x\n", hdr->non_sec_code_off);
+       nvkm_debug(subdev, "\tnonSecCodeSize: 0x%x\n", hdr->non_sec_code_size);
+       nvkm_debug(subdev, "\tsecCodeOff    : 0x%x\n", hdr->sec_code_off);
+       nvkm_debug(subdev, "\tsecCodeSize   : 0x%x\n", hdr->sec_code_size);
+       nvkm_debug(subdev, "\tcodeEntryPoint: 0x%x\n", hdr->code_entry_point);
+       nvkm_debug(subdev, "\tdataDmaBase   : 0x%llx\n", hdr->data_dma_base);
+       nvkm_debug(subdev, "\tdataSize      : 0x%x\n", hdr->data_size);
+}
+
+void
+flcn_bl_dmem_desc_v2_dump(struct nvkm_subdev *subdev,
+                         const struct flcn_bl_dmem_desc_v2 *hdr)
+{
+       flcn_bl_dmem_desc_v1_dump(subdev, (void *)hdr);
+       nvkm_debug(subdev, "\targc          : 0x%08x\n", hdr->argc);
+       nvkm_debug(subdev, "\targv          : 0x%08x\n", hdr->argv);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/nvfw/fw.c b/drivers/gpu/drm/nouveau/nvkm/nvfw/fw.c
new file mode 100644 (file)
index 0000000..746803b
--- /dev/null
@@ -0,0 +1,51 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <core/subdev.h>
+#include <nvfw/fw.h>
+
+const struct nvfw_bin_hdr *
+nvfw_bin_hdr(struct nvkm_subdev *subdev, const void *data)
+{
+       const struct nvfw_bin_hdr *hdr = data;
+       nvkm_debug(subdev, "binHdr:\n");
+       nvkm_debug(subdev, "\tbinMagic         : 0x%08x\n", hdr->bin_magic);
+       nvkm_debug(subdev, "\tbinVer           : %d\n", hdr->bin_ver);
+       nvkm_debug(subdev, "\tbinSize          : %d\n", hdr->bin_size);
+       nvkm_debug(subdev, "\theaderOffset     : 0x%x\n", hdr->header_offset);
+       nvkm_debug(subdev, "\tdataOffset       : 0x%x\n", hdr->data_offset);
+       nvkm_debug(subdev, "\tdataSize         : 0x%x\n", hdr->data_size);
+       return hdr;
+}
+
+const struct nvfw_bl_desc *
+nvfw_bl_desc(struct nvkm_subdev *subdev, const void *data)
+{
+       const struct nvfw_bl_desc *hdr = data;
+       nvkm_debug(subdev, "blDesc\n");
+       nvkm_debug(subdev, "\tstartTag         : 0x%x\n", hdr->start_tag);
+       nvkm_debug(subdev, "\tdmemLoadOff      : 0x%x\n", hdr->dmem_load_off);
+       nvkm_debug(subdev, "\tcodeOff          : 0x%x\n", hdr->code_off);
+       nvkm_debug(subdev, "\tcodeSize         : 0x%x\n", hdr->code_size);
+       nvkm_debug(subdev, "\tdataOff          : 0x%x\n", hdr->data_off);
+       nvkm_debug(subdev, "\tdataSize         : 0x%x\n", hdr->data_size);
+       return hdr;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/nvfw/hs.c b/drivers/gpu/drm/nouveau/nvkm/nvfw/hs.c
new file mode 100644 (file)
index 0000000..04ed77c
--- /dev/null
@@ -0,0 +1,62 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <core/subdev.h>
+#include <nvfw/hs.h>
+
+const struct nvfw_hs_header *
+nvfw_hs_header(struct nvkm_subdev *subdev, const void *data)
+{
+       const struct nvfw_hs_header *hdr = data;
+       nvkm_debug(subdev, "hsHeader:\n");
+       nvkm_debug(subdev, "\tsigDbgOffset     : 0x%x\n", hdr->sig_dbg_offset);
+       nvkm_debug(subdev, "\tsigDbgSize       : 0x%x\n", hdr->sig_dbg_size);
+       nvkm_debug(subdev, "\tsigProdOffset    : 0x%x\n", hdr->sig_prod_offset);
+       nvkm_debug(subdev, "\tsigProdSize      : 0x%x\n", hdr->sig_prod_size);
+       nvkm_debug(subdev, "\tpatchLoc         : 0x%x\n", hdr->patch_loc);
+       nvkm_debug(subdev, "\tpatchSig         : 0x%x\n", hdr->patch_sig);
+       nvkm_debug(subdev, "\thdrOffset        : 0x%x\n", hdr->hdr_offset);
+       nvkm_debug(subdev, "\thdrSize          : 0x%x\n", hdr->hdr_size);
+       return hdr;
+}
+
+const struct nvfw_hs_load_header *
+nvfw_hs_load_header(struct nvkm_subdev *subdev, const void *data)
+{
+       const struct nvfw_hs_load_header *hdr = data;
+       int i;
+
+       nvkm_debug(subdev, "hsLoadHeader:\n");
+       nvkm_debug(subdev, "\tnonSecCodeOff    : 0x%x\n",
+                          hdr->non_sec_code_off);
+       nvkm_debug(subdev, "\tnonSecCodeSize   : 0x%x\n",
+                          hdr->non_sec_code_size);
+       nvkm_debug(subdev, "\tdataDmaBase      : 0x%x\n", hdr->data_dma_base);
+       nvkm_debug(subdev, "\tdataSize         : 0x%x\n", hdr->data_size);
+       nvkm_debug(subdev, "\tnumApps          : 0x%x\n", hdr->num_apps);
+       for (i = 0; i < hdr->num_apps; i++) {
+               nvkm_debug(subdev,
+                          "\tApp[%d]           : offset 0x%x size 0x%x\n", i,
+                          hdr->apps[(i * 2) + 0], hdr->apps[(i * 2) + 1]);
+       }
+
+       return hdr;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/nvfw/ls.c b/drivers/gpu/drm/nouveau/nvkm/nvfw/ls.c
new file mode 100644 (file)
index 0000000..b847f28
--- /dev/null
@@ -0,0 +1,108 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <core/subdev.h>
+#include <nvfw/ls.h>
+
+static void
+nvfw_ls_desc_head(struct nvkm_subdev *subdev,
+                 const struct nvfw_ls_desc_head *hdr)
+{
+       char *date;
+
+       nvkm_debug(subdev, "lsUcodeImgDesc:\n");
+       nvkm_debug(subdev, "\tdescriptorSize       : %d\n",
+                          hdr->descriptor_size);
+       nvkm_debug(subdev, "\timageSize            : %d\n", hdr->image_size);
+       nvkm_debug(subdev, "\ttoolsVersion         : 0x%x\n",
+                          hdr->tools_version);
+       nvkm_debug(subdev, "\tappVersion           : 0x%x\n", hdr->app_version);
+
+       date = kstrndup(hdr->date, sizeof(hdr->date), GFP_KERNEL);
+       nvkm_debug(subdev, "\tdate                 : %s\n", date);
+       kfree(date);
+
+       nvkm_debug(subdev, "\tbootloaderStartOffset: 0x%x\n",
+                          hdr->bootloader_start_offset);
+       nvkm_debug(subdev, "\tbootloaderSize       : 0x%x\n",
+                          hdr->bootloader_size);
+       nvkm_debug(subdev, "\tbootloaderImemOffset : 0x%x\n",
+                          hdr->bootloader_imem_offset);
+       nvkm_debug(subdev, "\tbootloaderEntryPoint : 0x%x\n",
+                          hdr->bootloader_entry_point);
+
+       nvkm_debug(subdev, "\tappStartOffset       : 0x%x\n",
+                          hdr->app_start_offset);
+       nvkm_debug(subdev, "\tappSize              : 0x%x\n", hdr->app_size);
+       nvkm_debug(subdev, "\tappImemOffset        : 0x%x\n",
+                          hdr->app_imem_offset);
+       nvkm_debug(subdev, "\tappImemEntry         : 0x%x\n",
+                          hdr->app_imem_entry);
+       nvkm_debug(subdev, "\tappDmemOffset        : 0x%x\n",
+                          hdr->app_dmem_offset);
+       nvkm_debug(subdev, "\tappResidentCodeOffset: 0x%x\n",
+                          hdr->app_resident_code_offset);
+       nvkm_debug(subdev, "\tappResidentCodeSize  : 0x%x\n",
+                          hdr->app_resident_code_size);
+       nvkm_debug(subdev, "\tappResidentDataOffset: 0x%x\n",
+                          hdr->app_resident_data_offset);
+       nvkm_debug(subdev, "\tappResidentDataSize  : 0x%x\n",
+                          hdr->app_resident_data_size);
+}
+
+const struct nvfw_ls_desc *
+nvfw_ls_desc(struct nvkm_subdev *subdev, const void *data)
+{
+       const struct nvfw_ls_desc *hdr = data;
+       int i;
+
+       nvfw_ls_desc_head(subdev, &hdr->head);
+
+       nvkm_debug(subdev, "\tnbOverlays           : %d\n", hdr->nb_overlays);
+       for (i = 0; i < ARRAY_SIZE(hdr->load_ovl); i++) {
+               nvkm_debug(subdev, "\tloadOvl[%d]          : 0x%x %d\n", i,
+                          hdr->load_ovl[i].start, hdr->load_ovl[i].size);
+       }
+       nvkm_debug(subdev, "\tcompressed           : %d\n", hdr->compressed);
+
+       return hdr;
+}
+
+const struct nvfw_ls_desc_v1 *
+nvfw_ls_desc_v1(struct nvkm_subdev *subdev, const void *data)
+{
+       const struct nvfw_ls_desc_v1 *hdr = data;
+       int i;
+
+       nvfw_ls_desc_head(subdev, &hdr->head);
+
+       nvkm_debug(subdev, "\tnbImemOverlays       : %d\n",
+                          hdr->nb_imem_overlays);
+       nvkm_debug(subdev, "\tnbDmemOverlays       : %d\n",
+                          hdr->nb_dmem_overlays);
+       for (i = 0; i < ARRAY_SIZE(hdr->load_ovl); i++) {
+               nvkm_debug(subdev, "\tloadOvl[%2d]          : 0x%x %d\n", i,
+                          hdr->load_ovl[i].start, hdr->load_ovl[i].size);
+       }
+       nvkm_debug(subdev, "\tcompressed           : %d\n", hdr->compressed);
+
+       return hdr;
+}
index 4e136f3..fb4fff1 100644 (file)
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: MIT
+include $(src)/nvkm/subdev/acr/Kbuild
 include $(src)/nvkm/subdev/bar/Kbuild
 include $(src)/nvkm/subdev/bios/Kbuild
 include $(src)/nvkm/subdev/bus/Kbuild
@@ -19,7 +20,6 @@ include $(src)/nvkm/subdev/mmu/Kbuild
 include $(src)/nvkm/subdev/mxm/Kbuild
 include $(src)/nvkm/subdev/pci/Kbuild
 include $(src)/nvkm/subdev/pmu/Kbuild
-include $(src)/nvkm/subdev/secboot/Kbuild
 include $(src)/nvkm/subdev/therm/Kbuild
 include $(src)/nvkm/subdev/timer/Kbuild
 include $(src)/nvkm/subdev/top/Kbuild
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/Kbuild
new file mode 100644 (file)
index 0000000..5b9f64a
--- /dev/null
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: MIT
+nvkm-y += nvkm/subdev/acr/base.o
+nvkm-y += nvkm/subdev/acr/hsfw.o
+nvkm-y += nvkm/subdev/acr/lsfw.o
+nvkm-y += nvkm/subdev/acr/gm200.o
+nvkm-y += nvkm/subdev/acr/gm20b.o
+nvkm-y += nvkm/subdev/acr/gp102.o
+nvkm-y += nvkm/subdev/acr/gp108.o
+nvkm-y += nvkm/subdev/acr/gp10b.o
+nvkm-y += nvkm/subdev/acr/tu102.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c
new file mode 100644 (file)
index 0000000..8eb2a93
--- /dev/null
@@ -0,0 +1,411 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/firmware.h>
+#include <core/memory.h>
+#include <subdev/mmu.h>
+
+static struct nvkm_acr_hsf *
+nvkm_acr_hsf_find(struct nvkm_acr *acr, const char *name)
+{
+       struct nvkm_acr_hsf *hsf;
+       list_for_each_entry(hsf, &acr->hsf, head) {
+               if (!strcmp(hsf->name, name))
+                       return hsf;
+       }
+       return NULL;
+}
+
+int
+nvkm_acr_hsf_boot(struct nvkm_acr *acr, const char *name)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       struct nvkm_acr_hsf *hsf;
+       int ret;
+
+       hsf = nvkm_acr_hsf_find(acr, name);
+       if (!hsf)
+               return -EINVAL;
+
+       nvkm_debug(subdev, "executing %s binary\n", hsf->name);
+       ret = nvkm_falcon_get(hsf->falcon, subdev);
+       if (ret)
+               return ret;
+
+       ret = hsf->func->boot(acr, hsf);
+       nvkm_falcon_put(hsf->falcon, subdev);
+       if (ret) {
+               nvkm_error(subdev, "%s binary failed\n", hsf->name);
+               return ret;
+       }
+
+       nvkm_debug(subdev, "%s binary completed successfully\n", hsf->name);
+       return 0;
+}
+
+static void
+nvkm_acr_unload(struct nvkm_acr *acr)
+{
+       if (acr->done) {
+               nvkm_acr_hsf_boot(acr, "unload");
+               acr->done = false;
+       }
+}
+
+static int
+nvkm_acr_load(struct nvkm_acr *acr)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       struct nvkm_acr_lsf *lsf;
+       u64 start, limit;
+       int ret;
+
+       if (list_empty(&acr->lsf)) {
+               nvkm_debug(subdev, "No LSF(s) present.\n");
+               return 0;
+       }
+
+       ret = acr->func->init(acr);
+       if (ret)
+               return ret;
+
+       acr->func->wpr_check(acr, &start, &limit);
+
+       if (start != acr->wpr_start || limit != acr->wpr_end) {
+               nvkm_error(subdev, "WPR not configured as expected: "
+                                  "%016llx-%016llx vs %016llx-%016llx\n",
+                          acr->wpr_start, acr->wpr_end, start, limit);
+               return -EIO;
+       }
+
+       acr->done = true;
+
+       list_for_each_entry(lsf, &acr->lsf, head) {
+               if (lsf->func->boot) {
+                       ret = lsf->func->boot(lsf->falcon);
+                       if (ret)
+                               break;
+               }
+       }
+
+       return ret;
+}
+
+static int
+nvkm_acr_reload(struct nvkm_acr *acr)
+{
+       nvkm_acr_unload(acr);
+       return nvkm_acr_load(acr);
+}
+
+static struct nvkm_acr_lsf *
+nvkm_acr_falcon(struct nvkm_device *device)
+{
+       struct nvkm_acr *acr = device->acr;
+       struct nvkm_acr_lsf *lsf;
+
+       if (acr) {
+               list_for_each_entry(lsf, &acr->lsf, head) {
+                       if (lsf->func->bootstrap_falcon)
+                               return lsf;
+               }
+       }
+
+       return NULL;
+}
+
+int
+nvkm_acr_bootstrap_falcons(struct nvkm_device *device, unsigned long mask)
+{
+       struct nvkm_acr_lsf *acrflcn = nvkm_acr_falcon(device);
+       struct nvkm_acr *acr = device->acr;
+       unsigned long id;
+
+       if (!acrflcn) {
+               int ret = nvkm_acr_reload(acr);
+               if (ret)
+                       return ret;
+
+               return acr->done ? 0 : -EINVAL;
+       }
+
+       if (acrflcn->func->bootstrap_multiple_falcons) {
+               return acrflcn->func->
+                       bootstrap_multiple_falcons(acrflcn->falcon, mask);
+       }
+
+       for_each_set_bit(id, &mask, NVKM_ACR_LSF_NUM) {
+               int ret = acrflcn->func->bootstrap_falcon(acrflcn->falcon, id);
+               if (ret)
+                       return ret;
+       }
+
+       return 0;
+}
+
+bool
+nvkm_acr_managed_falcon(struct nvkm_device *device, enum nvkm_acr_lsf_id id)
+{
+       struct nvkm_acr *acr = device->acr;
+       struct nvkm_acr_lsf *lsf;
+
+       if (acr) {
+               list_for_each_entry(lsf, &acr->lsf, head) {
+                       if (lsf->id == id)
+                               return true;
+               }
+       }
+
+       return false;
+}
+
+static int
+nvkm_acr_fini(struct nvkm_subdev *subdev, bool suspend)
+{
+       nvkm_acr_unload(nvkm_acr(subdev));
+       return 0;
+}
+
+static int
+nvkm_acr_init(struct nvkm_subdev *subdev)
+{
+       if (!nvkm_acr_falcon(subdev->device))
+               return 0;
+
+       return nvkm_acr_load(nvkm_acr(subdev));
+}
+
+static void
+nvkm_acr_cleanup(struct nvkm_acr *acr)
+{
+       nvkm_acr_lsfw_del_all(acr);
+       nvkm_acr_hsfw_del_all(acr);
+       nvkm_firmware_put(acr->wpr_fw);
+       acr->wpr_fw = NULL;
+}
+
+static int
+nvkm_acr_oneinit(struct nvkm_subdev *subdev)
+{
+       struct nvkm_device *device = subdev->device;
+       struct nvkm_acr *acr = nvkm_acr(subdev);
+       struct nvkm_acr_hsfw *hsfw;
+       struct nvkm_acr_lsfw *lsfw, *lsft;
+       struct nvkm_acr_lsf *lsf;
+       u32 wpr_size = 0;
+       int ret, i;
+
+       if (list_empty(&acr->hsfw)) {
+               nvkm_debug(subdev, "No HSFW(s)\n");
+               nvkm_acr_cleanup(acr);
+               return 0;
+       }
+
+       /* Determine layout/size of WPR image up-front, as we need to know
+        * it to allocate memory before we begin constructing it.
+        */
+       list_for_each_entry_safe(lsfw, lsft, &acr->lsfw, head) {
+               /* Cull unknown falcons that are present in WPR image. */
+               if (acr->wpr_fw) {
+                       if (!lsfw->func) {
+                               nvkm_acr_lsfw_del(lsfw);
+                               continue;
+                       }
+
+                       wpr_size = acr->wpr_fw->size;
+               }
+
+               /* Ensure we've fetched falcon configuration. */
+               ret = nvkm_falcon_get(lsfw->falcon, subdev);
+               if (ret)
+                       return ret;
+
+               nvkm_falcon_put(lsfw->falcon, subdev);
+
+               if (!(lsf = kmalloc(sizeof(*lsf), GFP_KERNEL)))
+                       return -ENOMEM;
+               lsf->func = lsfw->func;
+               lsf->falcon = lsfw->falcon;
+               lsf->id = lsfw->id;
+               list_add_tail(&lsf->head, &acr->lsf);
+       }
+
+       if (!acr->wpr_fw || acr->wpr_comp)
+               wpr_size = acr->func->wpr_layout(acr);
+
+       /* Allocate/Locate WPR + fill ucode blob pointer.
+        *
+        *  dGPU: allocate WPR + shadow blob
+        * Tegra: locate WPR with regs, ensure size is sufficient,
+        *        allocate ucode blob.
+        */
+       ret = acr->func->wpr_alloc(acr, wpr_size);
+       if (ret)
+               return ret;
+
+       nvkm_debug(subdev, "WPR region is from 0x%llx-0x%llx (shadow 0x%llx)\n",
+                  acr->wpr_start, acr->wpr_end, acr->shadow_start);
+
+       /* Write WPR to ucode blob. */
+       nvkm_kmap(acr->wpr);
+       if (acr->wpr_fw && !acr->wpr_comp)
+               nvkm_wobj(acr->wpr, 0, acr->wpr_fw->data, acr->wpr_fw->size);
+
+       if (!acr->wpr_fw || acr->wpr_comp)
+               acr->func->wpr_build(acr, nvkm_acr_falcon(device));
+       acr->func->wpr_patch(acr, (s64)acr->wpr_start - acr->wpr_prev);
+
+       if (acr->wpr_fw && acr->wpr_comp) {
+               nvkm_kmap(acr->wpr);
+               for (i = 0; i < acr->wpr_fw->size; i += 4) {
+                       u32 us = nvkm_ro32(acr->wpr, i);
+                       u32 fw = ((u32 *)acr->wpr_fw->data)[i/4];
+                       if (fw != us) {
+                               nvkm_warn(subdev, "%08x: %08x %08x\n",
+                                         i, us, fw);
+                       }
+               }
+               return -EINVAL;
+       }
+       nvkm_done(acr->wpr);
+
+       /* Allocate instance block for ACR-related stuff. */
+       ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0, true,
+                             &acr->inst);
+       if (ret)
+               return ret;
+
+       ret = nvkm_vmm_new(device, 0, 0, NULL, 0, NULL, "acr", &acr->vmm);
+       if (ret)
+               return ret;
+
+       acr->vmm->debug = acr->subdev.debug;
+
+       ret = nvkm_vmm_join(acr->vmm, acr->inst);
+       if (ret)
+               return ret;
+
+       /* Load HS firmware blobs into ACR VMM. */
+       list_for_each_entry(hsfw, &acr->hsfw, head) {
+               nvkm_debug(subdev, "loading %s fw\n", hsfw->name);
+               ret = hsfw->func->load(acr, hsfw);
+               if (ret)
+                       return ret;
+       }
+
+       /* Kill temporary data. */
+       nvkm_acr_cleanup(acr);
+       return 0;
+}
+
+static void *
+nvkm_acr_dtor(struct nvkm_subdev *subdev)
+{
+       struct nvkm_acr *acr = nvkm_acr(subdev);
+       struct nvkm_acr_hsf *hsf, *hst;
+       struct nvkm_acr_lsf *lsf, *lst;
+
+       list_for_each_entry_safe(hsf, hst, &acr->hsf, head) {
+               nvkm_vmm_put(acr->vmm, &hsf->vma);
+               nvkm_memory_unref(&hsf->ucode);
+               kfree(hsf->imem);
+               list_del(&hsf->head);
+               kfree(hsf);
+       }
+
+       nvkm_vmm_part(acr->vmm, acr->inst);
+       nvkm_vmm_unref(&acr->vmm);
+       nvkm_memory_unref(&acr->inst);
+
+       nvkm_memory_unref(&acr->wpr);
+
+       list_for_each_entry_safe(lsf, lst, &acr->lsf, head) {
+               list_del(&lsf->head);
+               kfree(lsf);
+       }
+
+       nvkm_acr_cleanup(acr);
+       return acr;
+}
+
+static const struct nvkm_subdev_func
+nvkm_acr = {
+       .dtor = nvkm_acr_dtor,
+       .oneinit = nvkm_acr_oneinit,
+       .init = nvkm_acr_init,
+       .fini = nvkm_acr_fini,
+};
+
+static int
+nvkm_acr_ctor_wpr(struct nvkm_acr *acr, int ver)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       struct nvkm_device *device = subdev->device;
+       int ret;
+
+       ret = nvkm_firmware_get(subdev, "acr/wpr", ver, &acr->wpr_fw);
+       if (ret < 0)
+               return ret;
+
+       /* Pre-add LSFs in the order they appear in the FW WPR image so that
+        * we're able to do a binary comparison with our own generator.
+        */
+       ret = acr->func->wpr_parse(acr);
+       if (ret)
+               return ret;
+
+       acr->wpr_comp = nvkm_boolopt(device->cfgopt, "NvAcrWprCompare", false);
+       acr->wpr_prev = nvkm_longopt(device->cfgopt, "NvAcrWprPrevAddr", 0);
+       return 0;
+}
+
+int
+nvkm_acr_new_(const struct nvkm_acr_fwif *fwif, struct nvkm_device *device,
+             int index, struct nvkm_acr **pacr)
+{
+       struct nvkm_acr *acr;
+       long wprfw;
+
+       if (!(acr = *pacr = kzalloc(sizeof(*acr), GFP_KERNEL)))
+               return -ENOMEM;
+       nvkm_subdev_ctor(&nvkm_acr, device, index, &acr->subdev);
+       INIT_LIST_HEAD(&acr->hsfw);
+       INIT_LIST_HEAD(&acr->lsfw);
+       INIT_LIST_HEAD(&acr->hsf);
+       INIT_LIST_HEAD(&acr->lsf);
+
+       fwif = nvkm_firmware_load(&acr->subdev, fwif, "Acr", acr);
+       if (IS_ERR(fwif))
+               return PTR_ERR(fwif);
+
+       acr->func = fwif->func;
+
+       wprfw = nvkm_longopt(device->cfgopt, "NvAcrWpr", -1);
+       if (wprfw >= 0) {
+               int ret = nvkm_acr_ctor_wpr(acr, wprfw);
+               if (ret)
+                       return ret;
+       }
+
+       return 0;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.c
new file mode 100644 (file)
index 0000000..9a63940
--- /dev/null
@@ -0,0 +1,470 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/falcon.h>
+#include <core/firmware.h>
+#include <core/memory.h>
+#include <subdev/mc.h>
+#include <subdev/mmu.h>
+#include <subdev/pmu.h>
+#include <subdev/timer.h>
+
+#include <nvfw/acr.h>
+#include <nvfw/flcn.h>
+
+int
+gm200_acr_init(struct nvkm_acr *acr)
+{
+       return nvkm_acr_hsf_boot(acr, "load");
+}
+
+void
+gm200_acr_wpr_check(struct nvkm_acr *acr, u64 *start, u64 *limit)
+{
+       struct nvkm_device *device = acr->subdev.device;
+
+       nvkm_wr32(device, 0x100cd4, 2);
+       *start = (u64)(nvkm_rd32(device, 0x100cd4) & 0xffffff00) << 8;
+       nvkm_wr32(device, 0x100cd4, 3);
+       *limit = (u64)(nvkm_rd32(device, 0x100cd4) & 0xffffff00) << 8;
+       *limit = *limit + 0x20000;
+}
+
+void
+gm200_acr_wpr_patch(struct nvkm_acr *acr, s64 adjust)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       struct wpr_header hdr;
+       struct lsb_header lsb;
+       struct nvkm_acr_lsfw *lsfw;
+       u32 offset = 0;
+
+       do {
+               nvkm_robj(acr->wpr, offset, &hdr, sizeof(hdr));
+               wpr_header_dump(subdev, &hdr);
+
+               list_for_each_entry(lsfw, &acr->lsfw, head) {
+                       if (lsfw->id != hdr.falcon_id)
+                               continue;
+
+                       nvkm_robj(acr->wpr, hdr.lsb_offset, &lsb, sizeof(lsb));
+                       lsb_header_dump(subdev, &lsb);
+
+                       lsfw->func->bld_patch(acr, lsb.tail.bl_data_off, adjust);
+                       break;
+               }
+               offset += sizeof(hdr);
+       } while (hdr.falcon_id != WPR_HEADER_V0_FALCON_ID_INVALID);
+}
+
+void
+gm200_acr_wpr_build_lsb_tail(struct nvkm_acr_lsfw *lsfw,
+                            struct lsb_header_tail *hdr)
+{
+       hdr->ucode_off = lsfw->offset.img;
+       hdr->ucode_size = lsfw->ucode_size;
+       hdr->data_size = lsfw->data_size;
+       hdr->bl_code_size = lsfw->bootloader_size;
+       hdr->bl_imem_off = lsfw->bootloader_imem_offset;
+       hdr->bl_data_off = lsfw->offset.bld;
+       hdr->bl_data_size = lsfw->bl_data_size;
+       hdr->app_code_off = lsfw->app_start_offset +
+                          lsfw->app_resident_code_offset;
+       hdr->app_code_size = lsfw->app_resident_code_size;
+       hdr->app_data_off = lsfw->app_start_offset +
+                          lsfw->app_resident_data_offset;
+       hdr->app_data_size = lsfw->app_resident_data_size;
+       hdr->flags = lsfw->func->flags;
+}
+
+static int
+gm200_acr_wpr_build_lsb(struct nvkm_acr *acr, struct nvkm_acr_lsfw *lsfw)
+{
+       struct lsb_header hdr;
+
+       if (WARN_ON(lsfw->sig->size != sizeof(hdr.signature)))
+               return -EINVAL;
+
+       memcpy(&hdr.signature, lsfw->sig->data, lsfw->sig->size);
+       gm200_acr_wpr_build_lsb_tail(lsfw, &hdr.tail);
+
+       nvkm_wobj(acr->wpr, lsfw->offset.lsb, &hdr, sizeof(hdr));
+       return 0;
+}
+
+int
+gm200_acr_wpr_build(struct nvkm_acr *acr, struct nvkm_acr_lsf *rtos)
+{
+       struct nvkm_acr_lsfw *lsfw;
+       u32 offset = 0;
+       int ret;
+
+       /* Fill per-LSF structures. */
+       list_for_each_entry(lsfw, &acr->lsfw, head) {
+               struct wpr_header hdr = {
+                       .falcon_id = lsfw->id,
+                       .lsb_offset = lsfw->offset.lsb,
+                       .bootstrap_owner = NVKM_ACR_LSF_PMU,
+                       .lazy_bootstrap = rtos && lsfw->id != rtos->id,
+                       .status = WPR_HEADER_V0_STATUS_COPY,
+               };
+
+               /* Write WPR header. */
+               nvkm_wobj(acr->wpr, offset, &hdr, sizeof(hdr));
+               offset += sizeof(hdr);
+
+               /* Write LSB header. */
+               ret = gm200_acr_wpr_build_lsb(acr, lsfw);
+               if (ret)
+                       return ret;
+
+               /* Write ucode image. */
+               nvkm_wobj(acr->wpr, lsfw->offset.img,
+                                   lsfw->img.data,
+                                   lsfw->img.size);
+
+               /* Write bootloader data. */
+               lsfw->func->bld_write(acr, lsfw->offset.bld, lsfw);
+       }
+
+       /* Finalise WPR. */
+       nvkm_wo32(acr->wpr, offset, WPR_HEADER_V0_FALCON_ID_INVALID);
+       return 0;
+}
+
+static int
+gm200_acr_wpr_alloc(struct nvkm_acr *acr, u32 wpr_size)
+{
+       int ret = nvkm_memory_new(acr->subdev.device, NVKM_MEM_TARGET_INST,
+                                 ALIGN(wpr_size, 0x40000), 0x40000, true,
+                                 &acr->wpr);
+       if (ret)
+               return ret;
+
+       acr->wpr_start = nvkm_memory_addr(acr->wpr);
+       acr->wpr_end = acr->wpr_start + nvkm_memory_size(acr->wpr);
+       return 0;
+}
+
+u32
+gm200_acr_wpr_layout(struct nvkm_acr *acr)
+{
+       struct nvkm_acr_lsfw *lsfw;
+       u32 wpr = 0;
+
+       wpr += 11 /* MAX_LSF */ * sizeof(struct wpr_header);
+
+       list_for_each_entry(lsfw, &acr->lsfw, head) {
+               wpr  = ALIGN(wpr, 256);
+               lsfw->offset.lsb = wpr;
+               wpr += sizeof(struct lsb_header);
+
+               wpr  = ALIGN(wpr, 4096);
+               lsfw->offset.img = wpr;
+               wpr += lsfw->img.size;
+
+               wpr  = ALIGN(wpr, 256);
+               lsfw->offset.bld = wpr;
+               lsfw->bl_data_size = ALIGN(lsfw->func->bld_size, 256);
+               wpr += lsfw->bl_data_size;
+       }
+
+       return wpr;
+}
+
+int
+gm200_acr_wpr_parse(struct nvkm_acr *acr)
+{
+       const struct wpr_header *hdr = (void *)acr->wpr_fw->data;
+
+       while (hdr->falcon_id != WPR_HEADER_V0_FALCON_ID_INVALID) {
+               wpr_header_dump(&acr->subdev, hdr);
+               if (!nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id))
+                       return -ENOMEM;
+       }
+
+       return 0;
+}
+
+void
+gm200_acr_hsfw_bld(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf)
+{
+       struct flcn_bl_dmem_desc_v1 hsdesc = {
+               .ctx_dma = FALCON_DMAIDX_VIRT,
+               .code_dma_base = hsf->vma->addr,
+               .non_sec_code_off = hsf->non_sec_addr,
+               .non_sec_code_size = hsf->non_sec_size,
+               .sec_code_off = hsf->sec_addr,
+               .sec_code_size = hsf->sec_size,
+               .code_entry_point = 0,
+               .data_dma_base = hsf->vma->addr + hsf->data_addr,
+               .data_size = hsf->data_size,
+       };
+
+       flcn_bl_dmem_desc_v1_dump(&acr->subdev, &hsdesc);
+
+       nvkm_falcon_load_dmem(hsf->falcon, &hsdesc, 0, sizeof(hsdesc), 0);
+}
+
+int
+gm200_acr_hsfw_boot(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf,
+                   u32 intr_clear, u32 mbox0_ok)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       struct nvkm_device *device = subdev->device;
+       struct nvkm_falcon *falcon = hsf->falcon;
+       u32 mbox0, mbox1;
+       int ret;
+
+       /* Reset falcon. */
+       nvkm_falcon_reset(falcon);
+       nvkm_falcon_bind_context(falcon, acr->inst);
+
+       /* Load bootloader into IMEM. */
+       nvkm_falcon_load_imem(falcon, hsf->imem,
+                                     falcon->code.limit - hsf->imem_size,
+                                     hsf->imem_size,
+                                     hsf->imem_tag,
+                                     0, false);
+
+       /* Load bootloader data into DMEM. */
+       hsf->func->bld(acr, hsf);
+
+       /* Boot the falcon. */
+       nvkm_mc_intr_mask(device, falcon->owner->index, false);
+
+       nvkm_falcon_wr32(falcon, 0x040, 0xdeada5a5);
+       nvkm_falcon_set_start_addr(falcon, hsf->imem_tag << 8);
+       nvkm_falcon_start(falcon);
+       ret = nvkm_falcon_wait_for_halt(falcon, 100);
+       if (ret)
+               return ret;
+
+       /* Check for successful completion. */
+       mbox0 = nvkm_falcon_rd32(falcon, 0x040);
+       mbox1 = nvkm_falcon_rd32(falcon, 0x044);
+       nvkm_debug(subdev, "mailbox %08x %08x\n", mbox0, mbox1);
+       if (mbox0 && mbox0 != mbox0_ok)
+               return -EIO;
+
+       nvkm_falcon_clear_interrupt(falcon, intr_clear);
+       nvkm_mc_intr_mask(device, falcon->owner->index, true);
+       return ret;
+}
+
+int
+gm200_acr_hsfw_load(struct nvkm_acr *acr, struct nvkm_acr_hsfw *hsfw,
+                   struct nvkm_falcon *falcon)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       struct nvkm_acr_hsf *hsf;
+       int ret;
+
+       /* Patch the appropriate signature (production/debug) into the FW
+        * image, as determined by the mode the falcon is in.
+        */
+       ret = nvkm_falcon_get(falcon, subdev);
+       if (ret)
+               return ret;
+
+       if (hsfw->sig.patch_loc) {
+               if (!falcon->debug) {
+                       nvkm_debug(subdev, "patching production signature\n");
+                       memcpy(hsfw->image + hsfw->sig.patch_loc,
+                              hsfw->sig.prod.data,
+                              hsfw->sig.prod.size);
+               } else {
+                       nvkm_debug(subdev, "patching debug signature\n");
+                       memcpy(hsfw->image + hsfw->sig.patch_loc,
+                              hsfw->sig.dbg.data,
+                              hsfw->sig.dbg.size);
+               }
+       }
+
+       nvkm_falcon_put(falcon, subdev);
+
+       if (!(hsf = kzalloc(sizeof(*hsf), GFP_KERNEL)))
+               return -ENOMEM;
+       hsf->func = hsfw->func;
+       hsf->name = hsfw->name;
+       list_add_tail(&hsf->head, &acr->hsf);
+
+       hsf->imem_size = hsfw->imem_size;
+       hsf->imem_tag = hsfw->imem_tag;
+       hsf->imem = kmemdup(hsfw->imem, hsfw->imem_size, GFP_KERNEL);
+       if (!hsf->imem)
+               return -ENOMEM;
+
+       hsf->non_sec_addr = hsfw->non_sec_addr;
+       hsf->non_sec_size = hsfw->non_sec_size;
+       hsf->sec_addr = hsfw->sec_addr;
+       hsf->sec_size = hsfw->sec_size;
+       hsf->data_addr = hsfw->data_addr;
+       hsf->data_size = hsfw->data_size;
+
+       /* Make the FW image accessible to the HS bootloader. */
+       ret = nvkm_memory_new(subdev->device, NVKM_MEM_TARGET_INST,
+                             hsfw->image_size, 0x1000, false, &hsf->ucode);
+       if (ret)
+               return ret;
+
+       nvkm_kmap(hsf->ucode);
+       nvkm_wobj(hsf->ucode, 0, hsfw->image, hsfw->image_size);
+       nvkm_done(hsf->ucode);
+
+       ret = nvkm_vmm_get(acr->vmm, 12, nvkm_memory_size(hsf->ucode),
+                          &hsf->vma);
+       if (ret)
+               return ret;
+
+       ret = nvkm_memory_map(hsf->ucode, 0, acr->vmm, hsf->vma, NULL, 0);
+       if (ret)
+               return ret;
+
+       hsf->falcon = falcon;
+       return 0;
+}
+
+int
+gm200_acr_unload_boot(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf)
+{
+       return gm200_acr_hsfw_boot(acr, hsf, 0, 0x1d);
+}
+
+int
+gm200_acr_unload_load(struct nvkm_acr *acr, struct nvkm_acr_hsfw *hsfw)
+{
+       return gm200_acr_hsfw_load(acr, hsfw, &acr->subdev.device->pmu->falcon);
+}
+
+const struct nvkm_acr_hsf_func
+gm200_acr_unload_0 = {
+       .load = gm200_acr_unload_load,
+       .boot = gm200_acr_unload_boot,
+       .bld = gm200_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/gm200/acr/ucode_unload.bin");
+MODULE_FIRMWARE("nvidia/gm204/acr/ucode_unload.bin");
+MODULE_FIRMWARE("nvidia/gm206/acr/ucode_unload.bin");
+MODULE_FIRMWARE("nvidia/gp100/acr/ucode_unload.bin");
+
+static const struct nvkm_acr_hsf_fwif
+gm200_acr_unload_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gm200_acr_unload_0 },
+       {}
+};
+
+int
+gm200_acr_load_boot(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf)
+{
+       return gm200_acr_hsfw_boot(acr, hsf, 0x10, 0);
+}
+
+static int
+gm200_acr_load_load(struct nvkm_acr *acr, struct nvkm_acr_hsfw *hsfw)
+{
+       struct flcn_acr_desc *desc = (void *)&hsfw->image[hsfw->data_addr];
+
+       desc->wpr_region_id = 1;
+       desc->regions.no_regions = 2;
+       desc->regions.region_props[0].start_addr = acr->wpr_start >> 8;
+       desc->regions.region_props[0].end_addr = acr->wpr_end >> 8;
+       desc->regions.region_props[0].region_id = 1;
+       desc->regions.region_props[0].read_mask = 0xf;
+       desc->regions.region_props[0].write_mask = 0xc;
+       desc->regions.region_props[0].client_mask = 0x2;
+       flcn_acr_desc_dump(&acr->subdev, desc);
+
+       return gm200_acr_hsfw_load(acr, hsfw, &acr->subdev.device->pmu->falcon);
+}
+
+static const struct nvkm_acr_hsf_func
+gm200_acr_load_0 = {
+       .load = gm200_acr_load_load,
+       .boot = gm200_acr_load_boot,
+       .bld = gm200_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/gm200/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gm200/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gm204/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gm204/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gm206/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gm206/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gp100/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp100/acr/ucode_load.bin");
+
+static const struct nvkm_acr_hsf_fwif
+gm200_acr_load_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gm200_acr_load_0 },
+       {}
+};
+
+static const struct nvkm_acr_func
+gm200_acr = {
+       .load = gm200_acr_load_fwif,
+       .unload = gm200_acr_unload_fwif,
+       .wpr_parse = gm200_acr_wpr_parse,
+       .wpr_layout = gm200_acr_wpr_layout,
+       .wpr_alloc = gm200_acr_wpr_alloc,
+       .wpr_build = gm200_acr_wpr_build,
+       .wpr_patch = gm200_acr_wpr_patch,
+       .wpr_check = gm200_acr_wpr_check,
+       .init = gm200_acr_init,
+};
+
+static int
+gm200_acr_load(struct nvkm_acr *acr, int ver, const struct nvkm_acr_fwif *fwif)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       const struct nvkm_acr_hsf_fwif *hsfwif;
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->load, "AcrLoad",
+                                   acr, "acr/bl", "acr/ucode_load", "load");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->unload, "AcrUnload",
+                                   acr, "acr/bl", "acr/ucode_unload",
+                                   "unload");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       return 0;
+}
+
+static const struct nvkm_acr_fwif
+gm200_acr_fwif[] = {
+       { 0, gm200_acr_load, &gm200_acr },
+       {}
+};
+
+int
+gm200_acr_new(struct nvkm_device *device, int index, struct nvkm_acr **pacr)
+{
+       return nvkm_acr_new_(gm200_acr_fwif, device, index, pacr);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm20b.c
new file mode 100644 (file)
index 0000000..034a6ed
--- /dev/null
@@ -0,0 +1,134 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/firmware.h>
+#include <core/memory.h>
+#include <subdev/mmu.h>
+#include <subdev/pmu.h>
+
+#include <nvfw/acr.h>
+#include <nvfw/flcn.h>
+
+int
+gm20b_acr_wpr_alloc(struct nvkm_acr *acr, u32 wpr_size)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+
+       acr->func->wpr_check(acr, &acr->wpr_start, &acr->wpr_end);
+
+       if ((acr->wpr_end - acr->wpr_start) < wpr_size) {
+               nvkm_error(subdev, "WPR image too big for WPR!\n");
+               return -ENOSPC;
+       }
+
+       return nvkm_memory_new(subdev->device, NVKM_MEM_TARGET_INST,
+                              wpr_size, 0, true, &acr->wpr);
+}
+
+static void
+gm20b_acr_load_bld(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf)
+{
+       struct flcn_bl_dmem_desc hsdesc = {
+               .ctx_dma = FALCON_DMAIDX_VIRT,
+               .code_dma_base = hsf->vma->addr >> 8,
+               .non_sec_code_off = hsf->non_sec_addr,
+               .non_sec_code_size = hsf->non_sec_size,
+               .sec_code_off = hsf->sec_addr,
+               .sec_code_size = hsf->sec_size,
+               .code_entry_point = 0,
+               .data_dma_base = (hsf->vma->addr + hsf->data_addr) >> 8,
+               .data_size = hsf->data_size,
+       };
+
+       flcn_bl_dmem_desc_dump(&acr->subdev, &hsdesc);
+
+       nvkm_falcon_load_dmem(hsf->falcon, &hsdesc, 0, sizeof(hsdesc), 0);
+}
+
+static int
+gm20b_acr_load_load(struct nvkm_acr *acr, struct nvkm_acr_hsfw *hsfw)
+{
+       struct flcn_acr_desc *desc = (void *)&hsfw->image[hsfw->data_addr];
+
+       desc->ucode_blob_base = nvkm_memory_addr(acr->wpr);
+       desc->ucode_blob_size = nvkm_memory_size(acr->wpr);
+       flcn_acr_desc_dump(&acr->subdev, desc);
+
+       return gm200_acr_hsfw_load(acr, hsfw, &acr->subdev.device->pmu->falcon);
+}
+
+const struct nvkm_acr_hsf_func
+gm20b_acr_load_0 = {
+       .load = gm20b_acr_load_load,
+       .boot = gm200_acr_load_boot,
+       .bld = gm20b_acr_load_bld,
+};
+
+#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+MODULE_FIRMWARE("nvidia/gm20b/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gm20b/acr/ucode_load.bin");
+#endif
+
+static const struct nvkm_acr_hsf_fwif
+gm20b_acr_load_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gm20b_acr_load_0 },
+       {}
+};
+
+static const struct nvkm_acr_func
+gm20b_acr = {
+       .load = gm20b_acr_load_fwif,
+       .wpr_parse = gm200_acr_wpr_parse,
+       .wpr_layout = gm200_acr_wpr_layout,
+       .wpr_alloc = gm20b_acr_wpr_alloc,
+       .wpr_build = gm200_acr_wpr_build,
+       .wpr_patch = gm200_acr_wpr_patch,
+       .wpr_check = gm200_acr_wpr_check,
+       .init = gm200_acr_init,
+};
+
+int
+gm20b_acr_load(struct nvkm_acr *acr, int ver, const struct nvkm_acr_fwif *fwif)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       const struct nvkm_acr_hsf_fwif *hsfwif;
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->load, "AcrLoad",
+                                   acr, "acr/bl", "acr/ucode_load", "load");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       return 0;
+}
+
+static const struct nvkm_acr_fwif
+gm20b_acr_fwif[] = {
+       { 0, gm20b_acr_load, &gm20b_acr },
+       {}
+};
+
+int
+gm20b_acr_new(struct nvkm_device *device, int index, struct nvkm_acr **pacr)
+{
+       return nvkm_acr_new_(gm20b_acr_fwif, device, index, pacr);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.c
new file mode 100644 (file)
index 0000000..49e11c4
--- /dev/null
@@ -0,0 +1,281 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/firmware.h>
+#include <core/memory.h>
+#include <subdev/mmu.h>
+#include <engine/sec2.h>
+
+#include <nvfw/acr.h>
+#include <nvfw/flcn.h>
+
+void
+gp102_acr_wpr_patch(struct nvkm_acr *acr, s64 adjust)
+{
+       struct wpr_header_v1 hdr;
+       struct lsb_header_v1 lsb;
+       struct nvkm_acr_lsfw *lsfw;
+       u32 offset = 0;
+
+       do {
+               nvkm_robj(acr->wpr, offset, &hdr, sizeof(hdr));
+               wpr_header_v1_dump(&acr->subdev, &hdr);
+
+               list_for_each_entry(lsfw, &acr->lsfw, head) {
+                       if (lsfw->id != hdr.falcon_id)
+                               continue;
+
+                       nvkm_robj(acr->wpr, hdr.lsb_offset, &lsb, sizeof(lsb));
+                       lsb_header_v1_dump(&acr->subdev, &lsb);
+
+                       lsfw->func->bld_patch(acr, lsb.tail.bl_data_off, adjust);
+                       break;
+               }
+
+               offset += sizeof(hdr);
+       } while (hdr.falcon_id != WPR_HEADER_V1_FALCON_ID_INVALID);
+}
+
+int
+gp102_acr_wpr_build_lsb(struct nvkm_acr *acr, struct nvkm_acr_lsfw *lsfw)
+{
+       struct lsb_header_v1 hdr;
+
+       if (WARN_ON(lsfw->sig->size != sizeof(hdr.signature)))
+               return -EINVAL;
+
+       memcpy(&hdr.signature, lsfw->sig->data, lsfw->sig->size);
+       gm200_acr_wpr_build_lsb_tail(lsfw, &hdr.tail);
+
+       nvkm_wobj(acr->wpr, lsfw->offset.lsb, &hdr, sizeof(hdr));
+       return 0;
+}
+
+int
+gp102_acr_wpr_build(struct nvkm_acr *acr, struct nvkm_acr_lsf *rtos)
+{
+       struct nvkm_acr_lsfw *lsfw;
+       u32 offset = 0;
+       int ret;
+
+       /* Fill per-LSF structures. */
+       list_for_each_entry(lsfw, &acr->lsfw, head) {
+               struct lsf_signature_v1 *sig = (void *)lsfw->sig->data;
+               struct wpr_header_v1 hdr = {
+                       .falcon_id = lsfw->id,
+                       .lsb_offset = lsfw->offset.lsb,
+                       .bootstrap_owner = NVKM_ACR_LSF_SEC2,
+                       .lazy_bootstrap = rtos && lsfw->id != rtos->id,
+                       .bin_version = sig->version,
+                       .status = WPR_HEADER_V1_STATUS_COPY,
+               };
+
+               /* Write WPR header. */
+               nvkm_wobj(acr->wpr, offset, &hdr, sizeof(hdr));
+               offset += sizeof(hdr);
+
+               /* Write LSB header. */
+               ret = gp102_acr_wpr_build_lsb(acr, lsfw);
+               if (ret)
+                       return ret;
+
+               /* Write ucode image. */
+               nvkm_wobj(acr->wpr, lsfw->offset.img,
+                                   lsfw->img.data,
+                                   lsfw->img.size);
+
+               /* Write bootloader data. */
+               lsfw->func->bld_write(acr, lsfw->offset.bld, lsfw);
+       }
+
+       /* Finalise WPR. */
+       nvkm_wo32(acr->wpr, offset, WPR_HEADER_V1_FALCON_ID_INVALID);
+       return 0;
+}
+
+int
+gp102_acr_wpr_alloc(struct nvkm_acr *acr, u32 wpr_size)
+{
+       int ret = nvkm_memory_new(acr->subdev.device, NVKM_MEM_TARGET_INST,
+                                 ALIGN(wpr_size, 0x40000) << 1, 0x40000, true,
+                                 &acr->wpr);
+       if (ret)
+               return ret;
+
+       acr->shadow_start = nvkm_memory_addr(acr->wpr);
+       acr->wpr_start = acr->shadow_start + (nvkm_memory_size(acr->wpr) >> 1);
+       acr->wpr_end = acr->wpr_start + (nvkm_memory_size(acr->wpr) >> 1);
+       return 0;
+}
+
+u32
+gp102_acr_wpr_layout(struct nvkm_acr *acr)
+{
+       struct nvkm_acr_lsfw *lsfw;
+       u32 wpr = 0;
+
+       wpr += 11 /* MAX_LSF */ * sizeof(struct wpr_header_v1);
+       wpr  = ALIGN(wpr, 256);
+
+       wpr += 0x100; /* Shared sub-WPR headers. */
+
+       list_for_each_entry(lsfw, &acr->lsfw, head) {
+               wpr  = ALIGN(wpr, 256);
+               lsfw->offset.lsb = wpr;
+               wpr += sizeof(struct lsb_header_v1);
+
+               wpr  = ALIGN(wpr, 4096);
+               lsfw->offset.img = wpr;
+               wpr += lsfw->img.size;
+
+               wpr  = ALIGN(wpr, 256);
+               lsfw->offset.bld = wpr;
+               lsfw->bl_data_size = ALIGN(lsfw->func->bld_size, 256);
+               wpr += lsfw->bl_data_size;
+       }
+
+       return wpr;
+}
+
+int
+gp102_acr_wpr_parse(struct nvkm_acr *acr)
+{
+       const struct wpr_header_v1 *hdr = (void *)acr->wpr_fw->data;
+
+       while (hdr->falcon_id != WPR_HEADER_V1_FALCON_ID_INVALID) {
+               wpr_header_v1_dump(&acr->subdev, hdr);
+               if (!nvkm_acr_lsfw_add(NULL, acr, NULL, (hdr++)->falcon_id))
+                       return -ENOMEM;
+       }
+
+       return 0;
+}
+
+MODULE_FIRMWARE("nvidia/gp102/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/gp102/acr/ucode_unload.bin");
+
+MODULE_FIRMWARE("nvidia/gp104/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/gp104/acr/ucode_unload.bin");
+
+MODULE_FIRMWARE("nvidia/gp106/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/gp106/acr/ucode_unload.bin");
+
+MODULE_FIRMWARE("nvidia/gp107/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/gp107/acr/ucode_unload.bin");
+
+static const struct nvkm_acr_hsf_fwif
+gp102_acr_unload_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gm200_acr_unload_0 },
+       {}
+};
+
+int
+gp102_acr_load_load(struct nvkm_acr *acr, struct nvkm_acr_hsfw *hsfw)
+{
+       struct flcn_acr_desc_v1 *desc = (void *)&hsfw->image[hsfw->data_addr];
+
+       desc->wpr_region_id = 1;
+       desc->regions.no_regions = 2;
+       desc->regions.region_props[0].start_addr = acr->wpr_start >> 8;
+       desc->regions.region_props[0].end_addr = acr->wpr_end >> 8;
+       desc->regions.region_props[0].region_id = 1;
+       desc->regions.region_props[0].read_mask = 0xf;
+       desc->regions.region_props[0].write_mask = 0xc;
+       desc->regions.region_props[0].client_mask = 0x2;
+       desc->regions.region_props[0].shadow_mem_start_addr =
+               acr->shadow_start >> 8;
+       flcn_acr_desc_v1_dump(&acr->subdev, desc);
+
+       return gm200_acr_hsfw_load(acr, hsfw,
+                                 &acr->subdev.device->sec2->falcon);
+}
+
+static const struct nvkm_acr_hsf_func
+gp102_acr_load_0 = {
+       .load = gp102_acr_load_load,
+       .boot = gm200_acr_load_boot,
+       .bld = gm200_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/gp102/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp102/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gp104/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp104/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gp106/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp106/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gp107/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp107/acr/ucode_load.bin");
+
+static const struct nvkm_acr_hsf_fwif
+gp102_acr_load_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gp102_acr_load_0 },
+       {}
+};
+
+static const struct nvkm_acr_func
+gp102_acr = {
+       .load = gp102_acr_load_fwif,
+       .unload = gp102_acr_unload_fwif,
+       .wpr_parse = gp102_acr_wpr_parse,
+       .wpr_layout = gp102_acr_wpr_layout,
+       .wpr_alloc = gp102_acr_wpr_alloc,
+       .wpr_build = gp102_acr_wpr_build,
+       .wpr_patch = gp102_acr_wpr_patch,
+       .wpr_check = gm200_acr_wpr_check,
+       .init = gm200_acr_init,
+};
+
+int
+gp102_acr_load(struct nvkm_acr *acr, int ver, const struct nvkm_acr_fwif *fwif)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       const struct nvkm_acr_hsf_fwif *hsfwif;
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->load, "AcrLoad",
+                                   acr, "acr/bl", "acr/ucode_load", "load");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->unload, "AcrUnload",
+                                   acr, "acr/unload_bl", "acr/ucode_unload",
+                                   "unload");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       return 0;
+}
+
+static const struct nvkm_acr_fwif
+gp102_acr_fwif[] = {
+       { 0, gp102_acr_load, &gp102_acr },
+       {}
+};
+
+int
+gp102_acr_new(struct nvkm_device *device, int index, struct nvkm_acr **pacr)
+{
+       return nvkm_acr_new_(gp102_acr_fwif, device, index, pacr);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp108.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp108.c
new file mode 100644 (file)
index 0000000..f10dc91
--- /dev/null
@@ -0,0 +1,111 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <subdev/mmu.h>
+
+#include <nvfw/flcn.h>
+
+void
+gp108_acr_hsfw_bld(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf)
+{
+       struct flcn_bl_dmem_desc_v2 hsdesc = {
+               .ctx_dma = FALCON_DMAIDX_VIRT,
+               .code_dma_base = hsf->vma->addr,
+               .non_sec_code_off = hsf->non_sec_addr,
+               .non_sec_code_size = hsf->non_sec_size,
+               .sec_code_off = hsf->sec_addr,
+               .sec_code_size = hsf->sec_size,
+               .code_entry_point = 0,
+               .data_dma_base = hsf->vma->addr + hsf->data_addr,
+               .data_size = hsf->data_size,
+               .argc = 0,
+               .argv = 0,
+       };
+
+       flcn_bl_dmem_desc_v2_dump(&acr->subdev, &hsdesc);
+
+       nvkm_falcon_load_dmem(hsf->falcon, &hsdesc, 0, sizeof(hsdesc), 0);
+}
+
+const struct nvkm_acr_hsf_func
+gp108_acr_unload_0 = {
+       .load = gm200_acr_unload_load,
+       .boot = gm200_acr_unload_boot,
+       .bld = gp108_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/gp108/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/gp108/acr/ucode_unload.bin");
+
+MODULE_FIRMWARE("nvidia/gv100/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/gv100/acr/ucode_unload.bin");
+
+static const struct nvkm_acr_hsf_fwif
+gp108_acr_unload_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gp108_acr_unload_0 },
+       {}
+};
+
+static const struct nvkm_acr_hsf_func
+gp108_acr_load_0 = {
+       .load = gp102_acr_load_load,
+       .boot = gm200_acr_load_boot,
+       .bld = gp108_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/gp108/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp108/acr/ucode_load.bin");
+
+MODULE_FIRMWARE("nvidia/gv100/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gv100/acr/ucode_load.bin");
+
+static const struct nvkm_acr_hsf_fwif
+gp108_acr_load_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gp108_acr_load_0 },
+       {}
+};
+
+static const struct nvkm_acr_func
+gp108_acr = {
+       .load = gp108_acr_load_fwif,
+       .unload = gp108_acr_unload_fwif,
+       .wpr_parse = gp102_acr_wpr_parse,
+       .wpr_layout = gp102_acr_wpr_layout,
+       .wpr_alloc = gp102_acr_wpr_alloc,
+       .wpr_build = gp102_acr_wpr_build,
+       .wpr_patch = gp102_acr_wpr_patch,
+       .wpr_check = gm200_acr_wpr_check,
+       .init = gm200_acr_init,
+};
+
+static const struct nvkm_acr_fwif
+gp108_acr_fwif[] = {
+       { 0, gp102_acr_load, &gp108_acr },
+       {}
+};
+
+int
+gp108_acr_new(struct nvkm_device *device, int index, struct nvkm_acr **pacr)
+{
+       return nvkm_acr_new_(gp108_acr_fwif, device, index, pacr);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp10b.c
new file mode 100644 (file)
index 0000000..39de642
--- /dev/null
@@ -0,0 +1,57 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC)
+MODULE_FIRMWARE("nvidia/gp10b/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/gp10b/acr/ucode_load.bin");
+#endif
+
+static const struct nvkm_acr_hsf_fwif
+gp10b_acr_load_fwif[] = {
+       { 0, nvkm_acr_hsfw_load, &gm20b_acr_load_0 },
+       {}
+};
+
+static const struct nvkm_acr_func
+gp10b_acr = {
+       .load = gp10b_acr_load_fwif,
+       .wpr_parse = gm200_acr_wpr_parse,
+       .wpr_layout = gm200_acr_wpr_layout,
+       .wpr_alloc = gm20b_acr_wpr_alloc,
+       .wpr_build = gm200_acr_wpr_build,
+       .wpr_patch = gm200_acr_wpr_patch,
+       .wpr_check = gm200_acr_wpr_check,
+       .init = gm200_acr_init,
+};
+
+static const struct nvkm_acr_fwif
+gp10b_acr_fwif[] = {
+       { 0, gm20b_acr_load, &gp10b_acr },
+       {}
+};
+
+int
+gp10b_acr_new(struct nvkm_device *device, int index, struct nvkm_acr **pacr)
+{
+       return nvkm_acr_new_(gp10b_acr_fwif, device, index, pacr);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
new file mode 100644 (file)
index 0000000..aecce2d
--- /dev/null
@@ -0,0 +1,180 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/firmware.h>
+
+#include <nvfw/fw.h>
+#include <nvfw/hs.h>
+
+static void
+nvkm_acr_hsfw_del(struct nvkm_acr_hsfw *hsfw)
+{
+       list_del(&hsfw->head);
+       kfree(hsfw->imem);
+       kfree(hsfw->image);
+       kfree(hsfw->sig.prod.data);
+       kfree(hsfw->sig.dbg.data);
+       kfree(hsfw);
+}
+
+void
+nvkm_acr_hsfw_del_all(struct nvkm_acr *acr)
+{
+       struct nvkm_acr_hsfw *hsfw, *hsft;
+       list_for_each_entry_safe(hsfw, hsft, &acr->hsfw, head) {
+               nvkm_acr_hsfw_del(hsfw);
+       }
+}
+
+static int
+nvkm_acr_hsfw_load_image(struct nvkm_acr *acr, const char *name, int ver,
+                        struct nvkm_acr_hsfw *hsfw)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       const struct firmware *fw;
+       const struct nvfw_bin_hdr *hdr;
+       const struct nvfw_hs_header *fwhdr;
+       const struct nvfw_hs_load_header *lhdr;
+       u32 loc, sig;
+       int ret;
+
+       ret = nvkm_firmware_get(subdev, name, ver, &fw);
+       if (ret < 0)
+               return ret;
+
+       hdr = nvfw_bin_hdr(subdev, fw->data);
+       fwhdr = nvfw_hs_header(subdev, fw->data + hdr->header_offset);
+
+       /* Earlier FW releases by NVIDIA for Nouveau's use aren't in NVIDIA's
+        * standard format, and don't have the indirection seen in the 0x10de
+        * case.
+        */
+       switch (hdr->bin_magic) {
+       case 0x000010de:
+               loc = *(u32 *)(fw->data + fwhdr->patch_loc);
+               sig = *(u32 *)(fw->data + fwhdr->patch_sig);
+               break;
+       case 0x3b1d14f0:
+               loc = fwhdr->patch_loc;
+               sig = fwhdr->patch_sig;
+               break;
+       default:
+               ret = -EINVAL;
+               goto done;
+       }
+
+       lhdr = nvfw_hs_load_header(subdev, fw->data + fwhdr->hdr_offset);
+
+       if (!(hsfw->image = kmalloc(hdr->data_size, GFP_KERNEL))) {
+               ret = -ENOMEM;
+               goto done;
+       }
+
+       memcpy(hsfw->image, fw->data + hdr->data_offset, hdr->data_size);
+       hsfw->image_size = hdr->data_size;
+       hsfw->non_sec_addr = lhdr->non_sec_code_off;
+       hsfw->non_sec_size = lhdr->non_sec_code_size;
+       hsfw->sec_addr = lhdr->apps[0];
+       hsfw->sec_size = lhdr->apps[lhdr->num_apps];
+       hsfw->data_addr = lhdr->data_dma_base;
+       hsfw->data_size = lhdr->data_size;
+
+       hsfw->sig.prod.size = fwhdr->sig_prod_size;
+       hsfw->sig.prod.data = kmalloc(hsfw->sig.prod.size, GFP_KERNEL);
+       if (!hsfw->sig.prod.data) {
+               ret = -ENOMEM;
+               goto done;
+       }
+
+       memcpy(hsfw->sig.prod.data, fw->data + fwhdr->sig_prod_offset + sig,
+              hsfw->sig.prod.size);
+
+       hsfw->sig.dbg.size = fwhdr->sig_dbg_size;
+       hsfw->sig.dbg.data = kmalloc(hsfw->sig.dbg.size, GFP_KERNEL);
+       if (!hsfw->sig.dbg.data) {
+               ret = -ENOMEM;
+               goto done;
+       }
+
+       memcpy(hsfw->sig.dbg.data, fw->data + fwhdr->sig_dbg_offset + sig,
+              hsfw->sig.dbg.size);
+
+       hsfw->sig.patch_loc = loc;
+done:
+       nvkm_firmware_put(fw);
+       return ret;
+}
+
+static int
+nvkm_acr_hsfw_load_bl(struct nvkm_acr *acr, const char *name, int ver,
+                     struct nvkm_acr_hsfw *hsfw)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       const struct nvfw_bin_hdr *hdr;
+       const struct nvfw_bl_desc *desc;
+       const struct firmware *fw;
+       u8 *data;
+       int ret;
+
+       ret = nvkm_firmware_get(subdev, name, ver, &fw);
+       if (ret)
+               return ret;
+
+       hdr = nvfw_bin_hdr(subdev, fw->data);
+       desc = nvfw_bl_desc(subdev, fw->data + hdr->header_offset);
+       data = (void *)fw->data + hdr->data_offset;
+
+       hsfw->imem_size = desc->code_size;
+       hsfw->imem_tag = desc->start_tag;
+       /* kmemdup() so an allocation failure is reported, not dereferenced. */
+       hsfw->imem = kmemdup(data + desc->code_off, desc->code_size, GFP_KERNEL);
+
+       nvkm_firmware_put(fw);
+       return hsfw->imem ? 0 : -ENOMEM;
+}
+
+int
+nvkm_acr_hsfw_load(struct nvkm_acr *acr, const char *bl, const char *fw,
+                  const char *name, int version,
+                  const struct nvkm_acr_hsf_fwif *fwif)
+{
+       struct nvkm_acr_hsfw *hsfw;
+       int ret;
+
+       if (!(hsfw = kzalloc(sizeof(*hsfw), GFP_KERNEL)))
+               return -ENOMEM;
+
+       hsfw->func = fwif->func;
+       hsfw->name = name;
+       list_add_tail(&hsfw->head, &acr->hsfw);
+
+       ret = nvkm_acr_hsfw_load_bl(acr, bl, version, hsfw);
+       if (ret)
+               goto done;
+
+       ret = nvkm_acr_hsfw_load_image(acr, fw, version, hsfw);
+done:
+       if (ret)
+               nvkm_acr_hsfw_del(hsfw);
+       return ret;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/lsfw.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/lsfw.c
new file mode 100644 (file)
index 0000000..9896462
--- /dev/null
@@ -0,0 +1,249 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+#include <core/falcon.h>
+#include <core/firmware.h>
+#include <nvfw/fw.h>
+#include <nvfw/ls.h>
+
+void
+nvkm_acr_lsfw_del(struct nvkm_acr_lsfw *lsfw)
+{
+       nvkm_blob_dtor(&lsfw->img);
+       nvkm_firmware_put(lsfw->sig);
+       list_del(&lsfw->head);
+       kfree(lsfw);
+}
+
+void
+nvkm_acr_lsfw_del_all(struct nvkm_acr *acr)
+{
+       struct nvkm_acr_lsfw *lsfw, *lsft;
+       list_for_each_entry_safe(lsfw, lsft, &acr->lsfw, head) {
+               nvkm_acr_lsfw_del(lsfw);
+       }
+}
+
+static struct nvkm_acr_lsfw *
+nvkm_acr_lsfw_get(struct nvkm_acr *acr, enum nvkm_acr_lsf_id id)
+{
+       struct nvkm_acr_lsfw *lsfw;
+       list_for_each_entry(lsfw, &acr->lsfw, head) {
+               if (lsfw->id == id)
+                       return lsfw;
+       }
+       return NULL;
+}
+
+struct nvkm_acr_lsfw *
+nvkm_acr_lsfw_add(const struct nvkm_acr_lsf_func *func, struct nvkm_acr *acr,
+                struct nvkm_falcon *falcon, enum nvkm_acr_lsf_id id)
+{
+       struct nvkm_acr_lsfw *lsfw = nvkm_acr_lsfw_get(acr, id);
+
+       if (lsfw && lsfw->func) {
+               nvkm_error(&acr->subdev, "LSFW %d redefined\n", id);
+               return ERR_PTR(-EEXIST);
+       }
+
+       if (!lsfw) {
+               if (!(lsfw = kzalloc(sizeof(*lsfw), GFP_KERNEL)))
+                       return ERR_PTR(-ENOMEM);
+
+               lsfw->id = id;
+               list_add_tail(&lsfw->head, &acr->lsfw);
+       }
+
+       lsfw->func = func;
+       lsfw->falcon = falcon;
+       return lsfw;
+}
+
+static struct nvkm_acr_lsfw *
+nvkm_acr_lsfw_load_sig_image_desc_(struct nvkm_subdev *subdev,
+                                  struct nvkm_falcon *falcon,
+                                  enum nvkm_acr_lsf_id id,
+                                  const char *path, int ver,
+                                  const struct nvkm_acr_lsf_func *func,
+                                  const struct firmware **pdesc)
+{
+       struct nvkm_acr *acr = subdev->device->acr;
+       struct nvkm_acr_lsfw *lsfw;
+       int ret;
+
+       if (IS_ERR((lsfw = nvkm_acr_lsfw_add(func, acr, falcon, id))))
+               return lsfw;
+
+       ret = nvkm_firmware_load_name(subdev, path, "sig", ver, &lsfw->sig);
+       if (ret)
+               goto done;
+
+       ret = nvkm_firmware_load_blob(subdev, path, "image", ver, &lsfw->img);
+       if (ret)
+               goto done;
+
+       ret = nvkm_firmware_load_name(subdev, path, "desc", ver, pdesc);
+done:
+       if (ret) {
+               nvkm_acr_lsfw_del(lsfw);
+               return ERR_PTR(ret);
+       }
+
+       return lsfw;
+}
+
+static void
+nvkm_acr_lsfw_from_desc(const struct nvfw_ls_desc_head *desc,
+                       struct nvkm_acr_lsfw *lsfw)
+{
+       lsfw->bootloader_size = ALIGN(desc->bootloader_size, 256);
+       lsfw->bootloader_imem_offset = desc->bootloader_imem_offset;
+
+       lsfw->app_size = ALIGN(desc->app_size, 256);
+       lsfw->app_start_offset = desc->app_start_offset;
+       lsfw->app_imem_entry = desc->app_imem_entry;
+       lsfw->app_resident_code_offset = desc->app_resident_code_offset;
+       lsfw->app_resident_code_size = desc->app_resident_code_size;
+       lsfw->app_resident_data_offset = desc->app_resident_data_offset;
+       lsfw->app_resident_data_size = desc->app_resident_data_size;
+
+       lsfw->ucode_size = ALIGN(lsfw->app_resident_data_offset, 256) +
+                          lsfw->bootloader_size;
+       lsfw->data_size = lsfw->app_size + lsfw->bootloader_size -
+                         lsfw->ucode_size;
+}
+
+int
+nvkm_acr_lsfw_load_sig_image_desc(struct nvkm_subdev *subdev,
+                                 struct nvkm_falcon *falcon,
+                                 enum nvkm_acr_lsf_id id,
+                                 const char *path, int ver,
+                                 const struct nvkm_acr_lsf_func *func)
+{
+       const struct firmware *fw;
+       struct nvkm_acr_lsfw *lsfw;
+
+       lsfw = nvkm_acr_lsfw_load_sig_image_desc_(subdev, falcon, id, path, ver,
+                                                 func, &fw);
+       if (IS_ERR(lsfw))
+               return PTR_ERR(lsfw);
+
+       nvkm_acr_lsfw_from_desc(&nvfw_ls_desc(subdev, fw->data)->head, lsfw);
+       nvkm_firmware_put(fw);
+       return 0;
+}
+
+int
+nvkm_acr_lsfw_load_sig_image_desc_v1(struct nvkm_subdev *subdev,
+                                    struct nvkm_falcon *falcon,
+                                    enum nvkm_acr_lsf_id id,
+                                    const char *path, int ver,
+                                    const struct nvkm_acr_lsf_func *func)
+{
+       const struct firmware *fw;
+       struct nvkm_acr_lsfw *lsfw;
+
+       lsfw = nvkm_acr_lsfw_load_sig_image_desc_(subdev, falcon, id, path, ver,
+                                                 func, &fw);
+       if (IS_ERR(lsfw))
+               return PTR_ERR(lsfw);
+
+       nvkm_acr_lsfw_from_desc(&nvfw_ls_desc_v1(subdev, fw->data)->head, lsfw);
+       nvkm_firmware_put(fw);
+       return 0;
+}
+
+int
+nvkm_acr_lsfw_load_bl_inst_data_sig(struct nvkm_subdev *subdev,
+                                   struct nvkm_falcon *falcon,
+                                   enum nvkm_acr_lsf_id id,
+                                   const char *path, int ver,
+                                   const struct nvkm_acr_lsf_func *func)
+{
+       struct nvkm_acr *acr = subdev->device->acr;
+       struct nvkm_acr_lsfw *lsfw;
+       const struct firmware *bl = NULL, *inst = NULL, *data = NULL;
+       const struct nvfw_bin_hdr *hdr;
+       const struct nvfw_bl_desc *desc;
+       u32 *bldata;
+       int ret;
+
+       if (IS_ERR((lsfw = nvkm_acr_lsfw_add(func, acr, falcon, id))))
+               return PTR_ERR(lsfw);
+
+       ret = nvkm_firmware_load_name(subdev, path, "bl", ver, &bl);
+       if (ret)
+               goto done;
+
+       hdr = nvfw_bin_hdr(subdev, bl->data);
+       desc = nvfw_bl_desc(subdev, bl->data + hdr->header_offset);
+       bldata = (void *)(bl->data + hdr->data_offset);
+
+       ret = nvkm_firmware_load_name(subdev, path, "inst", ver, &inst);
+       if (ret)
+               goto done;
+
+       ret = nvkm_firmware_load_name(subdev, path, "data", ver, &data);
+       if (ret)
+               goto done;
+
+       ret = nvkm_firmware_load_name(subdev, path, "sig", ver, &lsfw->sig);
+       if (ret)
+               goto done;
+
+       lsfw->bootloader_size = ALIGN(desc->code_size, 256);
+       lsfw->bootloader_imem_offset = desc->start_tag << 8;
+
+       lsfw->app_start_offset = lsfw->bootloader_size;
+       lsfw->app_imem_entry = 0;
+       lsfw->app_resident_code_offset = 0;
+       lsfw->app_resident_code_size = ALIGN(inst->size, 256);
+       lsfw->app_resident_data_offset = lsfw->app_resident_code_size;
+       lsfw->app_resident_data_size = ALIGN(data->size, 256);
+       lsfw->app_size = lsfw->app_resident_code_size +
+                        lsfw->app_resident_data_size;
+
+       lsfw->img.size = lsfw->bootloader_size + lsfw->app_size;
+       if (!(lsfw->img.data = kzalloc(lsfw->img.size, GFP_KERNEL))) {
+               ret = -ENOMEM;
+               goto done;
+       }
+
+       memcpy(lsfw->img.data, bldata, lsfw->bootloader_size);
+       memcpy(lsfw->img.data + lsfw->app_start_offset +
+              lsfw->app_resident_code_offset, inst->data, inst->size);
+       memcpy(lsfw->img.data + lsfw->app_start_offset +
+              lsfw->app_resident_data_offset, data->data, data->size);
+
+       lsfw->ucode_size = ALIGN(lsfw->app_resident_data_offset, 256) +
+                          lsfw->bootloader_size;
+       lsfw->data_size = lsfw->app_size + lsfw->bootloader_size -
+                         lsfw->ucode_size;
+
+done:
+       if (ret)
+               nvkm_acr_lsfw_del(lsfw);
+       nvkm_firmware_put(data);
+       nvkm_firmware_put(inst);
+       nvkm_firmware_put(bl);
+       return ret;
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/priv.h
new file mode 100644 (file)
index 0000000..d8ba728
--- /dev/null
@@ -0,0 +1,151 @@
+#ifndef __NVKM_ACR_PRIV_H__
+#define __NVKM_ACR_PRIV_H__
+#include <subdev/acr.h>
+struct lsb_header_tail;
+
+struct nvkm_acr_fwif {
+       int version;
+       int (*load)(struct nvkm_acr *, int version,
+                   const struct nvkm_acr_fwif *);
+       const struct nvkm_acr_func *func;
+};
+
+int gm20b_acr_load(struct nvkm_acr *, int, const struct nvkm_acr_fwif *);
+int gp102_acr_load(struct nvkm_acr *, int, const struct nvkm_acr_fwif *);
+
+struct nvkm_acr_lsf;
+struct nvkm_acr_func {
+       const struct nvkm_acr_hsf_fwif *load;
+       const struct nvkm_acr_hsf_fwif *ahesasc;
+       const struct nvkm_acr_hsf_fwif *asb;
+       const struct nvkm_acr_hsf_fwif *unload;
+       int (*wpr_parse)(struct nvkm_acr *);
+       u32 (*wpr_layout)(struct nvkm_acr *);
+       int (*wpr_alloc)(struct nvkm_acr *, u32 wpr_size);
+       int (*wpr_build)(struct nvkm_acr *, struct nvkm_acr_lsf *rtos);
+       void (*wpr_patch)(struct nvkm_acr *, s64 adjust);
+       void (*wpr_check)(struct nvkm_acr *, u64 *start, u64 *limit);
+       int (*init)(struct nvkm_acr *);
+       void (*fini)(struct nvkm_acr *);
+};
+
+int gm200_acr_wpr_parse(struct nvkm_acr *);
+u32 gm200_acr_wpr_layout(struct nvkm_acr *);
+int gm200_acr_wpr_build(struct nvkm_acr *, struct nvkm_acr_lsf *);
+void gm200_acr_wpr_patch(struct nvkm_acr *, s64);
+void gm200_acr_wpr_check(struct nvkm_acr *, u64 *, u64 *);
+void gm200_acr_wpr_build_lsb_tail(struct nvkm_acr_lsfw *,
+                                 struct lsb_header_tail *);
+int gm200_acr_init(struct nvkm_acr *);
+
+int gm20b_acr_wpr_alloc(struct nvkm_acr *, u32 wpr_size);
+
+int gp102_acr_wpr_parse(struct nvkm_acr *);
+u32 gp102_acr_wpr_layout(struct nvkm_acr *);
+int gp102_acr_wpr_alloc(struct nvkm_acr *, u32 wpr_size);
+int gp102_acr_wpr_build(struct nvkm_acr *, struct nvkm_acr_lsf *);
+int gp102_acr_wpr_build_lsb(struct nvkm_acr *, struct nvkm_acr_lsfw *);
+void gp102_acr_wpr_patch(struct nvkm_acr *, s64);
+
+struct nvkm_acr_hsfw {
+       const struct nvkm_acr_hsf_func *func;
+       const char *name;
+       struct list_head head;
+
+       u32 imem_size;
+       u32 imem_tag;
+       u32 *imem;
+
+       u8 *image;
+       u32 image_size;
+       u32 non_sec_addr;
+       u32 non_sec_size;
+       u32 sec_addr;
+       u32 sec_size;
+       u32 data_addr;
+       u32 data_size;
+
+       struct {
+               struct {
+                       void *data;
+                       u32 size;
+               } prod, dbg;
+               u32 patch_loc;
+       } sig;
+};
+
+struct nvkm_acr_hsf_fwif {
+       int version;
+       int (*load)(struct nvkm_acr *, const char *bl, const char *fw,
+                   const char *name, int version,
+                   const struct nvkm_acr_hsf_fwif *);
+       const struct nvkm_acr_hsf_func *func;
+};
+
+int nvkm_acr_hsfw_load(struct nvkm_acr *, const char *, const char *,
+                      const char *, int, const struct nvkm_acr_hsf_fwif *);
+void nvkm_acr_hsfw_del_all(struct nvkm_acr *);
+
+struct nvkm_acr_hsf {
+       const struct nvkm_acr_hsf_func *func;
+       const char *name;
+       struct list_head head;
+
+       u32 imem_size;
+       u32 imem_tag;
+       u32 *imem;
+
+       u32 non_sec_addr;
+       u32 non_sec_size;
+       u32 sec_addr;
+       u32 sec_size;
+       u32 data_addr;
+       u32 data_size;
+
+       struct nvkm_memory *ucode;
+       struct nvkm_vma *vma;
+       struct nvkm_falcon *falcon;
+};
+
+struct nvkm_acr_hsf_func {
+       int (*load)(struct nvkm_acr *, struct nvkm_acr_hsfw *);
+       int (*boot)(struct nvkm_acr *, struct nvkm_acr_hsf *);
+       void (*bld)(struct nvkm_acr *, struct nvkm_acr_hsf *);
+};
+
+int gm200_acr_hsfw_load(struct nvkm_acr *, struct nvkm_acr_hsfw *,
+                       struct nvkm_falcon *);
+int gm200_acr_hsfw_boot(struct nvkm_acr *, struct nvkm_acr_hsf *,
+                       u32 clear_intr, u32 mbox0_ok);
+
+int gm200_acr_load_boot(struct nvkm_acr *, struct nvkm_acr_hsf *);
+
+extern const struct nvkm_acr_hsf_func gm200_acr_unload_0;
+int gm200_acr_unload_load(struct nvkm_acr *, struct nvkm_acr_hsfw *);
+int gm200_acr_unload_boot(struct nvkm_acr *, struct nvkm_acr_hsf *);
+void gm200_acr_hsfw_bld(struct nvkm_acr *, struct nvkm_acr_hsf *);
+
+extern const struct nvkm_acr_hsf_func gm20b_acr_load_0;
+
+int gp102_acr_load_load(struct nvkm_acr *, struct nvkm_acr_hsfw *);
+
+extern const struct nvkm_acr_hsf_func gp108_acr_unload_0;
+void gp108_acr_hsfw_bld(struct nvkm_acr *, struct nvkm_acr_hsf *);
+
+int nvkm_acr_new_(const struct nvkm_acr_fwif *, struct nvkm_device *, int,
+                 struct nvkm_acr **);
+int nvkm_acr_hsf_boot(struct nvkm_acr *, const char *name);
+
+struct nvkm_acr_lsf {
+       const struct nvkm_acr_lsf_func *func;
+       struct nvkm_falcon *falcon;
+       enum nvkm_acr_lsf_id id;
+       struct list_head head;
+};
+
+struct nvkm_acr_lsfw *nvkm_acr_lsfw_add(const struct nvkm_acr_lsf_func *,
+                                       struct nvkm_acr *, struct nvkm_falcon *,
+                                       enum nvkm_acr_lsf_id);
+void nvkm_acr_lsfw_del(struct nvkm_acr_lsfw *);
+void nvkm_acr_lsfw_del_all(struct nvkm_acr *);
+#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/tu102.c
new file mode 100644
index 0000000..7f4b89d
--- /dev/null
@@ -0,0 +1,215 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <core/firmware.h>
+#include <core/memory.h>
+#include <subdev/gsp.h>
+#include <subdev/pmu.h>
+#include <engine/sec2.h>
+
+#include <nvfw/acr.h>
+
+static int
+tu102_acr_init(struct nvkm_acr *acr)
+{
+       int ret = nvkm_acr_hsf_boot(acr, "AHESASC");
+       if (ret)
+               return ret;
+
+       return nvkm_acr_hsf_boot(acr, "ASB");
+}
+
+static int
+tu102_acr_wpr_build(struct nvkm_acr *acr, struct nvkm_acr_lsf *rtos)
+{
+       struct nvkm_acr_lsfw *lsfw;
+       u32 offset = 0;
+       int ret;
+
+       /*XXX: shared sub-WPR headers, fill terminator for now. */
+       nvkm_wo32(acr->wpr, 0x200, 0xffffffff);
+
+       /* Fill per-LSF structures. */
+       list_for_each_entry(lsfw, &acr->lsfw, head) {
+               struct lsf_signature_v1 *sig = (void *)lsfw->sig->data;
+               struct wpr_header_v1 hdr = {
+                       .falcon_id = lsfw->id,
+                       .lsb_offset = lsfw->offset.lsb,
+                       .bootstrap_owner = NVKM_ACR_LSF_GSPLITE,
+                       .lazy_bootstrap = 1,
+                       .bin_version = sig->version,
+                       .status = WPR_HEADER_V1_STATUS_COPY,
+               };
+
+               /* Write WPR header. */
+               nvkm_wobj(acr->wpr, offset, &hdr, sizeof(hdr));
+               offset += sizeof(hdr);
+
+               /* Write LSB header. */
+               ret = gp102_acr_wpr_build_lsb(acr, lsfw);
+               if (ret)
+                       return ret;
+
+               /* Write ucode image. */
+               nvkm_wobj(acr->wpr, lsfw->offset.img,
+                                   lsfw->img.data,
+                                   lsfw->img.size);
+
+               /* Write bootloader data. */
+               lsfw->func->bld_write(acr, lsfw->offset.bld, lsfw);
+       }
+
+       /* Finalise WPR. */
+       nvkm_wo32(acr->wpr, offset, WPR_HEADER_V1_FALCON_ID_INVALID);
+       return 0;
+}
+
+static int
+tu102_acr_hsfw_boot(struct nvkm_acr *acr, struct nvkm_acr_hsf *hsf)
+{
+       return gm200_acr_hsfw_boot(acr, hsf, 0, 0);
+}
+
+static int
+tu102_acr_hsfw_nofw(struct nvkm_acr *acr, const char *bl, const char *fw,
+                   const char *name, int version,
+                   const struct nvkm_acr_hsf_fwif *fwif)
+{
+       return 0;
+}
+
+MODULE_FIRMWARE("nvidia/tu102/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/tu102/acr/ucode_unload.bin");
+
+MODULE_FIRMWARE("nvidia/tu104/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/tu104/acr/ucode_unload.bin");
+
+MODULE_FIRMWARE("nvidia/tu106/acr/unload_bl.bin");
+MODULE_FIRMWARE("nvidia/tu106/acr/ucode_unload.bin");
+
+static const struct nvkm_acr_hsf_fwif
+tu102_acr_unload_fwif[] = {
+       {  0, nvkm_acr_hsfw_load, &gp108_acr_unload_0 },
+       { -1, tu102_acr_hsfw_nofw },
+       {}
+};
+
+static int
+tu102_acr_asb_load(struct nvkm_acr *acr, struct nvkm_acr_hsfw *hsfw)
+{
+       return gm200_acr_hsfw_load(acr, hsfw, &acr->subdev.device->gsp->falcon);
+}
+
+static const struct nvkm_acr_hsf_func
+tu102_acr_asb_0 = {
+       .load = tu102_acr_asb_load,
+       .boot = tu102_acr_hsfw_boot,
+       .bld = gp108_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/tu102/acr/ucode_asb.bin");
+MODULE_FIRMWARE("nvidia/tu104/acr/ucode_asb.bin");
+MODULE_FIRMWARE("nvidia/tu106/acr/ucode_asb.bin");
+
+static const struct nvkm_acr_hsf_fwif
+tu102_acr_asb_fwif[] = {
+       {  0, nvkm_acr_hsfw_load, &tu102_acr_asb_0 },
+       { -1, tu102_acr_hsfw_nofw },
+       {}
+};
+
+static const struct nvkm_acr_hsf_func
+tu102_acr_ahesasc_0 = {
+       .load = gp102_acr_load_load,
+       .boot = tu102_acr_hsfw_boot,
+       .bld = gp108_acr_hsfw_bld,
+};
+
+MODULE_FIRMWARE("nvidia/tu102/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/tu102/acr/ucode_ahesasc.bin");
+
+MODULE_FIRMWARE("nvidia/tu104/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/tu104/acr/ucode_ahesasc.bin");
+
+MODULE_FIRMWARE("nvidia/tu106/acr/bl.bin");
+MODULE_FIRMWARE("nvidia/tu106/acr/ucode_ahesasc.bin");
+
+static const struct nvkm_acr_hsf_fwif
+tu102_acr_ahesasc_fwif[] = {
+       {  0, nvkm_acr_hsfw_load, &tu102_acr_ahesasc_0 },
+       { -1, tu102_acr_hsfw_nofw },
+       {}
+};
+
+static const struct nvkm_acr_func
+tu102_acr = {
+       .ahesasc = tu102_acr_ahesasc_fwif,
+       .asb = tu102_acr_asb_fwif,
+       .unload = tu102_acr_unload_fwif,
+       .wpr_parse = gp102_acr_wpr_parse,
+       .wpr_layout = gp102_acr_wpr_layout,
+       .wpr_alloc = gp102_acr_wpr_alloc,
+       .wpr_patch = gp102_acr_wpr_patch,
+       .wpr_build = tu102_acr_wpr_build,
+       .wpr_check = gm200_acr_wpr_check,
+       .init = tu102_acr_init,
+};
+
+static int
+tu102_acr_load(struct nvkm_acr *acr, int version,
+              const struct nvkm_acr_fwif *fwif)
+{
+       struct nvkm_subdev *subdev = &acr->subdev;
+       const struct nvkm_acr_hsf_fwif *hsfwif;
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->ahesasc, "AcrAHESASC",
+                                   acr, "acr/bl", "acr/ucode_ahesasc",
+                                   "AHESASC");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->asb, "AcrASB",
+                                   acr, "acr/bl", "acr/ucode_asb", "ASB");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       hsfwif = nvkm_firmware_load(subdev, fwif->func->unload, "AcrUnload",
+                                   acr, "acr/unload_bl", "acr/ucode_unload",
+                                   "unload");
+       if (IS_ERR(hsfwif))
+               return PTR_ERR(hsfwif);
+
+       return 0;
+}
+
+static const struct nvkm_acr_fwif
+tu102_acr_fwif[] = {
+       {  0, tu102_acr_load, &tu102_acr },
+       {}
+};
+
+int
+tu102_acr_new(struct nvkm_device *device, int index, struct nvkm_acr **pacr)
+{
+       return nvkm_acr_new_(tu102_acr_fwif, device, index, pacr);
+}
index 53b9d63..d65ec71 100644
@@ -2,5 +2,6 @@
 nvkm-y += nvkm/subdev/fault/base.o
 nvkm-y += nvkm/subdev/fault/user.o
 nvkm-y += nvkm/subdev/fault/gp100.o
+nvkm-y += nvkm/subdev/fault/gp10b.o
 nvkm-y += nvkm/subdev/fault/gv100.o
 nvkm-y += nvkm/subdev/fault/tu102.o
index ca25156..f6dca97 100644
@@ -108,7 +108,7 @@ nvkm_fault_oneinit_buffer(struct nvkm_fault *fault, int id)
                return ret;
 
        /* Pin fault buffer in BAR2. */
-       buffer->addr = nvkm_memory_bar2(buffer->mem);
+       buffer->addr = fault->func->buffer.pin(buffer);
        if (buffer->addr == ~0ULL)
                return -EFAULT;
 
@@ -146,6 +146,7 @@ nvkm_fault_dtor(struct nvkm_subdev *subdev)
        struct nvkm_fault *fault = nvkm_fault(subdev);
        int i;
 
+       nvkm_notify_fini(&fault->nrpfb);
        nvkm_event_fini(&fault->event);
 
        for (i = 0; i < fault->buffer_nr; i++) {
index 4f3c4e0..f6b189c 100644
  */
 #include "priv.h"
 
+#include <core/memory.h>
 #include <subdev/mc.h>
 
 #include <nvif/class.h>
 
-static void
+void
 gp100_fault_buffer_intr(struct nvkm_fault_buffer *buffer, bool enable)
 {
        struct nvkm_device *device = buffer->fault->subdev.device;
        nvkm_mc_intr_mask(device, NVKM_SUBDEV_FAULT, enable);
 }
 
-static void
+void
 gp100_fault_buffer_fini(struct nvkm_fault_buffer *buffer)
 {
        struct nvkm_device *device = buffer->fault->subdev.device;
        nvkm_mask(device, 0x002a70, 0x00000001, 0x00000000);
 }
 
-static void
+void
 gp100_fault_buffer_init(struct nvkm_fault_buffer *buffer)
 {
        struct nvkm_device *device = buffer->fault->subdev.device;
@@ -48,7 +49,12 @@ gp100_fault_buffer_init(struct nvkm_fault_buffer *buffer)
        nvkm_mask(device, 0x002a70, 0x00000001, 0x00000001);
 }
 
-static void
+u64 gp100_fault_buffer_pin(struct nvkm_fault_buffer *buffer)
+{
+       return nvkm_memory_bar2(buffer->mem);
+}
+
+void
 gp100_fault_buffer_info(struct nvkm_fault_buffer *buffer)
 {
        buffer->entries = nvkm_rd32(buffer->fault->subdev.device, 0x002a78);
@@ -56,7 +62,7 @@ gp100_fault_buffer_info(struct nvkm_fault_buffer *buffer)
        buffer->put = 0x002a80;
 }
 
-static void
+void
 gp100_fault_intr(struct nvkm_fault *fault)
 {
        nvkm_event_send(&fault->event, 1, 0, NULL, 0);
@@ -68,6 +74,7 @@ gp100_fault = {
        .buffer.nr = 1,
        .buffer.entry_size = 32,
        .buffer.info = gp100_fault_buffer_info,
+       .buffer.pin = gp100_fault_buffer_pin,
        .buffer.init = gp100_fault_buffer_init,
        .buffer.fini = gp100_fault_buffer_fini,
        .buffer.intr = gp100_fault_buffer_intr,
@@ -1,5 +1,5 @@
 /*
- * Copyright 2019 Advanced Micro Devices, Inc.
+ * Copyright (c) 2019 NVIDIA Corporation.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
  * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
  * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
  * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
  */
 
-#ifndef _DMUB_FW_STATE_H_
-#define _DMUB_FW_STATE_H_
-
-#include "dmub_types.h"
-
-#pragma pack(push, 1)
-
-struct dmub_fw_state {
-       /**
-        * @phy_initialized_during_fw_boot:
-        *
-        * Detects if VBIOS/VBL has ran before firmware boot.
-        * A value of 1 will usually mean S0i3 boot.
-        */
-       uint8_t phy_initialized_during_fw_boot;
-
-       /**
-        * @intialized_phy:
-        *
-        * Bit vector of initialized PHY.
-        */
-       uint8_t initialized_phy;
-
-       /**
-        * @enabled_phy:
-        *
-        * Bit vector of enabled PHY for DP alt mode switch tracking.
-        */
-       uint8_t enabled_phy;
-
-       /**
-        * @dmcu_fw_loaded:
-        *
-        * DMCU auto load state.
-        */
-       uint8_t dmcu_fw_loaded;
-
-       /**
-        * @psr_state:
-        *
-        * PSR state tracking.
-        */
-       uint8_t psr_state;
+#include "priv.h"
+
+#include <core/memory.h>
+
+#include <nvif/class.h>
+
+u64
+gp10b_fault_buffer_pin(struct nvkm_fault_buffer *buffer)
+{
+       return nvkm_memory_addr(buffer->mem);
+}
+
+static const struct nvkm_fault_func
+gp10b_fault = {
+       .intr = gp100_fault_intr,
+       .buffer.nr = 1,
+       .buffer.entry_size = 32,
+       .buffer.info = gp100_fault_buffer_info,
+       .buffer.pin = gp10b_fault_buffer_pin,
+       .buffer.init = gp100_fault_buffer_init,
+       .buffer.fini = gp100_fault_buffer_fini,
+       .buffer.intr = gp100_fault_buffer_intr,
+       .user = { { 0, 0, MAXWELL_FAULT_BUFFER_A }, 0 },
 };
 
-#pragma pack(pop)
-
-#endif /* _DMUB_FW_STATE_H_ */
+int
+gp10b_fault_new(struct nvkm_device *device, int index,
+               struct nvkm_fault **pfault)
+{
+       return nvkm_fault_new_(&gp10b_fault, device, index, pfault);
+}
index 6747f09..2707be4 100644
@@ -214,6 +214,7 @@ gv100_fault = {
        .buffer.nr = 2,
        .buffer.entry_size = 32,
        .buffer.info = gv100_fault_buffer_info,
+       .buffer.pin = gp100_fault_buffer_pin,
        .buffer.init = gv100_fault_buffer_init,
        .buffer.fini = gv100_fault_buffer_fini,
        .buffer.intr = gv100_fault_buffer_intr,
index 975e66a..f6f1dd7 100644
@@ -30,6 +30,7 @@ struct nvkm_fault_func {
                int nr;
                u32 entry_size;
                void (*info)(struct nvkm_fault_buffer *);
+               u64 (*pin)(struct nvkm_fault_buffer *);
                void (*init)(struct nvkm_fault_buffer *);
                void (*fini)(struct nvkm_fault_buffer *);
                void (*intr)(struct nvkm_fault_buffer *, bool enable);
@@ -40,6 +41,15 @@ struct nvkm_fault_func {
        } user;
 };
 
+void gp100_fault_buffer_intr(struct nvkm_fault_buffer *, bool enable);
+void gp100_fault_buffer_fini(struct nvkm_fault_buffer *);
+void gp100_fault_buffer_init(struct nvkm_fault_buffer *);
+u64 gp100_fault_buffer_pin(struct nvkm_fault_buffer *);
+void gp100_fault_buffer_info(struct nvkm_fault_buffer *);
+void gp100_fault_intr(struct nvkm_fault *);
+
+u64 gp10b_fault_buffer_pin(struct nvkm_fault_buffer *);
+
 int gv100_fault_oneinit(struct nvkm_fault *);
 
 int nvkm_ufault_new(struct nvkm_device *, const struct nvkm_oclass *,
index fa1dfe5..45a6a68 100644
@@ -154,6 +154,7 @@ tu102_fault = {
        .buffer.nr = 2,
        .buffer.entry_size = 32,
        .buffer.info = tu102_fault_buffer_info,
+       .buffer.pin = gp100_fault_buffer_pin,
        .buffer.init = tu102_fault_buffer_init,
        .buffer.fini = tu102_fault_buffer_fini,
        .buffer.intr = tu102_fault_buffer_intr,
index b2bb5a3..d09db7c 100644
@@ -154,6 +154,23 @@ nvkm_fb_init(struct nvkm_subdev *subdev)
 
        if (fb->func->init_unkn)
                fb->func->init_unkn(fb);
+
+       if (fb->func->vpr.scrub_required &&
+           fb->func->vpr.scrub_required(fb)) {
+               nvkm_debug(subdev, "VPR locked, running scrubber binary\n");
+
+               ret = fb->func->vpr.scrub(fb);
+               if (ret)
+                       return ret;
+
+               if (fb->func->vpr.scrub_required(fb)) {
+                       nvkm_error(subdev, "VPR still locked after scrub!\n");
+                       return -EIO;
+               }
+
+               nvkm_debug(subdev, "VPR scrubber binary successful\n");
+       }
+
        return 0;
 }
 
@@ -172,6 +189,8 @@ nvkm_fb_dtor(struct nvkm_subdev *subdev)
        nvkm_mm_fini(&fb->tags);
        nvkm_ram_del(&fb->ram);
 
+       nvkm_blob_dtor(&fb->vpr_scrubber);
+
        if (fb->func->dtor)
                return fb->func->dtor(fb);
        return fb;
index b4d74e8..9be7316 100644
 #include "gf100.h"
 #include "ram.h"
 
+#include <core/firmware.h>
 #include <core/memory.h>
+#include <nvfw/fw.h>
+#include <nvfw/hs.h>
+#include <engine/nvdec.h>
+
+int
+gp102_fb_vpr_scrub(struct nvkm_fb *fb)
+{
+       struct nvkm_subdev *subdev = &fb->subdev;
+       struct nvkm_device *device = subdev->device;
+       struct nvkm_falcon *falcon = &device->nvdec[0]->falcon;
+       struct nvkm_blob *blob = &fb->vpr_scrubber;
+       const struct nvfw_bin_hdr *hsbin_hdr;
+       const struct nvfw_hs_header *fw_hdr;
+       const struct nvfw_hs_load_header *lhdr;
+       void *scrub_data;
+       u32 patch_loc, patch_sig;
+       int ret;
+
+       nvkm_falcon_get(falcon, subdev);
+
+       hsbin_hdr = nvfw_bin_hdr(subdev, blob->data);
+       fw_hdr = nvfw_hs_header(subdev, blob->data + hsbin_hdr->header_offset);
+       lhdr = nvfw_hs_load_header(subdev, blob->data + fw_hdr->hdr_offset);
+       scrub_data = blob->data + hsbin_hdr->data_offset;
+
+       patch_loc = *(u32 *)(blob->data + fw_hdr->patch_loc);
+       patch_sig = *(u32 *)(blob->data + fw_hdr->patch_sig);
+       if (falcon->debug) {
+               memcpy(scrub_data + patch_loc,
+                      blob->data + fw_hdr->sig_dbg_offset + patch_sig,
+                      fw_hdr->sig_dbg_size);
+       } else {
+               memcpy(scrub_data + patch_loc,
+                      blob->data + fw_hdr->sig_prod_offset + patch_sig,
+                      fw_hdr->sig_prod_size);
+       }
+
+       nvkm_falcon_reset(falcon);
+       nvkm_falcon_bind_context(falcon, NULL);
+
+       nvkm_falcon_load_imem(falcon, scrub_data, lhdr->non_sec_code_off,
+                             lhdr->non_sec_code_size,
+                             lhdr->non_sec_code_off >> 8, 0, false);
+       nvkm_falcon_load_imem(falcon, scrub_data + lhdr->apps[0],
+                             ALIGN(lhdr->apps[0], 0x100),
+                             lhdr->apps[1],
+                             lhdr->apps[0] >> 8, 0, true);
+       nvkm_falcon_load_dmem(falcon, scrub_data + lhdr->data_dma_base, 0,
+                             lhdr->data_size, 0);
+
+       nvkm_falcon_set_start_addr(falcon, 0x0);
+       nvkm_falcon_start(falcon);
+
+       ret = nvkm_falcon_wait_for_halt(falcon, 500);
+       if (ret < 0) {
+               ret = -ETIMEDOUT;
+               goto end;
+       }
+
+       /* put nvdec in clean state - without reset it will remain in HS mode */
+       nvkm_falcon_reset(falcon);
+end:
+       nvkm_falcon_put(falcon, subdev);
+       return ret;
+}
+
+bool
+gp102_fb_vpr_scrub_required(struct nvkm_fb *fb)
+{
+       struct nvkm_device *device = fb->subdev.device;
+       nvkm_wr32(device, 0x100cd0, 0x2);
+       return (nvkm_rd32(device, 0x100cd0) & 0x00000010) != 0;
+}
 
 static const struct nvkm_fb_func
 gp102_fb = {
@@ -33,11 +107,31 @@ gp102_fb = {
        .init = gp100_fb_init,
        .init_remapper = gp100_fb_init_remapper,
        .init_page = gm200_fb_init_page,
+       .vpr.scrub_required = gp102_fb_vpr_scrub_required,
+       .vpr.scrub = gp102_fb_vpr_scrub,
        .ram_new = gp100_ram_new,
 };
 
 int
+gp102_fb_new_(const struct nvkm_fb_func *func, struct nvkm_device *device,
+             int index, struct nvkm_fb **pfb)
+{
+       int ret = gf100_fb_new_(func, device, index, pfb);
+       if (ret)
+               return ret;
+
+       return nvkm_firmware_load_blob(&(*pfb)->subdev, "nvdec/scrubber", "", 0,
+                                      &(*pfb)->vpr_scrubber);
+}
+
+int
 gp102_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb)
 {
-       return gf100_fb_new_(&gp102_fb, device, index, pfb);
+       return gp102_fb_new_(&gp102_fb, device, index, pfb);
 }
+
+MODULE_FIRMWARE("nvidia/gp102/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/gp104/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/gp106/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/gp107/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/gp108/nvdec/scrubber.bin");
index 3c5e02e..389bad3 100644
@@ -35,6 +35,8 @@ gv100_fb = {
        .init = gp100_fb_init,
        .init_page = gv100_fb_init_page,
        .init_unkn = gp100_fb_init_unkn,
+       .vpr.scrub_required = gp102_fb_vpr_scrub_required,
+       .vpr.scrub = gp102_fb_vpr_scrub,
        .ram_new = gp100_ram_new,
        .default_bigpage = 16,
 };
@@ -42,5 +44,10 @@ gv100_fb = {
 int
 gv100_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb)
 {
-       return gf100_fb_new_(&gv100_fb, device, index, pfb);
+       return gp102_fb_new_(&gv100_fb, device, index, pfb);
 }
+
+MODULE_FIRMWARE("nvidia/gv100/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin");
index c4e9f55..5be9c56 100644
@@ -17,6 +17,11 @@ struct nvkm_fb_func {
        void (*intr)(struct nvkm_fb *);
 
        struct {
+               bool (*scrub_required)(struct nvkm_fb *);
+               int (*scrub)(struct nvkm_fb *);
+       } vpr;
+
+       struct {
                int regions;
                void (*init)(struct nvkm_fb *, int i, u32 addr, u32 size,
                             u32 pitch, u32 flags, struct nvkm_fb_tile *);
@@ -72,4 +77,9 @@ int gm200_fb_init_page(struct nvkm_fb *);
 
 void gp100_fb_init_remapper(struct nvkm_fb *);
 void gp100_fb_init_unkn(struct nvkm_fb *);
+
+int gp102_fb_new_(const struct nvkm_fb_func *, struct nvkm_device *, int,
+                 struct nvkm_fb **);
+bool gp102_fb_vpr_scrub_required(struct nvkm_fb *);
+int gp102_fb_vpr_scrub(struct nvkm_fb *);
 #endif
index ac87a3b..ba43fe1 100644
@@ -655,7 +655,7 @@ gf100_ram_new_(const struct nvkm_ram_func *func,
 
 static const struct nvkm_ram_func
 gf100_ram = {
-       .upper = 0x0200000000,
+       .upper = 0x0200000000ULL,
        .probe_fbp = gf100_ram_probe_fbp,
        .probe_fbp_amount = gf100_ram_probe_fbp_amount,
        .probe_fbpa_amount = gf100_ram_probe_fbpa_amount,
index 70a06e3..d97fa43 100644
@@ -43,7 +43,7 @@ gf108_ram_probe_fbp_amount(const struct nvkm_ram_func *func, u32 fbpao,
 
 static const struct nvkm_ram_func
 gf108_ram = {
-       .upper = 0x0200000000,
+       .upper = 0x0200000000ULL,
        .probe_fbp = gf100_ram_probe_fbp,
        .probe_fbp_amount = gf108_ram_probe_fbp_amount,
        .probe_fbpa_amount = gf100_ram_probe_fbpa_amount,
index 456aed1..d350d92 100644
@@ -1698,7 +1698,7 @@ gk104_ram_new_(const struct nvkm_ram_func *func, struct nvkm_fb *fb,
 
 static const struct nvkm_ram_func
 gk104_ram = {
-       .upper = 0x0200000000,
+       .upper = 0x0200000000ULL,
        .probe_fbp = gf100_ram_probe_fbp,
        .probe_fbp_amount = gf108_ram_probe_fbp_amount,
        .probe_fbpa_amount = gf100_ram_probe_fbpa_amount,
index 27c68e3..be91da8 100644
@@ -33,7 +33,7 @@ gm107_ram_probe_fbp(const struct nvkm_ram_func *func,
 
 static const struct nvkm_ram_func
 gm107_ram = {
-       .upper = 0x1000000000,
+       .upper = 0x1000000000ULL,
        .probe_fbp = gm107_ram_probe_fbp,
        .probe_fbp_amount = gf108_ram_probe_fbp_amount,
        .probe_fbpa_amount = gf100_ram_probe_fbpa_amount,
index 6b0cac1..8f91ea9 100644
@@ -48,7 +48,7 @@ gm200_ram_probe_fbp_amount(const struct nvkm_ram_func *func, u32 fbpao,
 
 static const struct nvkm_ram_func
 gm200_ram = {
-       .upper = 0x1000000000,
+       .upper = 0x1000000000ULL,
        .probe_fbp = gm107_ram_probe_fbp,
        .probe_fbp_amount = gm200_ram_probe_fbp_amount,
        .probe_fbpa_amount = gf100_ram_probe_fbpa_amount,
index adb62a6..378f6fb 100644
@@ -79,7 +79,7 @@ gp100_ram_probe_fbpa(struct nvkm_device *device, int fbpa)
 
 static const struct nvkm_ram_func
 gp100_ram = {
-       .upper = 0x1000000000,
+       .upper = 0x1000000000ULL,
        .probe_fbp = gm107_ram_probe_fbp,
        .probe_fbp_amount = gm200_ram_probe_fbp_amount,
        .probe_fbpa_amount = gp100_ram_probe_fbpa,
index e7c4f06..67cc3b3 100644
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: MIT
+nvkm-y += nvkm/subdev/gsp/base.o
 nvkm-y += nvkm/subdev/gsp/gv100.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
new file mode 100644
index 0000000..5a32df0
--- /dev/null
@@ -0,0 +1,59 @@
+/*
+ * Copyright 2019 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+#include <core/falcon.h>
+#include <core/firmware.h>
+#include <subdev/acr.h>
+#include <subdev/top.h>
+
+static void *
+nvkm_gsp_dtor(struct nvkm_subdev *subdev)
+{
+       struct nvkm_gsp *gsp = nvkm_gsp(subdev);
+       nvkm_falcon_dtor(&gsp->falcon);
+       return gsp;
+}
+
+static const struct nvkm_subdev_func
+nvkm_gsp = {
+       .dtor = nvkm_gsp_dtor,
+};
+
+int
+nvkm_gsp_new_(const struct nvkm_gsp_fwif *fwif, struct nvkm_device *device,
+             int index, struct nvkm_gsp **pgsp)
+{
+       struct nvkm_gsp *gsp;
+
+       if (!(gsp = *pgsp = kzalloc(sizeof(*gsp), GFP_KERNEL)))
+               return -ENOMEM;
+
+       nvkm_subdev_ctor(&nvkm_gsp, device, index, &gsp->subdev);
+
+       fwif = nvkm_firmware_load(&gsp->subdev, fwif, "Gsp", gsp);
+       if (IS_ERR(fwif))
+               return PTR_ERR(fwif);
+
+       return nvkm_falcon_ctor(fwif->flcn, &gsp->subdev,
+                               nvkm_subdev_name[gsp->subdev.index], 0,
+                               &gsp->falcon);
+}
index dccfaf1..2114f9b 100644
  * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
  * OTHER DEALINGS IN THE SOFTWARE.
  */
-#include <subdev/gsp.h>
-#include <subdev/top.h>
-#include <engine/falcon.h>
+#include "priv.h"
+
+static const struct nvkm_falcon_func
+gv100_gsp_flcn = {
+       .fbif = 0x600,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .bind_context = gp102_sec2_flcn_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = gp102_sec2_flcn_enable,
+       .disable = nvkm_falcon_v1_disable,
+};
 
 static int
-gv100_gsp_oneinit(struct nvkm_subdev *subdev)
-{
-       struct nvkm_gsp *gsp = nvkm_gsp(subdev);
-
-       gsp->addr = nvkm_top_addr(subdev->device, subdev->index);
-       if (!gsp->addr)
-               return -EINVAL;
-
-       return nvkm_falcon_v1_new(subdev, "GSP", gsp->addr, &gsp->falcon);
-}
-
-static void *
-gv100_gsp_dtor(struct nvkm_subdev *subdev)
+gv100_gsp_nofw(struct nvkm_gsp *gsp, int ver, const struct nvkm_gsp_fwif *fwif)
 {
-       struct nvkm_gsp *gsp = nvkm_gsp(subdev);
-       nvkm_falcon_del(&gsp->falcon);
-       return gsp;
+       return 0;
 }
 
-static const struct nvkm_subdev_func
-gv100_gsp = {
-       .dtor = gv100_gsp_dtor,
-       .oneinit = gv100_gsp_oneinit,
+struct nvkm_gsp_fwif
+gv100_gsp[] = {
+       { -1, gv100_gsp_nofw, &gv100_gsp_flcn },
+       {}
 };
 
 int
 gv100_gsp_new(struct nvkm_device *device, int index, struct nvkm_gsp **pgsp)
 {
-       struct nvkm_gsp *gsp;
-
-       if (!(gsp = *pgsp = kzalloc(sizeof(*gsp), GFP_KERNEL)))
-               return -ENOMEM;
-
-       nvkm_subdev_ctor(&gv100_gsp, device, index, &gsp->subdev);
-       return 0;
+       return nvkm_gsp_new_(gv100_gsp, device, index, pgsp);
 }
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
new file mode 100644 (file)
index 0000000..92820fb
--- /dev/null
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: MIT */
+#ifndef __NVKM_GSP_PRIV_H__
+#define __NVKM_GSP_PRIV_H__
+#include <subdev/gsp.h>
+enum nvkm_acr_lsf_id;
+
+struct nvkm_gsp_fwif {
+       int version;
+       int (*load)(struct nvkm_gsp *, int ver, const struct nvkm_gsp_fwif *);
+       const struct nvkm_falcon_func *flcn;
+};
+
+int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, int,
+                 struct nvkm_gsp **);
+#endif
index 2b6d36e..728d750 100644 (file)
@@ -6,3 +6,4 @@ nvkm-y += nvkm/subdev/ltc/gm107.o
 nvkm-y += nvkm/subdev/ltc/gm200.o
 nvkm-y += nvkm/subdev/ltc/gp100.o
 nvkm-y += nvkm/subdev/ltc/gp102.o
+nvkm-y += nvkm/subdev/ltc/gp10b.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gp10b.c
new file mode 100644 (file)
index 0000000..c0063c7
--- /dev/null
@@ -0,0 +1,65 @@
+/*
+ * Copyright (c) 2019 NVIDIA Corporation.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Thierry Reding
+ */
+
+#include "priv.h"
+
+static void
+gp10b_ltc_init(struct nvkm_ltc *ltc)
+{
+       struct nvkm_device *device = ltc->subdev.device;
+       struct iommu_fwspec *spec;
+
+       nvkm_wr32(device, 0x17e27c, ltc->ltc_nr);
+       nvkm_wr32(device, 0x17e000, ltc->ltc_nr);
+       nvkm_wr32(device, 0x100800, ltc->ltc_nr);
+
+       spec = dev_iommu_fwspec_get(device->dev);
+       if (spec) {
+               u32 sid = spec->ids[0] & 0xffff;
+
+               /* stream ID */
+               nvkm_wr32(device, 0x160000, sid << 2);
+       }
+}
+
+static const struct nvkm_ltc_func
+gp10b_ltc = {
+       .oneinit = gp100_ltc_oneinit,
+       .init = gp10b_ltc_init,
+       .intr = gp100_ltc_intr,
+       .cbc_clear = gm107_ltc_cbc_clear,
+       .cbc_wait = gm107_ltc_cbc_wait,
+       .zbc = 16,
+       .zbc_clear_color = gm107_ltc_zbc_clear_color,
+       .zbc_clear_depth = gm107_ltc_zbc_clear_depth,
+       .zbc_clear_stencil = gp102_ltc_zbc_clear_stencil,
+       .invalidate = gf100_ltc_invalidate,
+       .flush = gf100_ltc_flush,
+};
+
+int
+gp10b_ltc_new(struct nvkm_device *device, int index, struct nvkm_ltc **pltc)
+{
+       return nvkm_ltc_new_(&gp10b_ltc, device, index, pltc);
+}
index 2fcf18e..eca5a71 100644 (file)
@@ -46,4 +46,6 @@ void gm107_ltc_zbc_clear_depth(struct nvkm_ltc *, int, const u32);
 int gp100_ltc_oneinit(struct nvkm_ltc *);
 void gp100_ltc_init(struct nvkm_ltc *);
 void gp100_ltc_intr(struct nvkm_ltc *);
+
+void gp102_ltc_zbc_clear_stencil(struct nvkm_ltc *, int, const u32);
 #endif
index 2d07524..2cd5ec8 100644 (file)
@@ -30,7 +30,7 @@
  * The value 0xff represents an invalid storage type.
  */
 const u8 *
-gf100_mmu_kind(struct nvkm_mmu *mmu, int *count)
+gf100_mmu_kind(struct nvkm_mmu *mmu, int *count, u8 *invalid)
 {
        static const u8
        kind[256] = {
@@ -69,6 +69,7 @@ gf100_mmu_kind(struct nvkm_mmu *mmu, int *count)
        };
 
        *count = ARRAY_SIZE(kind);
+       *invalid = 0xff;
        return kind;
 }
 
index dbf644e..83990c8 100644 (file)
@@ -27,7 +27,7 @@
 #include <nvif/class.h>
 
 const u8 *
-gm200_mmu_kind(struct nvkm_mmu *mmu, int *count)
+gm200_mmu_kind(struct nvkm_mmu *mmu, int *count, u8 *invalid)
 {
        static const u8
        kind[256] = {
@@ -65,6 +65,7 @@ gm200_mmu_kind(struct nvkm_mmu *mmu, int *count)
                0xfe, 0xfe, 0xfe, 0xfe, 0xff, 0xfd, 0xfe, 0xff
        };
        *count = ARRAY_SIZE(kind);
+       *invalid = 0xff;
        return kind;
 }
 
index db3dfbb..c0083dd 100644 (file)
@@ -27,7 +27,7 @@
 #include <nvif/class.h>
 
 const u8 *
-nv50_mmu_kind(struct nvkm_mmu *base, int *count)
+nv50_mmu_kind(struct nvkm_mmu *base, int *count, u8 *invalid)
 {
        /* 0x01: no bank swizzle
         * 0x02: bank swizzled
@@ -57,6 +57,7 @@ nv50_mmu_kind(struct nvkm_mmu *base, int *count)
                0x01, 0x01, 0x02, 0x02, 0x01, 0x01, 0x7f, 0x7f
        };
        *count = ARRAY_SIZE(kind);
+       *invalid = 0x7f;
        return kind;
 }
 
index 07f2fcd..479b023 100644 (file)
@@ -35,17 +35,17 @@ struct nvkm_mmu_func {
                u32 pd_offset;
        } vmm;
 
-       const u8 *(*kind)(struct nvkm_mmu *, int *count);
+       const u8 *(*kind)(struct nvkm_mmu *, int *count, u8 *invalid);
        bool kind_sys;
 };
 
 extern const struct nvkm_mmu_func nv04_mmu;
 
-const u8 *nv50_mmu_kind(struct nvkm_mmu *, int *count);
+const u8 *nv50_mmu_kind(struct nvkm_mmu *, int *count, u8 *invalid);
 
-const u8 *gf100_mmu_kind(struct nvkm_mmu *, int *count);
+const u8 *gf100_mmu_kind(struct nvkm_mmu *, int *count, u8 *invalid);
 
-const u8 *gm200_mmu_kind(struct nvkm_mmu *, int *);
+const u8 *gm200_mmu_kind(struct nvkm_mmu *, int *, u8 *);
 
 struct nvkm_mmu_pt {
        union {
index c0db0ce..b21e82e 100644 (file)
@@ -1,5 +1,6 @@
 /*
  * Copyright 2018 Red Hat Inc.
+ * Copyright 2019 NVIDIA Corporation.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
 
 #include <nvif/class.h>
 
+const u8 *
+tu102_mmu_kind(struct nvkm_mmu *mmu, int *count, u8 *invalid)
+{
+       static const u8
+       kind[16] = {
+               0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00 */
+               0x06, 0x06, 0x02, 0x01, 0x03, 0x04, 0x05, 0x07,
+       };
+       *count = ARRAY_SIZE(kind);
+       *invalid = 0x07;
+       return kind;
+}
+
 static const struct nvkm_mmu_func
 tu102_mmu = {
        .dma_bits = 47,
        .mmu = {{ -1, -1, NVIF_CLASS_MMU_GF100}},
        .mem = {{ -1,  0, NVIF_CLASS_MEM_GF100}, gf100_mem_new, gf100_mem_map },
        .vmm = {{ -1,  0, NVIF_CLASS_VMM_GP100}, tu102_vmm_new },
-       .kind = gm200_mmu_kind,
+       .kind = tu102_mmu_kind,
        .kind_sys = true,
 };
 
index 353f10f..0e4b894 100644 (file)
@@ -111,15 +111,17 @@ nvkm_ummu_kind(struct nvkm_ummu *ummu, void *argv, u32 argc)
        } *args = argv;
        const u8 *kind = NULL;
        int ret = -ENOSYS, count = 0;
+       u8 kind_inv = 0;
 
        if (mmu->func->kind)
-               kind = mmu->func->kind(mmu, &count);
+               kind = mmu->func->kind(mmu, &count, &kind_inv);
 
        if (!(ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, true))) {
                if (argc != args->v0.count * sizeof(*args->v0.data))
                        return -EINVAL;
                if (args->v0.count > count)
                        return -EINVAL;
+               args->v0.kind_inv = kind_inv;
                memcpy(args->v0.data, kind, args->v0.count);
        } else
                return ret;
@@ -157,9 +159,10 @@ nvkm_ummu_new(struct nvkm_device *device, const struct nvkm_oclass *oclass,
        struct nvkm_mmu *mmu = device->mmu;
        struct nvkm_ummu *ummu;
        int ret = -ENOSYS, kinds = 0;
+       u8 unused = 0;
 
        if (mmu->func->kind)
-               mmu->func->kind(mmu, &kinds);
+               mmu->func->kind(mmu, &kinds, &unused);
 
        if (!(ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false))) {
                args->v0.dmabits = mmu->dma_bits;
index ab6424f..6a2d9eb 100644 (file)
@@ -247,7 +247,7 @@ gf100_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
        } *args = argv;
        struct nvkm_device *device = vmm->mmu->subdev.device;
        struct nvkm_memory *memory = map->memory;
-       u8  kind, priv, ro, vol;
+       u8  kind, kind_inv, priv, ro, vol;
        int kindn, aper, ret = -ENOSYS;
        const u8 *kindm;
 
@@ -274,8 +274,8 @@ gf100_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
        if (WARN_ON(aper < 0))
                return aper;
 
-       kindm = vmm->mmu->func->kind(vmm->mmu, &kindn);
-       if (kind >= kindn || kindm[kind] == 0xff) {
+       kindm = vmm->mmu->func->kind(vmm->mmu, &kindn, &kind_inv);
+       if (kind >= kindn || kindm[kind] == kind_inv) {
                VMM_DEBUG(vmm, "kind %02x", kind);
                return -EINVAL;
        }
index b4f5197..d862875 100644 (file)
@@ -320,7 +320,7 @@ gp100_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
        } *args = argv;
        struct nvkm_device *device = vmm->mmu->subdev.device;
        struct nvkm_memory *memory = map->memory;
-       u8  kind, priv, ro, vol;
+       u8  kind, kind_inv, priv, ro, vol;
        int kindn, aper, ret = -ENOSYS;
        const u8 *kindm;
 
@@ -347,8 +347,8 @@ gp100_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
        if (WARN_ON(aper < 0))
                return aper;
 
-       kindm = vmm->mmu->func->kind(vmm->mmu, &kindn);
-       if (kind >= kindn || kindm[kind] == 0xff) {
+       kindm = vmm->mmu->func->kind(vmm->mmu, &kindn, &kind_inv);
+       if (kind >= kindn || kindm[kind] == kind_inv) {
                VMM_DEBUG(vmm, "kind %02x", kind);
                return -EINVAL;
        }
index c98afe3..2d89e27 100644 (file)
@@ -235,7 +235,7 @@ nv50_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
        struct nvkm_device *device = vmm->mmu->subdev.device;
        struct nvkm_ram *ram = device->fb->ram;
        struct nvkm_memory *memory = map->memory;
-       u8  aper, kind, comp, priv, ro;
+       u8  aper, kind, kind_inv, comp, priv, ro;
        int kindn, ret = -ENOSYS;
        const u8 *kindm;
 
@@ -278,8 +278,8 @@ nv50_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
                return -EINVAL;
        }
 
-       kindm = vmm->mmu->func->kind(vmm->mmu, &kindn);
-       if (kind >= kindn || kindm[kind] == 0x7f) {
+       kindm = vmm->mmu->func->kind(vmm->mmu, &kindn, &kind_inv);
+       if (kind >= kindn || kindm[kind] == kind_inv) {
                VMM_DEBUG(vmm, "kind %02x", kind);
                return -EINVAL;
        }
index e37b6e4..a76c2a7 100644 (file)
@@ -12,3 +12,4 @@ nvkm-y += nvkm/subdev/pmu/gm107.o
 nvkm-y += nvkm/subdev/pmu/gm20b.o
 nvkm-y += nvkm/subdev/pmu/gp100.o
 nvkm-y += nvkm/subdev/pmu/gp102.o
+nvkm-y += nvkm/subdev/pmu/gp10b.o
index ea2e117..a0fe607 100644 (file)
@@ -23,7 +23,7 @@
  */
 #include "priv.h"
 
-#include <core/msgqueue.h>
+#include <core/firmware.h>
 #include <subdev/timer.h>
 
 bool
@@ -85,6 +85,12 @@ nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
                pmu->func->fini(pmu);
 
        flush_work(&pmu->recv.work);
+
+       reinit_completion(&pmu->wpr_ready);
+
+       nvkm_falcon_cmdq_fini(pmu->lpq);
+       nvkm_falcon_cmdq_fini(pmu->hpq);
+       pmu->initmsg_received = false;
        return 0;
 }
 
@@ -133,19 +139,15 @@ nvkm_pmu_init(struct nvkm_subdev *subdev)
        return ret;
 }
 
-static int
-nvkm_pmu_oneinit(struct nvkm_subdev *subdev)
-{
-       struct nvkm_pmu *pmu = nvkm_pmu(subdev);
-       return nvkm_falcon_v1_new(&pmu->subdev, "PMU", 0x10a000, &pmu->falcon);
-}
-
 static void *
 nvkm_pmu_dtor(struct nvkm_subdev *subdev)
 {
        struct nvkm_pmu *pmu = nvkm_pmu(subdev);
-       nvkm_msgqueue_del(&pmu->queue);
-       nvkm_falcon_del(&pmu->falcon);
+       nvkm_falcon_msgq_del(&pmu->msgq);
+       nvkm_falcon_cmdq_del(&pmu->lpq);
+       nvkm_falcon_cmdq_del(&pmu->hpq);
+       nvkm_falcon_qmgr_del(&pmu->qmgr);
+       nvkm_falcon_dtor(&pmu->falcon);
        return nvkm_pmu(subdev);
 }
 
@@ -153,29 +155,50 @@ static const struct nvkm_subdev_func
 nvkm_pmu = {
        .dtor = nvkm_pmu_dtor,
        .preinit = nvkm_pmu_preinit,
-       .oneinit = nvkm_pmu_oneinit,
        .init = nvkm_pmu_init,
        .fini = nvkm_pmu_fini,
        .intr = nvkm_pmu_intr,
 };
 
 int
-nvkm_pmu_ctor(const struct nvkm_pmu_func *func, struct nvkm_device *device,
+nvkm_pmu_ctor(const struct nvkm_pmu_fwif *fwif, struct nvkm_device *device,
              int index, struct nvkm_pmu *pmu)
 {
+       int ret;
+
        nvkm_subdev_ctor(&nvkm_pmu, device, index, &pmu->subdev);
-       pmu->func = func;
+
        INIT_WORK(&pmu->recv.work, nvkm_pmu_recv);
        init_waitqueue_head(&pmu->recv.wait);
+
+       fwif = nvkm_firmware_load(&pmu->subdev, fwif, "Pmu", pmu);
+       if (IS_ERR(fwif))
+               return PTR_ERR(fwif);
+
+       pmu->func = fwif->func;
+
+       ret = nvkm_falcon_ctor(pmu->func->flcn, &pmu->subdev,
+                              nvkm_subdev_name[pmu->subdev.index], 0x10a000,
+                              &pmu->falcon);
+       if (ret)
+               return ret;
+
+       if ((ret = nvkm_falcon_qmgr_new(&pmu->falcon, &pmu->qmgr)) ||
+           (ret = nvkm_falcon_cmdq_new(pmu->qmgr, "hpq", &pmu->hpq)) ||
+           (ret = nvkm_falcon_cmdq_new(pmu->qmgr, "lpq", &pmu->lpq)) ||
+           (ret = nvkm_falcon_msgq_new(pmu->qmgr, "msgq", &pmu->msgq)))
+               return ret;
+
+       init_completion(&pmu->wpr_ready);
        return 0;
 }
 
 int
-nvkm_pmu_new_(const struct nvkm_pmu_func *func, struct nvkm_device *device,
+nvkm_pmu_new_(const struct nvkm_pmu_fwif *fwif, struct nvkm_device *device,
              int index, struct nvkm_pmu **ppmu)
 {
        struct nvkm_pmu *pmu;
        if (!(pmu = *ppmu = kzalloc(sizeof(*pmu), GFP_KERNEL)))
                return -ENOMEM;
-       return nvkm_pmu_ctor(func, device, index, *ppmu);
+       return nvkm_pmu_ctor(fwif, device, index, *ppmu);
 }
index 0b45865..3ecb3d9 100644 (file)
@@ -42,6 +42,7 @@ gf100_pmu_enabled(struct nvkm_pmu *pmu)
 
 static const struct nvkm_pmu_func
 gf100_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gf100_pmu_code,
        .code.size = sizeof(gf100_pmu_code),
        .data.data = gf100_pmu_data,
@@ -56,7 +57,19 @@ gf100_pmu = {
 };
 
 int
+gf100_pmu_nofw(struct nvkm_pmu *pmu, int ver, const struct nvkm_pmu_fwif *fwif)
+{
+       return 0;
+}
+
+static const struct nvkm_pmu_fwif
+gf100_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gf100_pmu },
+       {}
+};
+
+int
 gf100_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gf100_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gf100_pmu_fwif, device, index, ppmu);
 }
index 3dfa79d..8dd0271 100644 (file)
@@ -26,6 +26,7 @@
 
 static const struct nvkm_pmu_func
 gf119_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gf119_pmu_code,
        .code.size = sizeof(gf119_pmu_code),
        .data.data = gf119_pmu_data,
@@ -39,8 +40,14 @@ gf119_pmu = {
        .recv = gt215_pmu_recv,
 };
 
+static const struct nvkm_pmu_fwif
+gf119_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gf119_pmu },
+       {}
+};
+
 int
 gf119_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gf119_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gf119_pmu_fwif, device, index, ppmu);
 }
index 8f7ec10..8b70cc1 100644 (file)
@@ -105,6 +105,7 @@ gk104_pmu_pgob(struct nvkm_pmu *pmu, bool enable)
 
 static const struct nvkm_pmu_func
 gk104_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gk104_pmu_code,
        .code.size = sizeof(gk104_pmu_code),
        .data.data = gk104_pmu_data,
@@ -119,8 +120,14 @@ gk104_pmu = {
        .pgob = gk104_pmu_pgob,
 };
 
+static const struct nvkm_pmu_fwif
+gk104_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gk104_pmu },
+       {}
+};
+
 int
 gk104_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gk104_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gk104_pmu_fwif, device, index, ppmu);
 }
index 345741d..0081f21 100644 (file)
@@ -84,6 +84,7 @@ gk110_pmu_pgob(struct nvkm_pmu *pmu, bool enable)
 
 static const struct nvkm_pmu_func
 gk110_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gk110_pmu_code,
        .code.size = sizeof(gk110_pmu_code),
        .data.data = gk110_pmu_data,
@@ -98,8 +99,14 @@ gk110_pmu = {
        .pgob = gk110_pmu_pgob,
 };
 
+static const struct nvkm_pmu_fwif
+gk110_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gk110_pmu },
+       {}
+};
+
 int
 gk110_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gk110_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gk110_pmu_fwif, device, index, ppmu);
 }
index e4acf78..b227c70 100644 (file)
@@ -26,6 +26,7 @@
 
 static const struct nvkm_pmu_func
 gk208_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gk208_pmu_code,
        .code.size = sizeof(gk208_pmu_code),
        .data.data = gk208_pmu_data,
@@ -40,8 +41,14 @@ gk208_pmu = {
        .pgob = gk110_pmu_pgob,
 };
 
+static const struct nvkm_pmu_fwif
+gk208_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gk208_pmu },
+       {}
+};
+
 int
 gk208_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gk208_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gk208_pmu_fwif, device, index, ppmu);
 }
index 05e8185..26c1adf 100644 (file)
@@ -95,7 +95,7 @@ static void
 gk20a_pmu_dvfs_get_dev_status(struct gk20a_pmu *pmu,
                              struct gk20a_pmu_dvfs_dev_status *status)
 {
-       struct nvkm_falcon *falcon = pmu->base.falcon;
+       struct nvkm_falcon *falcon = &pmu->base.falcon;
 
        status->busy = nvkm_falcon_rd32(falcon, 0x508 + (BUSY_SLOT * 0x10));
        status->total= nvkm_falcon_rd32(falcon, 0x508 + (CLK_SLOT * 0x10));
@@ -104,7 +104,7 @@ gk20a_pmu_dvfs_get_dev_status(struct gk20a_pmu *pmu,
 static void
 gk20a_pmu_dvfs_reset_dev_status(struct gk20a_pmu *pmu)
 {
-       struct nvkm_falcon *falcon = pmu->base.falcon;
+       struct nvkm_falcon *falcon = &pmu->base.falcon;
 
        nvkm_falcon_wr32(falcon, 0x508 + (BUSY_SLOT * 0x10), 0x80000000);
        nvkm_falcon_wr32(falcon, 0x508 + (CLK_SLOT * 0x10), 0x80000000);
@@ -160,7 +160,7 @@ gk20a_pmu_fini(struct nvkm_pmu *pmu)
        struct gk20a_pmu *gpmu = gk20a_pmu(pmu);
        nvkm_timer_alarm(pmu->subdev.device->timer, 0, &gpmu->alarm);
 
-       nvkm_falcon_put(pmu->falcon, &pmu->subdev);
+       nvkm_falcon_put(&pmu->falcon, &pmu->subdev);
 }
 
 static int
@@ -169,7 +169,7 @@ gk20a_pmu_init(struct nvkm_pmu *pmu)
        struct gk20a_pmu *gpmu = gk20a_pmu(pmu);
        struct nvkm_subdev *subdev = &pmu->subdev;
        struct nvkm_device *device = pmu->subdev.device;
-       struct nvkm_falcon *falcon = pmu->falcon;
+       struct nvkm_falcon *falcon = &pmu->falcon;
        int ret;
 
        ret = nvkm_falcon_get(falcon, subdev);
@@ -196,25 +196,34 @@ gk20a_dvfs_data= {
 
 static const struct nvkm_pmu_func
 gk20a_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .enabled = gf100_pmu_enabled,
        .init = gk20a_pmu_init,
        .fini = gk20a_pmu_fini,
        .reset = gf100_pmu_reset,
 };
 
+static const struct nvkm_pmu_fwif
+gk20a_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gk20a_pmu },
+       {}
+};
+
 int
 gk20a_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
        struct gk20a_pmu *pmu;
+       int ret;
 
        if (!(pmu = kzalloc(sizeof(*pmu), GFP_KERNEL)))
                return -ENOMEM;
        *ppmu = &pmu->base;
 
-       nvkm_pmu_ctor(&gk20a_pmu, device, index, &pmu->base);
+       ret = nvkm_pmu_ctor(gk20a_pmu_fwif, device, index, &pmu->base);
+       if (ret)
+               return ret;
 
        pmu->data = &gk20a_dvfs_data;
        nvkm_alarm_init(&pmu->alarm, gk20a_pmu_dvfs_work);
-
        return 0;
 }
index 459df1e..5afb55e 100644 (file)
@@ -28,6 +28,7 @@
 
 static const struct nvkm_pmu_func
 gm107_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gm107_pmu_code,
        .code.size = sizeof(gm107_pmu_code),
        .data.data = gm107_pmu_data,
@@ -41,8 +42,14 @@ gm107_pmu = {
        .recv = gt215_pmu_recv,
 };
 
+static const struct nvkm_pmu_fwif
+gm107_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gm107_pmu },
+       {}
+};
+
 int
 gm107_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gm107_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gm107_pmu_fwif, device, index, ppmu);
 }
index 31c8431..6d5a13e 100644 (file)
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  */
-
-#include <engine/falcon.h>
-#include <core/msgqueue.h>
 #include "priv.h"
 
-static void
+#include <core/memory.h>
+#include <subdev/acr.h>
+
+#include <nvfw/flcn.h>
+#include <nvfw/pmu.h>
+
+static int
+gm20b_pmu_acr_bootstrap_falcon_cb(void *priv, struct nv_falcon_msg *hdr)
+{
+       struct nv_pmu_acr_bootstrap_falcon_msg *msg =
+               container_of(hdr, typeof(*msg), msg.hdr);
+       return msg->falcon_id;
+}
+
+int
+gm20b_pmu_acr_bootstrap_falcon(struct nvkm_falcon *falcon,
+                              enum nvkm_acr_lsf_id id)
+{
+       struct nvkm_pmu *pmu = container_of(falcon, typeof(*pmu), falcon);
+       struct nv_pmu_acr_bootstrap_falcon_cmd cmd = {
+               .cmd.hdr.unit_id = NV_PMU_UNIT_ACR,
+               .cmd.hdr.size = sizeof(cmd),
+               .cmd.cmd_type = NV_PMU_ACR_CMD_BOOTSTRAP_FALCON,
+               .flags = NV_PMU_ACR_BOOTSTRAP_FALCON_FLAGS_RESET_YES,
+               .falcon_id = id,
+       };
+       int ret;
+
+       ret = nvkm_falcon_cmdq_send(pmu->hpq, &cmd.cmd.hdr,
+                                   gm20b_pmu_acr_bootstrap_falcon_cb,
+                                   &pmu->subdev, msecs_to_jiffies(1000));
+       if (ret >= 0 && ret != cmd.falcon_id)
+               ret = -EIO;
+       return ret;
+}
+
+int
+gm20b_pmu_acr_boot(struct nvkm_falcon *falcon)
+{
+       struct nv_pmu_args args = { .secure_mode = true };
+       const u32 addr_args = falcon->data.limit - sizeof(struct nv_pmu_args);
+       nvkm_falcon_load_dmem(falcon, &args, addr_args, sizeof(args), 0);
+       nvkm_falcon_start(falcon);
+       return 0;
+}
+
+void
+gm20b_pmu_acr_bld_patch(struct nvkm_acr *acr, u32 bld, s64 adjust)
+{
+       struct loader_config hdr;
+       u64 addr;
+
+       nvkm_robj(acr->wpr, bld, &hdr, sizeof(hdr));
+       addr = ((u64)hdr.code_dma_base1 << 40 | hdr.code_dma_base << 8);
+       hdr.code_dma_base  = lower_32_bits((addr + adjust) >> 8);
+       hdr.code_dma_base1 = upper_32_bits((addr + adjust) >> 8);
+       addr = ((u64)hdr.data_dma_base1 << 40 | hdr.data_dma_base << 8);
+       hdr.data_dma_base  = lower_32_bits((addr + adjust) >> 8);
+       hdr.data_dma_base1 = upper_32_bits((addr + adjust) >> 8);
+       addr = ((u64)hdr.overlay_dma_base1 << 40 | hdr.overlay_dma_base << 8);
+       hdr.overlay_dma_base  = lower_32_bits((addr + adjust) >> 8);
+       hdr.overlay_dma_base1 = upper_32_bits((addr + adjust) >> 8);
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+
+       loader_config_dump(&acr->subdev, &hdr);
+}
+
+void
+gm20b_pmu_acr_bld_write(struct nvkm_acr *acr, u32 bld,
+                       struct nvkm_acr_lsfw *lsfw)
+{
+       const u64 base = lsfw->offset.img + lsfw->app_start_offset;
+       const u64 code = (base + lsfw->app_resident_code_offset) >> 8;
+       const u64 data = (base + lsfw->app_resident_data_offset) >> 8;
+       const struct loader_config hdr = {
+               .dma_idx = FALCON_DMAIDX_UCODE,
+               .code_dma_base = lower_32_bits(code),
+               .code_size_total = lsfw->app_size,
+               .code_size_to_load = lsfw->app_resident_code_size,
+               .code_entry_point = lsfw->app_imem_entry,
+               .data_dma_base = lower_32_bits(data),
+               .data_size = lsfw->app_resident_data_size,
+               .overlay_dma_base = lower_32_bits(code),
+               .argc = 1,
+               .argv = lsfw->falcon->data.limit - sizeof(struct nv_pmu_args),
+               .code_dma_base1 = upper_32_bits(code),
+               .data_dma_base1 = upper_32_bits(data),
+               .overlay_dma_base1 = upper_32_bits(code),
+       };
+
+       nvkm_wobj(acr->wpr, bld, &hdr, sizeof(hdr));
+}
+
+static const struct nvkm_acr_lsf_func
+gm20b_pmu_acr = {
+       .flags = NVKM_ACR_LSF_DMACTL_REQ_CTX,
+       .bld_size = sizeof(struct loader_config),
+       .bld_write = gm20b_pmu_acr_bld_write,
+       .bld_patch = gm20b_pmu_acr_bld_patch,
+       .boot = gm20b_pmu_acr_boot,
+       .bootstrap_falcon = gm20b_pmu_acr_bootstrap_falcon,
+};
+
+static int
+gm20b_pmu_acr_init_wpr_callback(void *priv, struct nv_falcon_msg *hdr)
+{
+       struct nv_pmu_acr_init_wpr_region_msg *msg =
+               container_of(hdr, typeof(*msg), msg.hdr);
+       struct nvkm_pmu *pmu = priv;
+       struct nvkm_subdev *subdev = &pmu->subdev;
+
+       if (msg->error_code) {
+               nvkm_error(subdev, "ACR WPR init failure: %d\n",
+                          msg->error_code);
+               return -EINVAL;
+       }
+
+       nvkm_debug(subdev, "ACR WPR init complete\n");
+       complete_all(&pmu->wpr_ready);
+       return 0;
+}
+
+static int
+gm20b_pmu_acr_init_wpr(struct nvkm_pmu *pmu)
+{
+       struct nv_pmu_acr_init_wpr_region_cmd cmd = {
+               .cmd.hdr.unit_id = NV_PMU_UNIT_ACR,
+               .cmd.hdr.size = sizeof(cmd),
+               .cmd.cmd_type = NV_PMU_ACR_CMD_INIT_WPR_REGION,
+               .region_id = 1,
+               .wpr_offset = 0,
+       };
+
+       return nvkm_falcon_cmdq_send(pmu->hpq, &cmd.cmd.hdr,
+                                    gm20b_pmu_acr_init_wpr_callback, pmu, 0);
+}
+
+int
+gm20b_pmu_initmsg(struct nvkm_pmu *pmu)
+{
+       struct nv_pmu_init_msg msg;
+       int ret;
+
+       ret = nvkm_falcon_msgq_recv_initmsg(pmu->msgq, &msg, sizeof(msg));
+       if (ret)
+               return ret;
+
+       if (msg.hdr.unit_id != NV_PMU_UNIT_INIT ||
+           msg.msg_type != NV_PMU_INIT_MSG_INIT)
+               return -EINVAL;
+
+       nvkm_falcon_cmdq_init(pmu->hpq, msg.queue_info[0].index,
+                                       msg.queue_info[0].offset,
+                                       msg.queue_info[0].size);
+       nvkm_falcon_cmdq_init(pmu->lpq, msg.queue_info[1].index,
+                                       msg.queue_info[1].offset,
+                                       msg.queue_info[1].size);
+       nvkm_falcon_msgq_init(pmu->msgq, msg.queue_info[4].index,
+                                        msg.queue_info[4].offset,
+                                        msg.queue_info[4].size);
+       return gm20b_pmu_acr_init_wpr(pmu);
+}
+
+void
 gm20b_pmu_recv(struct nvkm_pmu *pmu)
 {
-       if (!pmu->queue) {
-               nvkm_warn(&pmu->subdev,
-                         "recv function called while no firmware set!\n");
-               return;
+       if (!pmu->initmsg_received) {
+               int ret = pmu->func->initmsg(pmu);
+               if (ret) {
+                       nvkm_error(&pmu->subdev,
+                                  "error parsing init message: %d\n", ret);
+                       return;
+               }
+
+               pmu->initmsg_received = true;
        }
 
-       nvkm_msgqueue_recv(pmu->queue);
+       nvkm_falcon_msgq_recv(pmu->msgq);
 }
 
 static const struct nvkm_pmu_func
 gm20b_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .enabled = gf100_pmu_enabled,
        .intr = gt215_pmu_intr,
        .recv = gm20b_pmu_recv,
+       .initmsg = gm20b_pmu_initmsg,
 };
 
+#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+MODULE_FIRMWARE("nvidia/gm20b/pmu/desc.bin");
+MODULE_FIRMWARE("nvidia/gm20b/pmu/image.bin");
+MODULE_FIRMWARE("nvidia/gm20b/pmu/sig.bin");
+#endif
+
 int
-gm20b_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
+gm20b_pmu_load(struct nvkm_pmu *pmu, int ver, const struct nvkm_pmu_fwif *fwif)
 {
-       int ret;
+       return nvkm_acr_lsfw_load_sig_image_desc(&pmu->subdev, &pmu->falcon,
+                                                NVKM_ACR_LSF_PMU, "pmu/",
+                                                ver, fwif->acr);
+}
 
-       ret = nvkm_pmu_new_(&gm20b_pmu, device, index, ppmu);
-       if (ret)
-               return ret;
+static const struct nvkm_pmu_fwif
+gm20b_pmu_fwif[] = {
+       { 0, gm20b_pmu_load, &gm20b_pmu, &gm20b_pmu_acr },
+       {}
+};
 
-       return 0;
+int
+gm20b_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
+{
+       return nvkm_pmu_new_(gm20b_pmu_fwif, device, index, ppmu);
 }
index e210cd6..09e05db 100644 (file)
 
 static const struct nvkm_pmu_func
 gp100_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .enabled = gf100_pmu_enabled,
        .reset = gf100_pmu_reset,
 };
 
+static const struct nvkm_pmu_fwif
+gp100_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gp100_pmu },
+       {}
+};
+
 int
 gp100_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gp100_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gp100_pmu_fwif, device, index, ppmu);
 }
index 98c7a2a..262b8a3 100644 (file)
@@ -39,12 +39,19 @@ gp102_pmu_enabled(struct nvkm_pmu *pmu)
 
 static const struct nvkm_pmu_func
 gp102_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .enabled = gp102_pmu_enabled,
        .reset = gp102_pmu_reset,
 };
 
+static const struct nvkm_pmu_fwif
+gp102_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gp102_pmu },
+       {}
+};
+
 int
 gp102_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gp102_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gp102_pmu_fwif, device, index, ppmu);
 }
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
new file mode 100644 (file)
index 0000000..39c86bc
--- /dev/null
@@ -0,0 +1,96 @@
+/*
+ * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+#include "priv.h"
+
+#include <subdev/acr.h>
+
+#include <nvfw/flcn.h>
+#include <nvfw/pmu.h>
+
+static int
+gp10b_pmu_acr_bootstrap_multiple_falcons_cb(void *priv,
+                                           struct nv_falcon_msg *hdr)
+{
+       struct nv_pmu_acr_bootstrap_multiple_falcons_msg *msg =
+               container_of(hdr, typeof(*msg), msg.hdr);
+       return msg->falcon_mask;
+}
+
+static int
+gp10b_pmu_acr_bootstrap_multiple_falcons(struct nvkm_falcon *falcon, u32 mask)
+{
+       struct nvkm_pmu *pmu = container_of(falcon, typeof(*pmu), falcon);
+       struct nv_pmu_acr_bootstrap_multiple_falcons_cmd cmd = {
+               .cmd.hdr.unit_id = NV_PMU_UNIT_ACR,
+               .cmd.hdr.size = sizeof(cmd),
+               .cmd.cmd_type = NV_PMU_ACR_CMD_BOOTSTRAP_MULTIPLE_FALCONS,
+               .flags = NV_PMU_ACR_BOOTSTRAP_MULTIPLE_FALCONS_FLAGS_RESET_YES,
+               .falcon_mask = mask,
+               .wpr_lo = 0, /*XXX*/
+               .wpr_hi = 0, /*XXX*/
+       };
+       int ret;
+
+       ret = nvkm_falcon_cmdq_send(pmu->hpq, &cmd.cmd.hdr,
+                                   gp10b_pmu_acr_bootstrap_multiple_falcons_cb,
+                                   &pmu->subdev, msecs_to_jiffies(1000));
+       if (ret >= 0 && ret != cmd.falcon_mask)
+               ret = -EIO;
+       return ret;
+}
+
+static const struct nvkm_acr_lsf_func
+gp10b_pmu_acr = {
+       .flags = NVKM_ACR_LSF_DMACTL_REQ_CTX,
+       .bld_size = sizeof(struct loader_config),
+       .bld_write = gm20b_pmu_acr_bld_write,
+       .bld_patch = gm20b_pmu_acr_bld_patch,
+       .boot = gm20b_pmu_acr_boot,
+       .bootstrap_falcon = gm20b_pmu_acr_bootstrap_falcon,
+       .bootstrap_multiple_falcons = gp10b_pmu_acr_bootstrap_multiple_falcons,
+};
+
+static const struct nvkm_pmu_func
+gp10b_pmu = {
+       .flcn = &gt215_pmu_flcn,
+       .enabled = gf100_pmu_enabled,
+       .intr = gt215_pmu_intr,
+       .recv = gm20b_pmu_recv,
+       .initmsg = gm20b_pmu_initmsg,
+};
+
+#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+MODULE_FIRMWARE("nvidia/gp10b/pmu/desc.bin");
+MODULE_FIRMWARE("nvidia/gp10b/pmu/image.bin");
+MODULE_FIRMWARE("nvidia/gp10b/pmu/sig.bin");
+#endif
+
+static const struct nvkm_pmu_fwif
+gp10b_pmu_fwif[] = {
+       { 0, gm20b_pmu_load, &gp10b_pmu, &gp10b_pmu_acr },
+       {}
+};
+
+int
+gp10b_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
+{
+       return nvkm_pmu_new_(gp10b_pmu_fwif, device, index, ppmu);
+}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gt215.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gt215.c
index e04216d..88b9099 100644 (file)
@@ -241,8 +241,27 @@ gt215_pmu_init(struct nvkm_pmu *pmu)
        return 0;
 }
 
+const struct nvkm_falcon_func
+gt215_pmu_flcn = {
+       .debug = 0xc08,
+       .fbif = 0xe00,
+       .load_imem = nvkm_falcon_v1_load_imem,
+       .load_dmem = nvkm_falcon_v1_load_dmem,
+       .read_dmem = nvkm_falcon_v1_read_dmem,
+       .bind_context = nvkm_falcon_v1_bind_context,
+       .wait_for_halt = nvkm_falcon_v1_wait_for_halt,
+       .clear_interrupt = nvkm_falcon_v1_clear_interrupt,
+       .set_start_addr = nvkm_falcon_v1_set_start_addr,
+       .start = nvkm_falcon_v1_start,
+       .enable = nvkm_falcon_v1_enable,
+       .disable = nvkm_falcon_v1_disable,
+       .cmdq = { 0x4a0, 0x4b0, 4 },
+       .msgq = { 0x4c8, 0x4cc, 0 },
+};
+
 static const struct nvkm_pmu_func
 gt215_pmu = {
+       .flcn = &gt215_pmu_flcn,
        .code.data = gt215_pmu_code,
        .code.size = sizeof(gt215_pmu_code),
        .data.data = gt215_pmu_data,
@@ -256,8 +275,14 @@ gt215_pmu = {
        .recv = gt215_pmu_recv,
 };
 
+static const struct nvkm_pmu_fwif
+gt215_pmu_fwif[] = {
+       { -1, gf100_pmu_nofw, &gt215_pmu },
+       {}
+};
+
 int
 gt215_pmu_new(struct nvkm_device *device, int index, struct nvkm_pmu **ppmu)
 {
-       return nvkm_pmu_new_(&gt215_pmu, device, index, ppmu);
+       return nvkm_pmu_new_(gt215_pmu_fwif, device, index, ppmu);
 }
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
index 26d73f9..f470859 100644 (file)
@@ -4,13 +4,12 @@
 #define nvkm_pmu(p) container_of((p), struct nvkm_pmu, subdev)
 #include <subdev/pmu.h>
 #include <subdev/pmu/fuc/os.h>
-
-int nvkm_pmu_ctor(const struct nvkm_pmu_func *, struct nvkm_device *,
-                 int index, struct nvkm_pmu *);
-int nvkm_pmu_new_(const struct nvkm_pmu_func *, struct nvkm_device *,
-                 int index, struct nvkm_pmu **);
+enum nvkm_acr_lsf_id;
+struct nvkm_acr_lsfw;
 
 struct nvkm_pmu_func {
+       const struct nvkm_falcon_func *flcn;
+
        struct {
                u32 *data;
                u32  size;
@@ -29,9 +28,11 @@ struct nvkm_pmu_func {
        int (*send)(struct nvkm_pmu *, u32 reply[2], u32 process,
                    u32 message, u32 data0, u32 data1);
        void (*recv)(struct nvkm_pmu *);
+       int (*initmsg)(struct nvkm_pmu *);
        void (*pgob)(struct nvkm_pmu *, bool);
 };
 
+extern const struct nvkm_falcon_func gt215_pmu_flcn;
 int gt215_pmu_init(struct nvkm_pmu *);
 void gt215_pmu_fini(struct nvkm_pmu *);
 void gt215_pmu_intr(struct nvkm_pmu *);
@@ -42,4 +43,26 @@ bool gf100_pmu_enabled(struct nvkm_pmu *);
 void gf100_pmu_reset(struct nvkm_pmu *);
 
 void gk110_pmu_pgob(struct nvkm_pmu *, bool);
+
+void gm20b_pmu_acr_bld_patch(struct nvkm_acr *, u32, s64);
+void gm20b_pmu_acr_bld_write(struct nvkm_acr *, u32, struct nvkm_acr_lsfw *);
+int gm20b_pmu_acr_boot(struct nvkm_falcon *);
+int gm20b_pmu_acr_bootstrap_falcon(struct nvkm_falcon *, enum nvkm_acr_lsf_id);
+void gm20b_pmu_recv(struct nvkm_pmu *);
+int gm20b_pmu_initmsg(struct nvkm_pmu *);
+
+struct nvkm_pmu_fwif {
+       int version;
+       int (*load)(struct nvkm_pmu *, int ver, const struct nvkm_pmu_fwif *);
+       const struct nvkm_pmu_func *func;
+       const struct nvkm_acr_lsf_func *acr;
+};
+
+int gf100_pmu_nofw(struct nvkm_pmu *, int, const struct nvkm_pmu_fwif *);
+int gm20b_pmu_load(struct nvkm_pmu *, int, const struct nvkm_pmu_fwif *);
+
+int nvkm_pmu_ctor(const struct nvkm_pmu_fwif *, struct nvkm_device *,
+                 int index, struct nvkm_pmu *);
+int nvkm_pmu_new_(const struct nvkm_pmu_fwif *, struct nvkm_device *,
+                 int index, struct nvkm_pmu **);
 #endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/Kbuild
deleted file mode 100644 (file)
index f3dee26..0000000
+++ /dev/null
@@ -1,17 +0,0 @@
-# SPDX-License-Identifier: MIT
-nvkm-y += nvkm/subdev/secboot/base.o
-nvkm-y += nvkm/subdev/secboot/hs_ucode.o
-nvkm-y += nvkm/subdev/secboot/ls_ucode_gr.o
-nvkm-y += nvkm/subdev/secboot/ls_ucode_msgqueue.o
-nvkm-y += nvkm/subdev/secboot/acr.o
-nvkm-y += nvkm/subdev/secboot/acr_r352.o
-nvkm-y += nvkm/subdev/secboot/acr_r361.o
-nvkm-y += nvkm/subdev/secboot/acr_r364.o
-nvkm-y += nvkm/subdev/secboot/acr_r367.o
-nvkm-y += nvkm/subdev/secboot/acr_r370.o
-nvkm-y += nvkm/subdev/secboot/acr_r375.o
-nvkm-y += nvkm/subdev/secboot/gm200.o
-nvkm-y += nvkm/subdev/secboot/gm20b.o
-nvkm-y += nvkm/subdev/secboot/gp102.o
-nvkm-y += nvkm/subdev/secboot/gp108.o
-nvkm-y += nvkm/subdev/secboot/gp10b.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr.c
deleted file mode 100644 (file)
index dc80985..0000000
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr.h"
-
-#include <core/firmware.h>
-
-/**
- * Convenience function to duplicate a firmware file in memory and check that
- * it has the required minimum size.
- */
-void *
-nvkm_acr_load_firmware(const struct nvkm_subdev *subdev, const char *name,
-                      size_t min_size)
-{
-       const struct firmware *fw;
-       void *blob;
-       int ret;
-
-       ret = nvkm_firmware_get(subdev, name, &fw);
-       if (ret)
-               return ERR_PTR(ret);
-       if (fw->size < min_size) {
-               nvkm_error(subdev, "%s is smaller than expected size %zu\n",
-                          name, min_size);
-               nvkm_firmware_put(fw);
-               return ERR_PTR(-EINVAL);
-       }
-       blob = kmemdup(fw->data, fw->size, GFP_KERNEL);
-       nvkm_firmware_put(fw);
-       if (!blob)
-               return ERR_PTR(-ENOMEM);
-
-       return blob;
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr.h
deleted file mode 100644 (file)
index 73a2ac8..0000000
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-#ifndef __NVKM_SECBOOT_ACR_H__
-#define __NVKM_SECBOOT_ACR_H__
-
-#include "priv.h"
-
-struct nvkm_acr;
-
-/**
- * struct nvkm_acr_func - properties and functions specific to an ACR
- *
- * @load: make the ACR ready to run on the given secboot device
- * @reset: reset the specified falcon
- * @start: start the specified falcon (assumed to have been reset)
- */
-struct nvkm_acr_func {
-       void (*dtor)(struct nvkm_acr *);
-       int (*oneinit)(struct nvkm_acr *, struct nvkm_secboot *);
-       int (*fini)(struct nvkm_acr *, struct nvkm_secboot *, bool);
-       int (*load)(struct nvkm_acr *, struct nvkm_falcon *,
-                   struct nvkm_gpuobj *, u64);
-       int (*reset)(struct nvkm_acr *, struct nvkm_secboot *, unsigned long);
-};
-
-/**
- * struct nvkm_acr - instance of an ACR
- *
- * @boot_falcon: ID of the falcon that will perform secure boot
- * @managed_falcons: bitfield of falcons managed by this ACR
- * @optional_falcons: bitfield of falcons we can live without
- */
-struct nvkm_acr {
-       const struct nvkm_acr_func *func;
-       const struct nvkm_subdev *subdev;
-
-       enum nvkm_secboot_falcon boot_falcon;
-       unsigned long managed_falcons;
-       unsigned long optional_falcons;
-};
-
-void *nvkm_acr_load_firmware(const struct nvkm_subdev *, const char *, size_t);
-
-struct nvkm_acr *acr_r352_new(unsigned long);
-struct nvkm_acr *acr_r361_new(unsigned long);
-struct nvkm_acr *acr_r364_new(unsigned long);
-struct nvkm_acr *acr_r367_new(enum nvkm_secboot_falcon, unsigned long);
-struct nvkm_acr *acr_r370_new(enum nvkm_secboot_falcon, unsigned long);
-struct nvkm_acr *acr_r375_new(enum nvkm_secboot_falcon, unsigned long);
-
-#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c
deleted file mode 100644 (file)
index 7af971d..0000000
+++ /dev/null
@@ -1,1241 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr_r352.h"
-#include "hs_ucode.h"
-
-#include <core/gpuobj.h>
-#include <core/firmware.h>
-#include <engine/falcon.h>
-#include <subdev/pmu.h>
-#include <core/msgqueue.h>
-#include <engine/sec2.h>
-
-/**
- * struct acr_r352_flcn_bl_desc - DMEM bootloader descriptor
- * @signature:         16B signature for secure code. 0s if no secure code
- * @ctx_dma:           DMA context to be used by BL while loading code/data
- * @code_dma_base:     256B-aligned Physical FB Address where code is located
- *                     (falcon's $xcbase register)
- * @non_sec_code_off:  offset from code_dma_base where the non-secure code is
- *                      located. The offset must be a multiple of 256 to help perf
- * @non_sec_code_size: the size of the non-secure code part.
- * @sec_code_off:      offset from code_dma_base where the secure code is
- *                      located. The offset must be a multiple of 256 to help perf
- * @sec_code_size:     the size of the secure code part, located at
- *                      sec_code_off from code_dma_base
- * @code_entry_point:  code entry point which will be invoked by BL after
- *                      code is loaded.
- * @data_dma_base:     256B aligned Physical FB Address where data is located.
- *                     (falcon's $xdbase register)
- * @data_size:         size of data block. Should be multiple of 256B
- *
- * Structure used by the bootloader to load the rest of the code. This has
- * to be filled by host and copied into DMEM at offset provided in the
- * hsflcn_bl_desc.bl_desc_dmem_load_off.
- */
-struct acr_r352_flcn_bl_desc {
-       u32 reserved[4];
-       u32 signature[4];
-       u32 ctx_dma;
-       u32 code_dma_base;
-       u32 non_sec_code_off;
-       u32 non_sec_code_size;
-       u32 sec_code_off;
-       u32 sec_code_size;
-       u32 code_entry_point;
-       u32 data_dma_base;
-       u32 data_size;
-       u32 code_dma_base1;
-       u32 data_dma_base1;
-};
-
-/**
- * acr_r352_generate_flcn_bl_desc - generate generic BL descriptor for LS image
- */
-static void
-acr_r352_generate_flcn_bl_desc(const struct nvkm_acr *acr,
-                              const struct ls_ucode_img *img, u64 wpr_addr,
-                              void *_desc)
-{
-       struct acr_r352_flcn_bl_desc *desc = _desc;
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       u64 base, addr_code, addr_data;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       addr_code = (base + pdesc->app_resident_code_offset) >> 8;
-       addr_data = (base + pdesc->app_resident_data_offset) >> 8;
-
-       desc->ctx_dma = FALCON_DMAIDX_UCODE;
-       desc->code_dma_base = lower_32_bits(addr_code);
-       desc->code_dma_base1 = upper_32_bits(addr_code);
-       desc->non_sec_code_off = pdesc->app_resident_code_offset;
-       desc->non_sec_code_size = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = lower_32_bits(addr_data);
-       desc->data_dma_base1 = upper_32_bits(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-}
-
-
-/**
- * struct hsflcn_acr_desc - data section of the HS firmware
- *
- * This header is to be copied at the beginning of DMEM by the HS bootloader.
- *
- * @signature:         signature of ACR ucode
- * @wpr_region_id:     region ID holding the WPR header and its details
- * @wpr_offset:                offset from the WPR region holding the wpr header
- * @regions:           region descriptors
- * @nonwpr_ucode_blob_size:    size of LS blob
- * @nonwpr_ucode_blob_start:   FB location of the LS blob
- */
-struct hsflcn_acr_desc {
-       union {
-               u8 reserved_dmem[0x200];
-               u32 signatures[4];
-       } ucode_reserved_space;
-       u32 wpr_region_id;
-       u32 wpr_offset;
-       u32 mmu_mem_range;
-#define FLCN_ACR_MAX_REGIONS 2
-       struct {
-               u32 no_regions;
-               struct {
-                       u32 start_addr;
-                       u32 end_addr;
-                       u32 region_id;
-                       u32 read_mask;
-                       u32 write_mask;
-                       u32 client_mask;
-               } region_props[FLCN_ACR_MAX_REGIONS];
-       } regions;
-       u32 ucode_blob_size;
-       u64 ucode_blob_base __aligned(8);
-       struct {
-               u32 vpr_enabled;
-               u32 vpr_start;
-               u32 vpr_end;
-               u32 hdcp_policies;
-       } vpr_desc;
-};
-
-
-/*
- * Low-secure blob creation
- */
-
-/**
- * struct acr_r352_lsf_lsb_header - LS firmware header
- * @signature:         signature to verify the firmware against
- * @ucode_off:         offset of the ucode blob in the WPR region. The ucode
- *                      blob contains the bootloader, code and data of the
- *                      LS falcon
- * @ucode_size:                size of the ucode blob, including bootloader
- * @data_size:         size of the ucode blob data
- * @bl_code_size:      size of the bootloader code
- * @bl_imem_off:       offset in imem of the bootloader
- * @bl_data_off:       offset of the bootloader data in WPR region
- * @bl_data_size:      size of the bootloader data
- * @app_code_off:      offset of the app code relative to ucode_off
- * @app_code_size:     size of the app code
- * @app_data_off:      offset of the app data relative to ucode_off
- * @app_data_size:     size of the app data
- * @flags:             flags for the secure bootloader
- *
- * This structure is written into the WPR region for each managed falcon. Each
- * instance is referenced by the lsb_offset member of the corresponding
- * lsf_wpr_header.
- */
-struct acr_r352_lsf_lsb_header {
-       /**
-        * LS falcon signatures
-        * @prd_keys:           signature to use in production mode
-        * @dbg_keys:           signature to use in debug mode
-        * @b_prd_present:      whether the production key is present
-        * @b_dbg_present:      whether the debug key is present
-        * @falcon_id:          ID of the falcon the ucode applies to
-        */
-       struct {
-               u8 prd_keys[2][16];
-               u8 dbg_keys[2][16];
-               u32 b_prd_present;
-               u32 b_dbg_present;
-               u32 falcon_id;
-       } signature;
-       u32 ucode_off;
-       u32 ucode_size;
-       u32 data_size;
-       u32 bl_code_size;
-       u32 bl_imem_off;
-       u32 bl_data_off;
-       u32 bl_data_size;
-       u32 app_code_off;
-       u32 app_code_size;
-       u32 app_data_off;
-       u32 app_data_size;
-       u32 flags;
-};
-
-/**
- * struct acr_r352_lsf_wpr_header - LS blob WPR Header
- * @falcon_id:         LS falcon ID
- * @lsb_offset:                offset of the lsb_lsf_header in the WPR region
- * @bootstrap_owner:   secure falcon responsible for bootstrapping the LS falcon
- * @lazy_bootstrap:    skip bootstrapping by ACR
- * @status:            bootstrapping status
- *
- * An array of these is written at the beginning of the WPR region, one for
- * each managed falcon. The array is terminated by an instance which falcon_id
- * is LSF_FALCON_ID_INVALID.
- */
-struct acr_r352_lsf_wpr_header {
-       u32 falcon_id;
-       u32 lsb_offset;
-       u32 bootstrap_owner;
-       u32 lazy_bootstrap;
-       u32 status;
-#define LSF_IMAGE_STATUS_NONE                          0
-#define LSF_IMAGE_STATUS_COPY                          1
-#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED                2
-#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED                3
-#define LSF_IMAGE_STATUS_VALIDATION_DONE               4
-#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED            5
-#define LSF_IMAGE_STATUS_BOOTSTRAP_READY               6
-};
-
-/**
- * struct ls_ucode_img_r352 - ucode image augmented with r352 headers
- */
-struct ls_ucode_img_r352 {
-       struct ls_ucode_img base;
-
-       const struct acr_r352_lsf_func *func;
-
-       struct acr_r352_lsf_wpr_header wpr_header;
-       struct acr_r352_lsf_lsb_header lsb_header;
-};
-#define ls_ucode_img_r352(i) container_of(i, struct ls_ucode_img_r352, base)
-
-/**
- * ls_ucode_img_load() - create a lsf_ucode_img and load it
- */
-struct ls_ucode_img *
-acr_r352_ls_ucode_img_load(const struct acr_r352 *acr,
-                          const struct nvkm_secboot *sb,
-                          enum nvkm_secboot_falcon falcon_id)
-{
-       const struct nvkm_subdev *subdev = acr->base.subdev;
-       const struct acr_r352_ls_func *func = acr->func->ls_func[falcon_id];
-       struct ls_ucode_img_r352 *img;
-       int ret;
-
-       img = kzalloc(sizeof(*img), GFP_KERNEL);
-       if (!img)
-               return ERR_PTR(-ENOMEM);
-
-       img->base.falcon_id = falcon_id;
-
-       ret = func->load(sb, func->version_max, &img->base);
-       if (ret < 0) {
-               kfree(img->base.ucode_data);
-               kfree(img->base.sig);
-               kfree(img);
-               return ERR_PTR(ret);
-       }
-
-       img->func = func->version[ret];
-
-       /* Check that the signature size matches our expectations... */
-       if (img->base.sig_size != sizeof(img->lsb_header.signature)) {
-               nvkm_error(subdev, "invalid signature size for %s falcon!\n",
-                          nvkm_secboot_falcon_name[falcon_id]);
-               return ERR_PTR(-EINVAL);
-       }
-
-       /* Copy signature to the right place */
-       memcpy(&img->lsb_header.signature, img->base.sig, img->base.sig_size);
-
-       /* not needed? the signature should already have the right value */
-       img->lsb_header.signature.falcon_id = falcon_id;
-
-       return &img->base;
-}
-
-#define LSF_LSB_HEADER_ALIGN 256
-#define LSF_BL_DATA_ALIGN 256
-#define LSF_BL_DATA_SIZE_ALIGN 256
-#define LSF_BL_CODE_SIZE_ALIGN 256
-#define LSF_UCODE_DATA_ALIGN 4096
-
-/**
- * acr_r352_ls_img_fill_headers - fill the WPR and LSB headers of an image
- * @acr:       ACR to use
- * @img:       image to generate for
- * @offset:    offset in the WPR region where this image starts
- *
- * Allocate space in the WPR area from offset and write the WPR and LSB headers
- * accordingly.
- *
- * Return: offset at the end of this image.
- */
-static u32
-acr_r352_ls_img_fill_headers(struct acr_r352 *acr,
-                            struct ls_ucode_img_r352 *img, u32 offset)
-{
-       struct ls_ucode_img *_img = &img->base;
-       struct acr_r352_lsf_wpr_header *whdr = &img->wpr_header;
-       struct acr_r352_lsf_lsb_header *lhdr = &img->lsb_header;
-       struct ls_ucode_img_desc *desc = &_img->ucode_desc;
-       const struct acr_r352_lsf_func *func = img->func;
-
-       /* Fill WPR header */
-       whdr->falcon_id = _img->falcon_id;
-       whdr->bootstrap_owner = acr->base.boot_falcon;
-       whdr->status = LSF_IMAGE_STATUS_COPY;
-
-       /* Skip bootstrapping falcons started by someone other than the ACR */
-       if (acr->lazy_bootstrap & BIT(_img->falcon_id))
-               whdr->lazy_bootstrap = 1;
-
-       /* Align, save off, and include an LSB header size */
-       offset = ALIGN(offset, LSF_LSB_HEADER_ALIGN);
-       whdr->lsb_offset = offset;
-       offset += sizeof(*lhdr);
-
-       /*
-        * Align, save off, and include the original (static) ucode
-        * image size
-        */
-       offset = ALIGN(offset, LSF_UCODE_DATA_ALIGN);
-       _img->ucode_off = lhdr->ucode_off = offset;
-       offset += _img->ucode_size;
-
-       /*
-        * For falcons that use a boot loader (BL), we append a loader
-        * desc structure on the end of the ucode image and consider
-        * this the boot loader data. The host will then copy the loader
-        * desc args to this space within the WPR region (before locking
-        * down) and the HS bin will then copy them to DMEM 0 for the
-        * loader.
-        */
-       lhdr->bl_code_size = ALIGN(desc->bootloader_size,
-                                  LSF_BL_CODE_SIZE_ALIGN);
-       lhdr->ucode_size = ALIGN(desc->app_resident_data_offset,
-                                LSF_BL_CODE_SIZE_ALIGN) + lhdr->bl_code_size;
-       lhdr->data_size = ALIGN(desc->app_size, LSF_BL_CODE_SIZE_ALIGN) +
-                               lhdr->bl_code_size - lhdr->ucode_size;
-       /*
-        * Though the BL is located at 0th offset of the image, the VA
-        * is different to make sure that it doesn't collide with the actual
-        * OS VA range
-        */
-       lhdr->bl_imem_off = desc->bootloader_imem_offset;
-       lhdr->app_code_off = desc->app_start_offset +
-                            desc->app_resident_code_offset;
-       lhdr->app_code_size = desc->app_resident_code_size;
-       lhdr->app_data_off = desc->app_start_offset +
-                            desc->app_resident_data_offset;
-       lhdr->app_data_size = desc->app_resident_data_size;
-
-       lhdr->flags = func->lhdr_flags;
-       if (_img->falcon_id == acr->base.boot_falcon)
-               lhdr->flags |= LSF_FLAG_DMACTL_REQ_CTX;
-
-       /* Align and save off BL descriptor size */
-       lhdr->bl_data_size = ALIGN(func->bl_desc_size, LSF_BL_DATA_SIZE_ALIGN);
-
-       /*
-        * Align, save off, and include the additional BL data
-        */
-       offset = ALIGN(offset, LSF_BL_DATA_ALIGN);
-       lhdr->bl_data_off = offset;
-       offset += lhdr->bl_data_size;
-
-       return offset;
-}
-
-/**
- * acr_r352_ls_fill_headers - fill WPR and LSB headers of all managed images
- */
-int
-acr_r352_ls_fill_headers(struct acr_r352 *acr, struct list_head *imgs)
-{
-       struct ls_ucode_img_r352 *img;
-       struct list_head *l;
-       u32 count = 0;
-       u32 offset;
-
-       /* Count the number of images to manage */
-       list_for_each(l, imgs)
-               count++;
-
-       /*
-        * Start with an array of WPR headers at the base of the WPR.
-        * The expectation here is that the secure falcon will do a single DMA
-        * read of this array and cache it internally so it's ok to pack these.
-        * Also, we add 1 to the falcon count to indicate the end of the array.
-        */
-       offset = sizeof(img->wpr_header) * (count + 1);
-
-       /*
-        * Walk the managed falcons, accounting for the LSB structs
-        * as well as the ucode images.
-        */
-       list_for_each_entry(img, imgs, base.node) {
-               offset = acr_r352_ls_img_fill_headers(acr, img, offset);
-       }
-
-       return offset;
-}
-
-/**
- * acr_r352_ls_write_wpr - write the WPR blob contents
- */
-int
-acr_r352_ls_write_wpr(struct acr_r352 *acr, struct list_head *imgs,
-                     struct nvkm_gpuobj *wpr_blob, u64 wpr_addr)
-{
-       struct ls_ucode_img *_img;
-       u32 pos = 0;
-       u32 max_desc_size = 0;
-       u8 *gdesc;
-
-       /* Figure out how large we need gdesc to be. */
-       list_for_each_entry(_img, imgs, node) {
-               struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img);
-               const struct acr_r352_lsf_func *ls_func = img->func;
-
-               max_desc_size = max(max_desc_size, ls_func->bl_desc_size);
-       }
-
-       gdesc = kmalloc(max_desc_size, GFP_KERNEL);
-       if (!gdesc)
-               return -ENOMEM;
-
-       nvkm_kmap(wpr_blob);
-
-       list_for_each_entry(_img, imgs, node) {
-               struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img);
-               const struct acr_r352_lsf_func *ls_func = img->func;
-
-               nvkm_gpuobj_memcpy_to(wpr_blob, pos, &img->wpr_header,
-                                     sizeof(img->wpr_header));
-
-               nvkm_gpuobj_memcpy_to(wpr_blob, img->wpr_header.lsb_offset,
-                                    &img->lsb_header, sizeof(img->lsb_header));
-
-               /* Generate and write BL descriptor */
-               memset(gdesc, 0, ls_func->bl_desc_size);
-               ls_func->generate_bl_desc(&acr->base, _img, wpr_addr, gdesc);
-
-               nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.bl_data_off,
-                                     gdesc, ls_func->bl_desc_size);
-
-               /* Copy ucode */
-               nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.ucode_off,
-                                     _img->ucode_data, _img->ucode_size);
-
-               pos += sizeof(img->wpr_header);
-       }
-
-       nvkm_wo32(wpr_blob, pos, NVKM_SECBOOT_FALCON_INVALID);
-
-       nvkm_done(wpr_blob);
-
-       kfree(gdesc);
-
-       return 0;
-}
-
-/* Both size and address of WPR need to be 256K-aligned */
-#define WPR_ALIGNMENT  0x40000
-/**
- * acr_r352_prepare_ls_blob() - prepare the LS blob
- *
- * For each securely managed falcon, load the FW, signatures and bootloaders and
- * prepare a ucode blob. Then, compute the offsets in the WPR region for each
- * blob, and finally write the headers and ucode blobs into a GPU object that
- * will be copied into the WPR region by the HS firmware.
- */
-static int
-acr_r352_prepare_ls_blob(struct acr_r352 *acr, struct nvkm_secboot *sb)
-{
-       const struct nvkm_subdev *subdev = acr->base.subdev;
-       struct list_head imgs;
-       struct ls_ucode_img *img, *t;
-       unsigned long managed_falcons = acr->base.managed_falcons;
-       u64 wpr_addr = sb->wpr_addr;
-       u32 wpr_size = sb->wpr_size;
-       int managed_count = 0;
-       u32 image_wpr_size, ls_blob_size;
-       int falcon_id;
-       int ret;
-
-       INIT_LIST_HEAD(&imgs);
-
-       /* Load all LS blobs */
-       for_each_set_bit(falcon_id, &managed_falcons, NVKM_SECBOOT_FALCON_END) {
-               struct ls_ucode_img *img;
-
-               img = acr->func->ls_ucode_img_load(acr, sb, falcon_id);
-               if (IS_ERR(img)) {
-                       if (acr->base.optional_falcons & BIT(falcon_id)) {
-                               managed_falcons &= ~BIT(falcon_id);
-                               nvkm_info(subdev, "skipping %s falcon...\n",
-                                         nvkm_secboot_falcon_name[falcon_id]);
-                               continue;
-                       }
-                       ret = PTR_ERR(img);
-                       goto cleanup;
-               }
-
-               list_add_tail(&img->node, &imgs);
-               managed_count++;
-       }
-
-       /* Commit the actual list of falcons we will manage from now on */
-       acr->base.managed_falcons = managed_falcons;
-
-       /*
-        * If the boot falcon has a firmware, let it manage the bootstrap of other
-        * falcons.
-        */
-       if (acr->func->ls_func[acr->base.boot_falcon] &&
-           (managed_falcons & BIT(acr->base.boot_falcon))) {
-               for_each_set_bit(falcon_id, &managed_falcons,
-                                NVKM_SECBOOT_FALCON_END) {
-                       if (falcon_id == acr->base.boot_falcon)
-                               continue;
-
-                       acr->lazy_bootstrap |= BIT(falcon_id);
-               }
-       }
-
-       /*
-        * Fill the WPR and LSF headers with the right offsets and compute
-        * required WPR size
-        */
-       image_wpr_size = acr->func->ls_fill_headers(acr, &imgs);
-       image_wpr_size = ALIGN(image_wpr_size, WPR_ALIGNMENT);
-
-       ls_blob_size = image_wpr_size;
-
-       /*
-        * If we need a shadow area, allocate twice the size and use the
-        * upper half as WPR
-        */
-       if (wpr_size == 0 && acr->func->shadow_blob)
-               ls_blob_size *= 2;
-
-       /* Allocate GPU object that will contain the WPR region */
-       ret = nvkm_gpuobj_new(subdev->device, ls_blob_size, WPR_ALIGNMENT,
-                             false, NULL, &acr->ls_blob);
-       if (ret)
-               goto cleanup;
-
-       nvkm_debug(subdev, "%d managed LS falcons, WPR size is %d bytes\n",
-                   managed_count, image_wpr_size);
-
-       /* If WPR address and size are not fixed, set them to fit the LS blob */
-       if (wpr_size == 0) {
-               wpr_addr = acr->ls_blob->addr;
-               if (acr->func->shadow_blob)
-                       wpr_addr += acr->ls_blob->size / 2;
-
-               wpr_size = image_wpr_size;
-       /*
-        * But if the WPR region is set by the bootloader, it is illegal for
-        * the HS blob to be larger than this region.
-        */
-       } else if (image_wpr_size > wpr_size) {
-               nvkm_error(subdev, "WPR region too small for FW blob!\n");
-               nvkm_error(subdev, "required: %dB\n", image_wpr_size);
-               nvkm_error(subdev, "available: %dB\n", wpr_size);
-               ret = -ENOSPC;
-               goto cleanup;
-       }
-
-       /* Write LS blob */
-       ret = acr->func->ls_write_wpr(acr, &imgs, acr->ls_blob, wpr_addr);
-       if (ret)
-               nvkm_gpuobj_del(&acr->ls_blob);
-
-cleanup:
-       list_for_each_entry_safe(img, t, &imgs, node) {
-               kfree(img->ucode_data);
-               kfree(img->sig);
-               kfree(img);
-       }
-
-       return ret;
-}
-
-void
-acr_r352_fixup_hs_desc(struct acr_r352 *acr, struct nvkm_secboot *sb,
-                      void *_desc)
-{
-       struct hsflcn_acr_desc *desc = _desc;
-       struct nvkm_gpuobj *ls_blob = acr->ls_blob;
-
-       /* WPR region information if WPR is not fixed */
-       if (sb->wpr_size == 0) {
-               u64 wpr_start = ls_blob->addr;
-               u64 wpr_end = wpr_start + ls_blob->size;
-
-               desc->wpr_region_id = 1;
-               desc->regions.no_regions = 2;
-               desc->regions.region_props[0].start_addr = wpr_start >> 8;
-               desc->regions.region_props[0].end_addr = wpr_end >> 8;
-               desc->regions.region_props[0].region_id = 1;
-               desc->regions.region_props[0].read_mask = 0xf;
-               desc->regions.region_props[0].write_mask = 0xc;
-               desc->regions.region_props[0].client_mask = 0x2;
-       } else {
-               desc->ucode_blob_base = ls_blob->addr;
-               desc->ucode_blob_size = ls_blob->size;
-       }
-}
-
-static void
-acr_r352_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc,
-                            u64 offset)
-{
-       struct acr_r352_flcn_bl_desc *bl_desc = _bl_desc;
-       u64 addr_code, addr_data;
-
-       addr_code = offset >> 8;
-       addr_data = (offset + hdr->data_dma_base) >> 8;
-
-       bl_desc->ctx_dma = FALCON_DMAIDX_VIRT;
-       bl_desc->code_dma_base = lower_32_bits(addr_code);
-       bl_desc->non_sec_code_off = hdr->non_sec_code_off;
-       bl_desc->non_sec_code_size = hdr->non_sec_code_size;
-       bl_desc->sec_code_off = hsf_load_header_app_off(hdr, 0);
-       bl_desc->sec_code_size = hsf_load_header_app_size(hdr, 0);
-       bl_desc->code_entry_point = 0;
-       bl_desc->data_dma_base = lower_32_bits(addr_data);
-       bl_desc->data_size = hdr->data_size;
-}
-
-/**
- * acr_r352_prepare_hs_blob - load and prepare a HS blob and BL descriptor
- *
- * @acr: ACR instance to prepare for
- * @sb: secure boot instance to prepare for
- * @fw: name of the HS firmware to load
- * @blob: pointer to gpuobj that will be allocated to receive the HS FW payload
- * @load_header: pointer to the load header to fill in for this firmware
- * @patch: whether we should patch the HS descriptor (only for HS loaders)
- */
-static int
-acr_r352_prepare_hs_blob(struct acr_r352 *acr, struct nvkm_secboot *sb,
-                        const char *fw, struct nvkm_gpuobj **blob,
-                        struct hsf_load_header *load_header, bool patch)
-{
-       struct nvkm_subdev *subdev = &sb->subdev;
-       void *acr_image;
-       struct fw_bin_header *hsbin_hdr;
-       struct hsf_fw_header *fw_hdr;
-       struct hsf_load_header *load_hdr;
-       void *acr_data;
-       int ret;
-
-       acr_image = hs_ucode_load_blob(subdev, sb->boot_falcon, fw);
-       if (IS_ERR(acr_image))
-               return PTR_ERR(acr_image);
-
-       hsbin_hdr = acr_image;
-       fw_hdr = acr_image + hsbin_hdr->header_offset;
-       load_hdr = acr_image + fw_hdr->hdr_offset;
-       acr_data = acr_image + hsbin_hdr->data_offset;
-
-       /* Patch descriptor with WPR information? */
-       if (patch) {
-               struct hsflcn_acr_desc *desc;
-
-               desc = acr_data + load_hdr->data_dma_base;
-               acr->func->fixup_hs_desc(acr, sb, desc);
-       }
-
-       if (load_hdr->num_apps > ACR_R352_MAX_APPS) {
-               nvkm_error(subdev, "more apps (%d) than supported (%d)!\n",
-                          load_hdr->num_apps, ACR_R352_MAX_APPS);
-               ret = -EINVAL;
-               goto cleanup;
-       }
-       memcpy(load_header, load_hdr, sizeof(*load_header) +
-                         (sizeof(load_hdr->apps[0]) * 2 * load_hdr->num_apps));
-
-       /* Create ACR blob and copy HS data to it */
-       ret = nvkm_gpuobj_new(subdev->device, ALIGN(hsbin_hdr->data_size, 256),
-                             0x1000, false, NULL, blob);
-       if (ret)
-               goto cleanup;
-
-       nvkm_kmap(*blob);
-       nvkm_gpuobj_memcpy_to(*blob, 0, acr_data, hsbin_hdr->data_size);
-       nvkm_done(*blob);
-
-cleanup:
-       kfree(acr_image);
-
-       return ret;
-}
-
-/**
- * acr_r352_load_blobs - load blobs common to all ACR V1 versions.
- *
- * This includes the LS blob, HS ucode loading blob, and HS bootloader.
- *
- * The HS ucode unload blob is only used on dGPU if the WPR region is variable.
- */
-int
-acr_r352_load_blobs(struct acr_r352 *acr, struct nvkm_secboot *sb)
-{
-       struct nvkm_subdev *subdev = &sb->subdev;
-       int ret;
-
-       /* Firmware already loaded? */
-       if (acr->firmware_ok)
-               return 0;
-
-       /* Load and prepare the managed falcon's firmwares */
-       ret = acr_r352_prepare_ls_blob(acr, sb);
-       if (ret)
-               return ret;
-
-       /* Load the HS firmware that will load the LS firmwares */
-       if (!acr->load_blob) {
-               ret = acr_r352_prepare_hs_blob(acr, sb, "acr/ucode_load",
-                                              &acr->load_blob,
-                                              &acr->load_bl_header, true);
-               if (ret)
-                       return ret;
-       }
-
-       /* If the ACR region is dynamically programmed, we need an unload FW */
-       if (sb->wpr_size == 0) {
-               ret = acr_r352_prepare_hs_blob(acr, sb, "acr/ucode_unload",
-                                              &acr->unload_blob,
-                                              &acr->unload_bl_header, false);
-               if (ret)
-                       return ret;
-       }
-
-       /* Load the HS firmware bootloader */
-       if (!acr->hsbl_blob) {
-               acr->hsbl_blob = nvkm_acr_load_firmware(subdev, "acr/bl", 0);
-               if (IS_ERR(acr->hsbl_blob)) {
-                       ret = PTR_ERR(acr->hsbl_blob);
-                       acr->hsbl_blob = NULL;
-                       return ret;
-               }
-
-               if (acr->base.boot_falcon != NVKM_SECBOOT_FALCON_PMU) {
-                       acr->hsbl_unload_blob = nvkm_acr_load_firmware(subdev,
-                                                           "acr/unload_bl", 0);
-                       if (IS_ERR(acr->hsbl_unload_blob)) {
-                               ret = PTR_ERR(acr->hsbl_unload_blob);
-                               acr->hsbl_unload_blob = NULL;
-                               return ret;
-                       }
-               } else {
-                       acr->hsbl_unload_blob = acr->hsbl_blob;
-               }
-       }
-
-       acr->firmware_ok = true;
-       nvkm_debug(&sb->subdev, "LS blob successfully created\n");
-
-       return 0;
-}
-
-/**
- * acr_r352_load() - prepare the HS falcon to run the specified blob, mapped
- * at @offset
- *
- * Returns the start address to use, or a negative error value.
- */
-static int
-acr_r352_load(struct nvkm_acr *_acr, struct nvkm_falcon *falcon,
-             struct nvkm_gpuobj *blob, u64 offset)
-{
-       struct acr_r352 *acr = acr_r352(_acr);
-       const u32 bl_desc_size = acr->func->hs_bl_desc_size;
-       const struct hsf_load_header *load_hdr;
-       struct fw_bin_header *bl_hdr;
-       struct fw_bl_desc *hsbl_desc;
-       void *bl, *blob_data, *hsbl_code, *hsbl_data;
-       u32 code_size;
-       u8 *bl_desc;
-
-       bl_desc = kzalloc(bl_desc_size, GFP_KERNEL);
-       if (!bl_desc)
-               return -ENOMEM;
-
-       /* Find the bootloader descriptor for our blob and copy it */
-       if (blob == acr->load_blob) {
-               load_hdr = &acr->load_bl_header;
-               bl = acr->hsbl_blob;
-       } else if (blob == acr->unload_blob) {
-               load_hdr = &acr->unload_bl_header;
-               bl = acr->hsbl_unload_blob;
-       } else {
-               nvkm_error(_acr->subdev, "invalid secure boot blob!\n");
-               kfree(bl_desc);
-               return -EINVAL;
-       }
-
-       bl_hdr = bl;
-       hsbl_desc = bl + bl_hdr->header_offset;
-       blob_data = bl + bl_hdr->data_offset;
-       hsbl_code = blob_data + hsbl_desc->code_off;
-       hsbl_data = blob_data + hsbl_desc->data_off;
-       code_size = ALIGN(hsbl_desc->code_size, 256);
-
-       /*
-        * Copy HS bootloader data
-        */
-       nvkm_falcon_load_dmem(falcon, hsbl_data, 0x0, hsbl_desc->data_size, 0);
-
-       /* Copy HS bootloader code to end of IMEM */
-       nvkm_falcon_load_imem(falcon, hsbl_code, falcon->code.limit - code_size,
-                             code_size, hsbl_desc->start_tag, 0, false);
-
-       /* Generate the BL header */
-       acr->func->generate_hs_bl_desc(load_hdr, bl_desc, offset);
-
-       /*
-        * Copy HS BL header where the HS descriptor expects it to be
-        */
-       nvkm_falcon_load_dmem(falcon, bl_desc, hsbl_desc->dmem_load_off,
-                             bl_desc_size, 0);
-
-       kfree(bl_desc);
-       return hsbl_desc->start_tag << 8;
-}
-
-static int
-acr_r352_shutdown(struct acr_r352 *acr, struct nvkm_secboot *sb)
-{
-       struct nvkm_subdev *subdev = &sb->subdev;
-       int i;
-
-       /* Run the unload blob to unprotect the WPR region */
-       if (acr->unload_blob && sb->wpr_set) {
-               int ret;
-
-               nvkm_debug(subdev, "running HS unload blob\n");
-               ret = sb->func->run_blob(sb, acr->unload_blob, sb->halt_falcon);
-               if (ret < 0)
-                       return ret;
-               /*
-                * The unload blob may return this error code; it is not an
-                * error, and matches the behavior expected of RM as well.
-                */
-               if (ret && ret != 0x1d) {
-                       nvkm_error(subdev, "HS unload failed, ret 0x%08x\n", ret);
-                       return -EINVAL;
-               }
-               nvkm_debug(subdev, "HS unload blob completed\n");
-       }
-
-       for (i = 0; i < NVKM_SECBOOT_FALCON_END; i++)
-               acr->falcon_state[i] = NON_SECURE;
-
-       sb->wpr_set = false;
-
-       return 0;
-}
-
-/**
- * acr_r352_wpr_is_set() - check that the WPR region has indeed been set by
- * the ACR firmware and matches where it should be
- */
-static bool
-acr_r352_wpr_is_set(const struct acr_r352 *acr, const struct nvkm_secboot *sb)
-{
-       const struct nvkm_subdev *subdev = &sb->subdev;
-       const struct nvkm_device *device = subdev->device;
-       u64 wpr_lo, wpr_hi;
-       u64 wpr_range_lo, wpr_range_hi;
-
-       nvkm_wr32(device, 0x100cd4, 0x2);
-       wpr_lo = (nvkm_rd32(device, 0x100cd4) & ~0xff);
-       wpr_lo <<= 8;
-       nvkm_wr32(device, 0x100cd4, 0x3);
-       wpr_hi = (nvkm_rd32(device, 0x100cd4) & ~0xff);
-       wpr_hi <<= 8;
-
-       if (sb->wpr_size != 0) {
-               wpr_range_lo = sb->wpr_addr;
-               wpr_range_hi = wpr_range_lo + sb->wpr_size;
-       } else {
-               wpr_range_lo = acr->ls_blob->addr;
-               wpr_range_hi = wpr_range_lo + acr->ls_blob->size;
-       }
-
-       return (wpr_lo >= wpr_range_lo && wpr_lo < wpr_range_hi &&
-               wpr_hi > wpr_range_lo && wpr_hi <= wpr_range_hi);
-}
-
-static int
-acr_r352_bootstrap(struct acr_r352 *acr, struct nvkm_secboot *sb)
-{
-       const struct nvkm_subdev *subdev = &sb->subdev;
-       unsigned long managed_falcons = acr->base.managed_falcons;
-       int falcon_id;
-       int ret;
-
-       if (sb->wpr_set)
-               return 0;
-
-       /* Make sure all blobs are ready */
-       ret = acr_r352_load_blobs(acr, sb);
-       if (ret)
-               return ret;
-
-       nvkm_debug(subdev, "running HS load blob\n");
-       ret = sb->func->run_blob(sb, acr->load_blob, sb->boot_falcon);
-       /* clear halt interrupt */
-       nvkm_falcon_clear_interrupt(sb->boot_falcon, 0x10);
-       sb->wpr_set = acr_r352_wpr_is_set(acr, sb);
-       if (ret < 0) {
-               return ret;
-       } else if (ret > 0) {
-               nvkm_error(subdev, "HS load failed, ret 0x%08x\n", ret);
-               return -EINVAL;
-       }
-       nvkm_debug(subdev, "HS load blob completed\n");
-       /* WPR must be set at this point */
-       if (!sb->wpr_set) {
-               nvkm_error(subdev, "ACR blob completed but WPR not set!\n");
-               return -EINVAL;
-       }
-
-       /* Run LS firmwares post_run hooks */
-       for_each_set_bit(falcon_id, &managed_falcons, NVKM_SECBOOT_FALCON_END) {
-               const struct acr_r352_ls_func *func =
-                                                 acr->func->ls_func[falcon_id];
-
-               if (func->post_run) {
-                       ret = func->post_run(&acr->base, sb);
-                       if (ret)
-                               return ret;
-               }
-       }
-
-       return 0;
-}
-
-/**
- * acr_r352_reset_nopmu - dummy reset method when no PMU firmware is loaded
- *
- * Reset is done by re-executing secure boot from scratch, with lazy bootstrap
- * disabled. This has the effect of making all managed falcons ready-to-run.
- */
-static int
-acr_r352_reset_nopmu(struct acr_r352 *acr, struct nvkm_secboot *sb,
-                    unsigned long falcon_mask)
-{
-       int falcon;
-       int ret;
-
-       /*
-        * Perform secure boot each time we are called on FECS. Since only FECS
-        * and GPCCS are managed and started together, this ought to be safe.
-        */
-       if (!(falcon_mask & BIT(NVKM_SECBOOT_FALCON_FECS)))
-               goto end;
-
-       ret = acr_r352_shutdown(acr, sb);
-       if (ret)
-               return ret;
-
-       ret = acr_r352_bootstrap(acr, sb);
-       if (ret)
-               return ret;
-
-end:
-       for_each_set_bit(falcon, &falcon_mask, NVKM_SECBOOT_FALCON_END) {
-               acr->falcon_state[falcon] = RESET;
-       }
-       return 0;
-}
-
-/**
- * acr_r352_reset() - execute secure boot from the prepared state
- *
- * Load the HS bootloader and ask the falcon to run it. This will in turn
- * load the HS firmware and run it, so once the falcon stops all the managed
- * falcons should have their LS firmware loaded and be ready to run.
- */
-static int
-acr_r352_reset(struct nvkm_acr *_acr, struct nvkm_secboot *sb,
-              unsigned long falcon_mask)
-{
-       struct acr_r352 *acr = acr_r352(_acr);
-       struct nvkm_msgqueue *queue;
-       int falcon;
-       bool wpr_already_set = sb->wpr_set;
-       int ret;
-
-       /* Make sure secure boot is performed */
-       ret = acr_r352_bootstrap(acr, sb);
-       if (ret)
-               return ret;
-
-       /* No PMU interface? */
-       if (!nvkm_secboot_is_managed(sb, _acr->boot_falcon)) {
-               /* Redo secure boot entirely if it was already done */
-               if (wpr_already_set)
-                       return acr_r352_reset_nopmu(acr, sb, falcon_mask);
-               /* Else return the result of the initial invocation */
-               else
-                       return ret;
-       }
-
-       switch (_acr->boot_falcon) {
-       case NVKM_SECBOOT_FALCON_PMU:
-               queue = sb->subdev.device->pmu->queue;
-               break;
-       case NVKM_SECBOOT_FALCON_SEC2:
-               queue = sb->subdev.device->sec2->queue;
-               break;
-       default:
-               return -EINVAL;
-       }
-
-       /* Otherwise just ask the LS firmware to reset the falcon */
-       for_each_set_bit(falcon, &falcon_mask, NVKM_SECBOOT_FALCON_END)
-               nvkm_debug(&sb->subdev, "resetting %s falcon\n",
-                          nvkm_secboot_falcon_name[falcon]);
-       ret = nvkm_msgqueue_acr_boot_falcons(queue, falcon_mask);
-       if (ret) {
-               nvkm_error(&sb->subdev, "error during falcon reset: %d\n", ret);
-               return ret;
-       }
-       nvkm_debug(&sb->subdev, "falcon reset done\n");
-
-       return 0;
-}
-
-static int
-acr_r352_fini(struct nvkm_acr *_acr, struct nvkm_secboot *sb, bool suspend)
-{
-       struct acr_r352 *acr = acr_r352(_acr);
-
-       return acr_r352_shutdown(acr, sb);
-}
-
-static void
-acr_r352_dtor(struct nvkm_acr *_acr)
-{
-       struct acr_r352 *acr = acr_r352(_acr);
-
-       nvkm_gpuobj_del(&acr->unload_blob);
-
-       if (_acr->boot_falcon != NVKM_SECBOOT_FALCON_PMU)
-               kfree(acr->hsbl_unload_blob);
-       kfree(acr->hsbl_blob);
-       nvkm_gpuobj_del(&acr->load_blob);
-       nvkm_gpuobj_del(&acr->ls_blob);
-
-       kfree(acr);
-}
-
-static const struct acr_r352_lsf_func
-acr_r352_ls_fecs_func_0 = {
-       .generate_bl_desc = acr_r352_generate_flcn_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc),
-};
-
-const struct acr_r352_ls_func
-acr_r352_ls_fecs_func = {
-       .load = acr_ls_ucode_load_fecs,
-       .version_max = 0,
-       .version = {
-               &acr_r352_ls_fecs_func_0,
-       }
-};
-
-static const struct acr_r352_lsf_func
-acr_r352_ls_gpccs_func_0 = {
-       .generate_bl_desc = acr_r352_generate_flcn_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc),
-       /* GPCCS will be loaded using PRI */
-       .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD,
-};
-
-static const struct acr_r352_ls_func
-acr_r352_ls_gpccs_func = {
-       .load = acr_ls_ucode_load_gpccs,
-       .version_max = 0,
-       .version = {
-               &acr_r352_ls_gpccs_func_0,
-       }
-};
-
-
-/**
- * struct acr_r352_pmu_bl_desc - PMU DMEM bootloader descriptor
- * @dma_idx:           DMA context to be used by BL while loading code/data
- * @code_dma_base:     256B-aligned Physical FB Address where code is located
- * @code_size_total:   total size of the code part in the ucode
- * @code_size_to_load: size of the code part to load in PMU IMEM.
- * @code_entry_point:  entry point in the code.
- * @data_dma_base:     Physical FB address where data part of ucode is located
- * @data_size:         Total size of the data portion.
- * @overlay_dma_base:  Physical FB address for resident code present in ucode
- * @argc:              Total number of args
- * @argv:              offset where args are copied into PMU's DMEM.
- * @code_dma_base1:    upper 16 bits of the code DMA address
- * @data_dma_base1:    upper 16 bits of the data DMA address
- * @overlay_dma_base1: upper 16 bits of the overlay DMA address
- *
- * Structure used by the PMU bootloader to load the rest of the code
- */
-struct acr_r352_pmu_bl_desc {
-       u32 dma_idx;
-       u32 code_dma_base;
-       u32 code_size_total;
-       u32 code_size_to_load;
-       u32 code_entry_point;
-       u32 data_dma_base;
-       u32 data_size;
-       u32 overlay_dma_base;
-       u32 argc;
-       u32 argv;
-       u16 code_dma_base1;
-       u16 data_dma_base1;
-       u16 overlay_dma_base1;
-};
-
-/**
- * acr_r352_generate_pmu_bl_desc() - populate a DMEM BL descriptor for a PMU
- * LS image
- */
-static void
-acr_r352_generate_pmu_bl_desc(const struct nvkm_acr *acr,
-                             const struct ls_ucode_img *img, u64 wpr_addr,
-                             void *_desc)
-{
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       const struct nvkm_pmu *pmu = acr->subdev->device->pmu;
-       struct acr_r352_pmu_bl_desc *desc = _desc;
-       u64 base;
-       u64 addr_code;
-       u64 addr_data;
-       u32 addr_args;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       addr_code = (base + pdesc->app_resident_code_offset) >> 8;
-       addr_data = (base + pdesc->app_resident_data_offset) >> 8;
-       addr_args = pmu->falcon->data.limit;
-       addr_args -= NVKM_MSGQUEUE_CMDLINE_SIZE;
-
-       desc->dma_idx = FALCON_DMAIDX_UCODE;
-       desc->code_dma_base = lower_32_bits(addr_code);
-       desc->code_dma_base1 = upper_32_bits(addr_code);
-       desc->code_size_total = pdesc->app_size;
-       desc->code_size_to_load = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = lower_32_bits(addr_data);
-       desc->data_dma_base1 = upper_32_bits(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-       desc->overlay_dma_base = lower_32_bits(addr_code);
-       desc->overlay_dma_base1 = upper_32_bits(addr_code);
-       desc->argc = 1;
-       desc->argv = addr_args;
-}
-
-static const struct acr_r352_lsf_func
-acr_r352_ls_pmu_func_0 = {
-       .generate_bl_desc = acr_r352_generate_pmu_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r352_pmu_bl_desc),
-};
-
-static const struct acr_r352_ls_func
-acr_r352_ls_pmu_func = {
-       .load = acr_ls_ucode_load_pmu,
-       .post_run = acr_ls_pmu_post_run,
-       .version_max = 0,
-       .version = {
-               &acr_r352_ls_pmu_func_0,
-       }
-};
-
-const struct acr_r352_func
-acr_r352_func = {
-       .fixup_hs_desc = acr_r352_fixup_hs_desc,
-       .generate_hs_bl_desc = acr_r352_generate_hs_bl_desc,
-       .hs_bl_desc_size = sizeof(struct acr_r352_flcn_bl_desc),
-       .ls_ucode_img_load = acr_r352_ls_ucode_img_load,
-       .ls_fill_headers = acr_r352_ls_fill_headers,
-       .ls_write_wpr = acr_r352_ls_write_wpr,
-       .ls_func = {
-               [NVKM_SECBOOT_FALCON_FECS] = &acr_r352_ls_fecs_func,
-               [NVKM_SECBOOT_FALCON_GPCCS] = &acr_r352_ls_gpccs_func,
-               [NVKM_SECBOOT_FALCON_PMU] = &acr_r352_ls_pmu_func,
-       },
-};
-
-static const struct nvkm_acr_func
-acr_r352_base_func = {
-       .dtor = acr_r352_dtor,
-       .fini = acr_r352_fini,
-       .load = acr_r352_load,
-       .reset = acr_r352_reset,
-};
-
-struct nvkm_acr *
-acr_r352_new_(const struct acr_r352_func *func,
-             enum nvkm_secboot_falcon boot_falcon,
-             unsigned long managed_falcons)
-{
-       struct acr_r352 *acr;
-       int i;
-
-       /* Check that all requested falcons are supported */
-       for_each_set_bit(i, &managed_falcons, NVKM_SECBOOT_FALCON_END) {
-               if (!func->ls_func[i])
-                       return ERR_PTR(-ENOTSUPP);
-       }
-
-       acr = kzalloc(sizeof(*acr), GFP_KERNEL);
-       if (!acr)
-               return ERR_PTR(-ENOMEM);
-
-       acr->base.boot_falcon = boot_falcon;
-       acr->base.managed_falcons = managed_falcons;
-       acr->base.func = &acr_r352_base_func;
-       acr->func = func;
-
-       return &acr->base;
-}
-
-struct nvkm_acr *
-acr_r352_new(unsigned long managed_falcons)
-{
-       return acr_r352_new_(&acr_r352_func, NVKM_SECBOOT_FALCON_PMU,
-                            managed_falcons);
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.h
deleted file mode 100644 (file)
index e516cab..0000000
+++ /dev/null
@@ -1,167 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-#ifndef __NVKM_SECBOOT_ACR_R352_H__
-#define __NVKM_SECBOOT_ACR_R352_H__
-
-#include "acr.h"
-#include "ls_ucode.h"
-#include "hs_ucode.h"
-
-struct ls_ucode_img;
-
-#define ACR_R352_MAX_APPS 8
-
-#define LSF_FLAG_LOAD_CODE_AT_0                1
-#define LSF_FLAG_DMACTL_REQ_CTX                4
-#define LSF_FLAG_FORCE_PRIV_LOAD       8
-
-static inline u32
-hsf_load_header_app_off(const struct hsf_load_header *hdr, u32 app)
-{
-       return hdr->apps[app];
-}
-
-static inline u32
-hsf_load_header_app_size(const struct hsf_load_header *hdr, u32 app)
-{
-       return hdr->apps[hdr->num_apps + app];
-}
-
-/**
- * struct acr_r352_lsf_func - manages a specific LS firmware version
- *
- * @generate_bl_desc: function called on a block of bl_desc_size to generate the
- *                   proper bootloader descriptor for this LS firmware
- * @bl_desc_size: size of the bootloader descriptor
- * @lhdr_flags: LS flags
- */
-struct acr_r352_lsf_func {
-       void (*generate_bl_desc)(const struct nvkm_acr *,
-                                const struct ls_ucode_img *, u64, void *);
-       u32 bl_desc_size;
-       u32 lhdr_flags;
-};
-
-/**
- * struct acr_r352_ls_func - manages a single LS falcon
- *
- * @load: load the external firmware into a ls_ucode_img
- * @post_run: hook called right after the ACR is executed
- * @version_max: maximum valid index into @version
- * @version: table of per-version LS firmware function structures
- */
-struct acr_r352_ls_func {
-       int (*load)(const struct nvkm_secboot *, int maxver,
-                   struct ls_ucode_img *);
-       int (*post_run)(const struct nvkm_acr *, const struct nvkm_secboot *);
-       int version_max;
-       const struct acr_r352_lsf_func *version[];
-};
-
-struct acr_r352;
-
-/**
- * struct acr_r352_func - manages nuances between ACR versions
- *
- * @generate_hs_bl_desc: function called on a block of bl_desc_size to generate
- *                      the proper HS bootloader descriptor
- * @hs_bl_desc_size: size of the HS bootloader descriptor
- * @fixup_hs_desc: hook to patch the HS descriptor with WPR region information
- * @shadow_blob: whether a shadow area must be allocated for the LS blob, with
- *              its upper half used as the WPR region
- */
-struct acr_r352_func {
-       void (*generate_hs_bl_desc)(const struct hsf_load_header *, void *,
-                                   u64);
-       void (*fixup_hs_desc)(struct acr_r352 *, struct nvkm_secboot *, void *);
-       u32 hs_bl_desc_size;
-       bool shadow_blob;
-
-       struct ls_ucode_img *(*ls_ucode_img_load)(const struct acr_r352 *,
-                                                 const struct nvkm_secboot *,
-                                                 enum nvkm_secboot_falcon);
-       int (*ls_fill_headers)(struct acr_r352 *, struct list_head *);
-       int (*ls_write_wpr)(struct acr_r352 *, struct list_head *,
-                           struct nvkm_gpuobj *, u64);
-
-       const struct acr_r352_ls_func *ls_func[NVKM_SECBOOT_FALCON_END];
-};
-
-/**
- * struct acr_r352 - ACR data for driver release 352 (and beyond)
- */
-struct acr_r352 {
-       struct nvkm_acr base;
-       const struct acr_r352_func *func;
-
-       /*
-        * HS FW - lock WPR region (dGPU only) and load LS FWs
-        * on Tegra the HS FW copies the LS blob into the fixed WPR instead
-        */
-       struct nvkm_gpuobj *load_blob;
-       struct {
-               struct hsf_load_header load_bl_header;
-               u32 __load_apps[ACR_R352_MAX_APPS * 2];
-       };
-
-       /* HS FW - unlock WPR region (dGPU only) */
-       struct nvkm_gpuobj *unload_blob;
-       struct {
-               struct hsf_load_header unload_bl_header;
-               u32 __unload_apps[ACR_R352_MAX_APPS * 2];
-       };
-
-       /* HS bootloader */
-       void *hsbl_blob;
-
-       /* HS bootloader for unload blob, if using a different falcon */
-       void *hsbl_unload_blob;
-
-       /* LS FWs, to be loaded by the HS ACR */
-       struct nvkm_gpuobj *ls_blob;
-
-       /* Firmware already loaded? */
-       bool firmware_ok;
-
-       /* Falcons to lazy-bootstrap */
-       u32 lazy_bootstrap;
-
-       /* To keep track of the state of all managed falcons */
-       enum {
-               /* In non-secure state, no firmware loaded, no privileges */
-               NON_SECURE = 0,
-               /* In low-secure mode and ready to be started */
-               RESET,
-               /* In low-secure mode and running */
-               RUNNING,
-       } falcon_state[NVKM_SECBOOT_FALCON_END];
-};
-#define acr_r352(acr) container_of(acr, struct acr_r352, base)
-
-struct nvkm_acr *acr_r352_new_(const struct acr_r352_func *,
-                              enum nvkm_secboot_falcon, unsigned long);
-
-struct ls_ucode_img *acr_r352_ls_ucode_img_load(const struct acr_r352 *,
-                                               const struct nvkm_secboot *,
-                                               enum nvkm_secboot_falcon);
-int acr_r352_ls_fill_headers(struct acr_r352 *, struct list_head *);
-int acr_r352_ls_write_wpr(struct acr_r352 *, struct list_head *,
-                         struct nvkm_gpuobj *, u64);
-
-void acr_r352_fixup_hs_desc(struct acr_r352 *, struct nvkm_secboot *, void *);
-
-#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r361.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r361.c
deleted file mode 100644 (file)
index f6b2d20..0000000
+++ /dev/null
@@ -1,229 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr_r361.h"
-
-#include <engine/falcon.h>
-#include <core/msgqueue.h>
-#include <subdev/pmu.h>
-#include <engine/sec2.h>
-
-static void
-acr_r361_generate_flcn_bl_desc(const struct nvkm_acr *acr,
-                              const struct ls_ucode_img *img, u64 wpr_addr,
-                              void *_desc)
-{
-       struct acr_r361_flcn_bl_desc *desc = _desc;
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       u64 base, addr_code, addr_data;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       addr_code = base + pdesc->app_resident_code_offset;
-       addr_data = base + pdesc->app_resident_data_offset;
-
-       desc->ctx_dma = FALCON_DMAIDX_UCODE;
-       desc->code_dma_base = u64_to_flcn64(addr_code);
-       desc->non_sec_code_off = pdesc->app_resident_code_offset;
-       desc->non_sec_code_size = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = u64_to_flcn64(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-}
-
-void
-acr_r361_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc,
-                           u64 offset)
-{
-       struct acr_r361_flcn_bl_desc *bl_desc = _bl_desc;
-
-       bl_desc->ctx_dma = FALCON_DMAIDX_VIRT;
-       bl_desc->code_dma_base = u64_to_flcn64(offset);
-       bl_desc->non_sec_code_off = hdr->non_sec_code_off;
-       bl_desc->non_sec_code_size = hdr->non_sec_code_size;
-       bl_desc->sec_code_off = hsf_load_header_app_off(hdr, 0);
-       bl_desc->sec_code_size = hsf_load_header_app_size(hdr, 0);
-       bl_desc->code_entry_point = 0;
-       bl_desc->data_dma_base = u64_to_flcn64(offset + hdr->data_dma_base);
-       bl_desc->data_size = hdr->data_size;
-}
-
-static const struct acr_r352_lsf_func
-acr_r361_ls_fecs_func_0 = {
-       .generate_bl_desc = acr_r361_generate_flcn_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc),
-};
-
-const struct acr_r352_ls_func
-acr_r361_ls_fecs_func = {
-       .load = acr_ls_ucode_load_fecs,
-       .version_max = 0,
-       .version = {
-               &acr_r361_ls_fecs_func_0,
-       }
-};
-
-static const struct acr_r352_lsf_func
-acr_r361_ls_gpccs_func_0 = {
-       .generate_bl_desc = acr_r361_generate_flcn_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc),
-       /* GPCCS will be loaded using PRI */
-       .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD,
-};
-
-const struct acr_r352_ls_func
-acr_r361_ls_gpccs_func = {
-       .load = acr_ls_ucode_load_gpccs,
-       .version_max = 0,
-       .version = {
-               &acr_r361_ls_gpccs_func_0,
-       }
-};
-
-struct acr_r361_pmu_bl_desc {
-       u32 reserved;
-       u32 dma_idx;
-       struct flcn_u64 code_dma_base;
-       u32 total_code_size;
-       u32 code_size_to_load;
-       u32 code_entry_point;
-       struct flcn_u64 data_dma_base;
-       u32 data_size;
-       struct flcn_u64 overlay_dma_base;
-       u32 argc;
-       u32 argv;
-};
-
-static void
-acr_r361_generate_pmu_bl_desc(const struct nvkm_acr *acr,
-                             const struct ls_ucode_img *img, u64 wpr_addr,
-                             void *_desc)
-{
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       const struct nvkm_pmu *pmu = acr->subdev->device->pmu;
-       struct acr_r361_pmu_bl_desc *desc = _desc;
-       u64 base, addr_code, addr_data;
-       u32 addr_args;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       addr_code = base + pdesc->app_resident_code_offset;
-       addr_data = base + pdesc->app_resident_data_offset;
-       addr_args = pmu->falcon->data.limit;
-       addr_args -= NVKM_MSGQUEUE_CMDLINE_SIZE;
-
-       desc->dma_idx = FALCON_DMAIDX_UCODE;
-       desc->code_dma_base = u64_to_flcn64(addr_code);
-       desc->total_code_size = pdesc->app_size;
-       desc->code_size_to_load = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = u64_to_flcn64(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-       desc->overlay_dma_base = u64_to_flcn64(addr_code);
-       desc->argc = 1;
-       desc->argv = addr_args;
-}
-
-static const struct acr_r352_lsf_func
-acr_r361_ls_pmu_func_0 = {
-       .generate_bl_desc = acr_r361_generate_pmu_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r361_pmu_bl_desc),
-};
-
-const struct acr_r352_ls_func
-acr_r361_ls_pmu_func = {
-       .load = acr_ls_ucode_load_pmu,
-       .post_run = acr_ls_pmu_post_run,
-       .version_max = 0,
-       .version = {
-               &acr_r361_ls_pmu_func_0,
-       }
-};
-
-static void
-acr_r361_generate_sec2_bl_desc(const struct nvkm_acr *acr,
-                              const struct ls_ucode_img *img, u64 wpr_addr,
-                              void *_desc)
-{
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       const struct nvkm_sec2 *sec = acr->subdev->device->sec2;
-       struct acr_r361_pmu_bl_desc *desc = _desc;
-       u64 base, addr_code, addr_data;
-       u32 addr_args;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       /* For some reason we should not add app_resident_code_offset here */
-       addr_code = base;
-       addr_data = base + pdesc->app_resident_data_offset;
-       addr_args = sec->falcon->data.limit;
-       addr_args -= NVKM_MSGQUEUE_CMDLINE_SIZE;
-
-       desc->dma_idx = FALCON_SEC2_DMAIDX_UCODE;
-       desc->code_dma_base = u64_to_flcn64(addr_code);
-       desc->total_code_size = pdesc->app_size;
-       desc->code_size_to_load = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = u64_to_flcn64(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-       desc->overlay_dma_base = u64_to_flcn64(addr_code);
-       desc->argc = 1;
-       /* args are stored at the beginning of EMEM */
-       desc->argv = 0x01000000;
-}
-
-const struct acr_r352_lsf_func
-acr_r361_ls_sec2_func_0 = {
-       .generate_bl_desc = acr_r361_generate_sec2_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r361_pmu_bl_desc),
-};
-
-static const struct acr_r352_ls_func
-acr_r361_ls_sec2_func = {
-       .load = acr_ls_ucode_load_sec2,
-       .post_run = acr_ls_sec2_post_run,
-       .version_max = 0,
-       .version = {
-               &acr_r361_ls_sec2_func_0,
-       }
-};
-
-
-const struct acr_r352_func
-acr_r361_func = {
-       .fixup_hs_desc = acr_r352_fixup_hs_desc,
-       .generate_hs_bl_desc = acr_r361_generate_hs_bl_desc,
-       .hs_bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc),
-       .ls_ucode_img_load = acr_r352_ls_ucode_img_load,
-       .ls_fill_headers = acr_r352_ls_fill_headers,
-       .ls_write_wpr = acr_r352_ls_write_wpr,
-       .ls_func = {
-               [NVKM_SECBOOT_FALCON_FECS] = &acr_r361_ls_fecs_func,
-               [NVKM_SECBOOT_FALCON_GPCCS] = &acr_r361_ls_gpccs_func,
-               [NVKM_SECBOOT_FALCON_PMU] = &acr_r361_ls_pmu_func,
-               [NVKM_SECBOOT_FALCON_SEC2] = &acr_r361_ls_sec2_func,
-       },
-};
-
-struct nvkm_acr *
-acr_r361_new(unsigned long managed_falcons)
-{
-       return acr_r352_new_(&acr_r361_func, NVKM_SECBOOT_FALCON_PMU,
-                            managed_falcons);
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r361.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r361.h
deleted file mode 100644 (file)
index 38dec93..0000000
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_SECBOOT_ACR_R361_H__
-#define __NVKM_SECBOOT_ACR_R361_H__
-
-#include "acr_r352.h"
-
-/**
- * struct acr_r361_flcn_bl_desc - DMEM bootloader descriptor
- * @signature:         16B signature for secure code. 0s if no secure code
- * @ctx_dma:           DMA context to be used by BL while loading code/data
- * @code_dma_base:     256B-aligned Physical FB Address where code is located
- *                     (falcon's $xcbase register)
- * @non_sec_code_off:  offset from code_dma_base where the non-secure code is
- *                      located. The offset must be a multiple of 256 to help perf
- * @non_sec_code_size: the size of the non-secure code part.
- * @sec_code_off:      offset from code_dma_base where the secure code is
- *                      located. The offset must be a multiple of 256 to help perf
- * @sec_code_size:     the size of the secure code part.
- * @code_entry_point:  code entry point which will be invoked by BL after
- *                      code is loaded.
- * @data_dma_base:     256B-aligned Physical FB Address where data is located.
- *                     (falcon's $xdbase register)
- * @data_size:         size of data block. Should be a multiple of 256B
- *
- * Structure used by the bootloader to load the rest of the code. This has
- * to be filled by host and copied into DMEM at offset provided in the
- * hsflcn_bl_desc.bl_desc_dmem_load_off.
- */
-struct acr_r361_flcn_bl_desc {
-       u32 reserved[4];
-       u32 signature[4];
-       u32 ctx_dma;
-       struct flcn_u64 code_dma_base;
-       u32 non_sec_code_off;
-       u32 non_sec_code_size;
-       u32 sec_code_off;
-       u32 sec_code_size;
-       u32 code_entry_point;
-       struct flcn_u64 data_dma_base;
-       u32 data_size;
-};
-
-void acr_r361_generate_hs_bl_desc(const struct hsf_load_header *, void *, u64);
-
-extern const struct acr_r352_ls_func acr_r361_ls_fecs_func;
-extern const struct acr_r352_ls_func acr_r361_ls_gpccs_func;
-extern const struct acr_r352_ls_func acr_r361_ls_pmu_func;
-extern const struct acr_r352_lsf_func acr_r361_ls_sec2_func_0;
-#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r364.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r364.c
deleted file mode 100644 (file)
index 30cf041..0000000
+++ /dev/null
@@ -1,117 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr_r361.h"
-
-#include <core/gpuobj.h>
-
-/*
- * r364 ACR: hsflcn_desc structure has changed to introduce the shadow_mem
- * parameter.
- */
-
-struct acr_r364_hsflcn_desc {
-       union {
-               u8 reserved_dmem[0x200];
-               u32 signatures[4];
-       } ucode_reserved_space;
-       u32 wpr_region_id;
-       u32 wpr_offset;
-       u32 mmu_memory_range;
-       struct {
-               u32 no_regions;
-               struct {
-                       u32 start_addr;
-                       u32 end_addr;
-                       u32 region_id;
-                       u32 read_mask;
-                       u32 write_mask;
-                       u32 client_mask;
-                       u32 shadow_mem_start_addr;
-               } region_props[2];
-       } regions;
-       u32 ucode_blob_size;
-       u64 ucode_blob_base __aligned(8);
-       struct {
-               u32 vpr_enabled;
-               u32 vpr_start;
-               u32 vpr_end;
-               u32 hdcp_policies;
-       } vpr_desc;
-};
-
-static void
-acr_r364_fixup_hs_desc(struct acr_r352 *acr, struct nvkm_secboot *sb,
-                      void *_desc)
-{
-       struct acr_r364_hsflcn_desc *desc = _desc;
-       struct nvkm_gpuobj *ls_blob = acr->ls_blob;
-
-       /* WPR region information if WPR is not fixed */
-       if (sb->wpr_size == 0) {
-               u64 wpr_start = ls_blob->addr;
-               u64 wpr_end = ls_blob->addr + ls_blob->size;
-
-               if (acr->func->shadow_blob)
-                       wpr_start += ls_blob->size / 2;
-
-               desc->wpr_region_id = 1;
-               desc->regions.no_regions = 2;
-               desc->regions.region_props[0].start_addr = wpr_start >> 8;
-               desc->regions.region_props[0].end_addr = wpr_end >> 8;
-               desc->regions.region_props[0].region_id = 1;
-               desc->regions.region_props[0].read_mask = 0xf;
-               desc->regions.region_props[0].write_mask = 0xc;
-               desc->regions.region_props[0].client_mask = 0x2;
-               if (acr->func->shadow_blob)
-                       desc->regions.region_props[0].shadow_mem_start_addr =
-                                                            ls_blob->addr >> 8;
-               else
-                       desc->regions.region_props[0].shadow_mem_start_addr = 0;
-       } else {
-               desc->ucode_blob_base = ls_blob->addr;
-               desc->ucode_blob_size = ls_blob->size;
-       }
-}
-
-const struct acr_r352_func
-acr_r364_func = {
-       .fixup_hs_desc = acr_r364_fixup_hs_desc,
-       .generate_hs_bl_desc = acr_r361_generate_hs_bl_desc,
-       .hs_bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc),
-       .ls_ucode_img_load = acr_r352_ls_ucode_img_load,
-       .ls_fill_headers = acr_r352_ls_fill_headers,
-       .ls_write_wpr = acr_r352_ls_write_wpr,
-       .ls_func = {
-               [NVKM_SECBOOT_FALCON_FECS] = &acr_r361_ls_fecs_func,
-               [NVKM_SECBOOT_FALCON_GPCCS] = &acr_r361_ls_gpccs_func,
-               [NVKM_SECBOOT_FALCON_PMU] = &acr_r361_ls_pmu_func,
-       },
-};
-
-
-struct nvkm_acr *
-acr_r364_new(unsigned long managed_falcons)
-{
-       return acr_r352_new_(&acr_r364_func, NVKM_SECBOOT_FALCON_PMU,
-                            managed_falcons);
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r367.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r367.c
deleted file mode 100644 (file)
index 472ced2..0000000
+++ /dev/null
@@ -1,418 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr_r367.h"
-#include "acr_r361.h"
-#include "acr_r370.h"
-
-#include <core/gpuobj.h>
-
-/*
- * r367 ACR: new LS signature format requires a rewrite of LS firmware and
- * blob creation functions. Also the hsflcn_desc layout has changed slightly.
- */
-
-#define LSF_LSB_DEPMAP_SIZE 11
-
-/**
- * struct acr_r367_lsf_lsb_header - LS firmware header
- *
- * See also struct acr_r352_lsf_lsb_header for documentation.
- */
-struct acr_r367_lsf_lsb_header {
-       /**
-        * LS falcon signatures
-        * @prd_keys:           signature to use in production mode
-        * @dbg_keys:           signature to use in debug mode
-        * @b_prd_present:      whether the production key is present
-        * @b_dbg_present:      whether the debug key is present
-        * @falcon_id:          ID of the falcon the ucode applies to
-        */
-       struct {
-               u8 prd_keys[2][16];
-               u8 dbg_keys[2][16];
-               u32 b_prd_present;
-               u32 b_dbg_present;
-               u32 falcon_id;
-               u32 supports_versioning;
-               u32 version;
-               u32 depmap_count;
-               u8 depmap[LSF_LSB_DEPMAP_SIZE * 2 * 4];
-               u8 kdf[16];
-       } signature;
-       u32 ucode_off;
-       u32 ucode_size;
-       u32 data_size;
-       u32 bl_code_size;
-       u32 bl_imem_off;
-       u32 bl_data_off;
-       u32 bl_data_size;
-       u32 app_code_off;
-       u32 app_code_size;
-       u32 app_data_off;
-       u32 app_data_size;
-       u32 flags;
-};
-
-/**
- * struct acr_r367_lsf_wpr_header - LS blob WPR Header
- *
- * See also struct acr_r352_lsf_wpr_header for documentation.
- */
-struct acr_r367_lsf_wpr_header {
-       u32 falcon_id;
-       u32 lsb_offset;
-       u32 bootstrap_owner;
-       u32 lazy_bootstrap;
-       u32 bin_version;
-       u32 status;
-#define LSF_IMAGE_STATUS_NONE                          0
-#define LSF_IMAGE_STATUS_COPY                          1
-#define LSF_IMAGE_STATUS_VALIDATION_CODE_FAILED                2
-#define LSF_IMAGE_STATUS_VALIDATION_DATA_FAILED                3
-#define LSF_IMAGE_STATUS_VALIDATION_DONE               4
-#define LSF_IMAGE_STATUS_VALIDATION_SKIPPED            5
-#define LSF_IMAGE_STATUS_BOOTSTRAP_READY               6
-#define LSF_IMAGE_STATUS_REVOCATION_CHECK_FAILED               7
-};
-
-/**
- * struct ls_ucode_img_r367 - ucode image augmented with r367 headers
- */
-struct ls_ucode_img_r367 {
-       struct ls_ucode_img base;
-
-       const struct acr_r352_lsf_func *func;
-
-       struct acr_r367_lsf_wpr_header wpr_header;
-       struct acr_r367_lsf_lsb_header lsb_header;
-};
-#define ls_ucode_img_r367(i) container_of(i, struct ls_ucode_img_r367, base)
-
-struct ls_ucode_img *
-acr_r367_ls_ucode_img_load(const struct acr_r352 *acr,
-                          const struct nvkm_secboot *sb,
-                          enum nvkm_secboot_falcon falcon_id)
-{
-       const struct nvkm_subdev *subdev = acr->base.subdev;
-       const struct acr_r352_ls_func *func = acr->func->ls_func[falcon_id];
-       struct ls_ucode_img_r367 *img;
-       int ret;
-
-       img = kzalloc(sizeof(*img), GFP_KERNEL);
-       if (!img)
-               return ERR_PTR(-ENOMEM);
-
-       img->base.falcon_id = falcon_id;
-
-       ret = func->load(sb, func->version_max, &img->base);
-       if (ret < 0) {
-               kfree(img->base.ucode_data);
-               kfree(img->base.sig);
-               kfree(img);
-               return ERR_PTR(ret);
-       }
-
-       img->func = func->version[ret];
-
-       /* Check that the signature size matches our expectations... */
-       if (img->base.sig_size != sizeof(img->lsb_header.signature)) {
-               nvkm_error(subdev, "invalid signature size for %s falcon!\n",
-                          nvkm_secboot_falcon_name[falcon_id]);
-               return ERR_PTR(-EINVAL);
-       }
-
-       /* Copy signature to the right place */
-       memcpy(&img->lsb_header.signature, img->base.sig, img->base.sig_size);
-
-       /* not needed? the signature should already have the right value */
-       img->lsb_header.signature.falcon_id = falcon_id;
-
-       return &img->base;
-}
-
-#define LSF_LSB_HEADER_ALIGN 256
-#define LSF_BL_DATA_ALIGN 256
-#define LSF_BL_DATA_SIZE_ALIGN 256
-#define LSF_BL_CODE_SIZE_ALIGN 256
-#define LSF_UCODE_DATA_ALIGN 4096
-
-static u32
-acr_r367_ls_img_fill_headers(struct acr_r352 *acr,
-                            struct ls_ucode_img_r367 *img, u32 offset)
-{
-       struct ls_ucode_img *_img = &img->base;
-       struct acr_r367_lsf_wpr_header *whdr = &img->wpr_header;
-       struct acr_r367_lsf_lsb_header *lhdr = &img->lsb_header;
-       struct ls_ucode_img_desc *desc = &_img->ucode_desc;
-       const struct acr_r352_lsf_func *func = img->func;
-
-       /* Fill WPR header */
-       whdr->falcon_id = _img->falcon_id;
-       whdr->bootstrap_owner = acr->base.boot_falcon;
-       whdr->bin_version = lhdr->signature.version;
-       whdr->status = LSF_IMAGE_STATUS_COPY;
-
-       /* Skip bootstrapping falcons started by someone other than the ACR */
-       if (acr->lazy_bootstrap & BIT(_img->falcon_id))
-               whdr->lazy_bootstrap = 1;
-
-       /* Align, save off, and include the LSB header size */
-       offset = ALIGN(offset, LSF_LSB_HEADER_ALIGN);
-       whdr->lsb_offset = offset;
-       offset += sizeof(*lhdr);
-
-       /*
-        * Align, save off, and include the original (static) ucode
-        * image size
-        */
-       offset = ALIGN(offset, LSF_UCODE_DATA_ALIGN);
-       _img->ucode_off = lhdr->ucode_off = offset;
-       offset += _img->ucode_size;
-
-       /*
-        * For falcons that use a boot loader (BL), we append a loader
-        * desc structure on the end of the ucode image and consider
-        * this the boot loader data. The host will then copy the loader
-        * desc args to this space within the WPR region (before locking
-        * down) and the HS bin will then copy them to DMEM 0 for the
-        * loader.
-        */
-       lhdr->bl_code_size = ALIGN(desc->bootloader_size,
-                                  LSF_BL_CODE_SIZE_ALIGN);
-       lhdr->ucode_size = ALIGN(desc->app_resident_data_offset,
-                                LSF_BL_CODE_SIZE_ALIGN) + lhdr->bl_code_size;
-       lhdr->data_size = ALIGN(desc->app_size, LSF_BL_CODE_SIZE_ALIGN) +
-                               lhdr->bl_code_size - lhdr->ucode_size;
-       /*
-        * Though the BL is located at offset 0 of the image, its VA is
-        * different to make sure that it doesn't collide with the actual
-        * OS VA range.
-        */
-       lhdr->bl_imem_off = desc->bootloader_imem_offset;
-       lhdr->app_code_off = desc->app_start_offset +
-                            desc->app_resident_code_offset;
-       lhdr->app_code_size = desc->app_resident_code_size;
-       lhdr->app_data_off = desc->app_start_offset +
-                            desc->app_resident_data_offset;
-       lhdr->app_data_size = desc->app_resident_data_size;
-
-       lhdr->flags = func->lhdr_flags;
-       if (_img->falcon_id == acr->base.boot_falcon)
-               lhdr->flags |= LSF_FLAG_DMACTL_REQ_CTX;
-
-       /* Align and save off BL descriptor size */
-       lhdr->bl_data_size = ALIGN(func->bl_desc_size, LSF_BL_DATA_SIZE_ALIGN);
-
-       /*
-        * Align, save off, and include the additional BL data
-        */
-       offset = ALIGN(offset, LSF_BL_DATA_ALIGN);
-       lhdr->bl_data_off = offset;
-       offset += lhdr->bl_data_size;
-
-       return offset;
-}
-
-int
-acr_r367_ls_fill_headers(struct acr_r352 *acr, struct list_head *imgs)
-{
-       struct ls_ucode_img_r367 *img;
-       struct list_head *l;
-       u32 count = 0;
-       u32 offset;
-
-       /* Count the number of images to manage */
-       list_for_each(l, imgs)
-               count++;
-
-       /*
-        * Start with an array of WPR headers at the base of the WPR.
-        * The expectation here is that the secure falcon will do a single DMA
-        * read of this array and cache it internally so it's ok to pack these.
-        * Also, we add 1 to the falcon count to indicate the end of the array.
-        */
-       offset = sizeof(img->wpr_header) * (count + 1);
-
-       /*
-        * Walk the managed falcons, accounting for the LSB structs
-        * as well as the ucode images.
-        */
-       list_for_each_entry(img, imgs, base.node) {
-               offset = acr_r367_ls_img_fill_headers(acr, img, offset);
-       }
-
-       return offset;
-}
-
-int
-acr_r367_ls_write_wpr(struct acr_r352 *acr, struct list_head *imgs,
-                     struct nvkm_gpuobj *wpr_blob, u64 wpr_addr)
-{
-       struct ls_ucode_img *_img;
-       u32 pos = 0;
-       u32 max_desc_size = 0;
-       u8 *gdesc;
-
-       list_for_each_entry(_img, imgs, node) {
-               struct ls_ucode_img_r367 *img = ls_ucode_img_r367(_img);
-               const struct acr_r352_lsf_func *ls_func = img->func;
-
-               max_desc_size = max(max_desc_size, ls_func->bl_desc_size);
-       }
-
-       gdesc = kmalloc(max_desc_size, GFP_KERNEL);
-       if (!gdesc)
-               return -ENOMEM;
-
-       nvkm_kmap(wpr_blob);
-
-       list_for_each_entry(_img, imgs, node) {
-               struct ls_ucode_img_r367 *img = ls_ucode_img_r367(_img);
-               const struct acr_r352_lsf_func *ls_func = img->func;
-
-               nvkm_gpuobj_memcpy_to(wpr_blob, pos, &img->wpr_header,
-                                     sizeof(img->wpr_header));
-
-               nvkm_gpuobj_memcpy_to(wpr_blob, img->wpr_header.lsb_offset,
-                                    &img->lsb_header, sizeof(img->lsb_header));
-
-               /* Generate and write BL descriptor */
-               memset(gdesc, 0, ls_func->bl_desc_size);
-               ls_func->generate_bl_desc(&acr->base, _img, wpr_addr, gdesc);
-
-               nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.bl_data_off,
-                                     gdesc, ls_func->bl_desc_size);
-
-               /* Copy ucode */
-               nvkm_gpuobj_memcpy_to(wpr_blob, img->lsb_header.ucode_off,
-                                     _img->ucode_data, _img->ucode_size);
-
-               pos += sizeof(img->wpr_header);
-       }
-
-       nvkm_wo32(wpr_blob, pos, NVKM_SECBOOT_FALCON_INVALID);
-
-       nvkm_done(wpr_blob);
-
-       kfree(gdesc);
-
-       return 0;
-}
-
-struct acr_r367_hsflcn_desc {
-       u8 reserved_dmem[0x200];
-       u32 signatures[4];
-       u32 wpr_region_id;
-       u32 wpr_offset;
-       u32 mmu_memory_range;
-#define FLCN_ACR_MAX_REGIONS 2
-       struct {
-               u32 no_regions;
-               struct {
-                       u32 start_addr;
-                       u32 end_addr;
-                       u32 region_id;
-                       u32 read_mask;
-                       u32 write_mask;
-                       u32 client_mask;
-                       u32 shadow_mem_start_addr;
-               } region_props[FLCN_ACR_MAX_REGIONS];
-       } regions;
-       u32 ucode_blob_size;
-       u64 ucode_blob_base __aligned(8);
-       struct {
-               u32 vpr_enabled;
-               u32 vpr_start;
-               u32 vpr_end;
-               u32 hdcp_policies;
-       } vpr_desc;
-};
-
-void
-acr_r367_fixup_hs_desc(struct acr_r352 *acr, struct nvkm_secboot *sb,
-                      void *_desc)
-{
-       struct acr_r367_hsflcn_desc *desc = _desc;
-       struct nvkm_gpuobj *ls_blob = acr->ls_blob;
-
-       /* WPR region information if WPR is not fixed */
-       if (sb->wpr_size == 0) {
-               u64 wpr_start = ls_blob->addr;
-               u64 wpr_end = ls_blob->addr + ls_blob->size;
-
-               if (acr->func->shadow_blob)
-                       wpr_start += ls_blob->size / 2;
-
-               desc->wpr_region_id = 1;
-               desc->regions.no_regions = 2;
-               desc->regions.region_props[0].start_addr = wpr_start >> 8;
-               desc->regions.region_props[0].end_addr = wpr_end >> 8;
-               desc->regions.region_props[0].region_id = 1;
-               desc->regions.region_props[0].read_mask = 0xf;
-               desc->regions.region_props[0].write_mask = 0xc;
-               desc->regions.region_props[0].client_mask = 0x2;
-               if (acr->func->shadow_blob)
-                       desc->regions.region_props[0].shadow_mem_start_addr =
-                                                            ls_blob->addr >> 8;
-               else
-                       desc->regions.region_props[0].shadow_mem_start_addr = 0;
-       } else {
-               desc->ucode_blob_base = ls_blob->addr;
-               desc->ucode_blob_size = ls_blob->size;
-       }
-}
-
-static const struct acr_r352_ls_func
-acr_r367_ls_sec2_func = {
-       .load = acr_ls_ucode_load_sec2,
-       .post_run = acr_ls_sec2_post_run,
-       .version_max = 1,
-       .version = {
-               &acr_r361_ls_sec2_func_0,
-               &acr_r370_ls_sec2_func_0,
-       }
-};
-
-const struct acr_r352_func
-acr_r367_func = {
-       .fixup_hs_desc = acr_r367_fixup_hs_desc,
-       .generate_hs_bl_desc = acr_r361_generate_hs_bl_desc,
-       .hs_bl_desc_size = sizeof(struct acr_r361_flcn_bl_desc),
-       .shadow_blob = true,
-       .ls_ucode_img_load = acr_r367_ls_ucode_img_load,
-       .ls_fill_headers = acr_r367_ls_fill_headers,
-       .ls_write_wpr = acr_r367_ls_write_wpr,
-       .ls_func = {
-               [NVKM_SECBOOT_FALCON_FECS] = &acr_r361_ls_fecs_func,
-               [NVKM_SECBOOT_FALCON_GPCCS] = &acr_r361_ls_gpccs_func,
-               [NVKM_SECBOOT_FALCON_PMU] = &acr_r361_ls_pmu_func,
-               [NVKM_SECBOOT_FALCON_SEC2] = &acr_r367_ls_sec2_func,
-       },
-};
-
-struct nvkm_acr *
-acr_r367_new(enum nvkm_secboot_falcon boot_falcon,
-            unsigned long managed_falcons)
-{
-       return acr_r352_new_(&acr_r367_func, boot_falcon, managed_falcons);
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r370.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r370.c
deleted file mode 100644 (file)
index e821d0f..0000000
+++ /dev/null
@@ -1,168 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr_r370.h"
-#include "acr_r367.h"
-
-#include <core/msgqueue.h>
-#include <engine/falcon.h>
-#include <engine/sec2.h>
-
-static void
-acr_r370_generate_flcn_bl_desc(const struct nvkm_acr *acr,
-                              const struct ls_ucode_img *img, u64 wpr_addr,
-                              void *_desc)
-{
-       struct acr_r370_flcn_bl_desc *desc = _desc;
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       u64 base, addr_code, addr_data;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       addr_code = base + pdesc->app_resident_code_offset;
-       addr_data = base + pdesc->app_resident_data_offset;
-
-       desc->ctx_dma = FALCON_DMAIDX_UCODE;
-       desc->code_dma_base = u64_to_flcn64(addr_code);
-       desc->non_sec_code_off = pdesc->app_resident_code_offset;
-       desc->non_sec_code_size = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = u64_to_flcn64(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-}
-
-static const struct acr_r352_lsf_func
-acr_r370_ls_fecs_func_0 = {
-       .generate_bl_desc = acr_r370_generate_flcn_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r370_flcn_bl_desc),
-};
-
-const struct acr_r352_ls_func
-acr_r370_ls_fecs_func = {
-       .load = acr_ls_ucode_load_fecs,
-       .version_max = 0,
-       .version = {
-               &acr_r370_ls_fecs_func_0,
-       }
-};
-
-static const struct acr_r352_lsf_func
-acr_r370_ls_gpccs_func_0 = {
-       .generate_bl_desc = acr_r370_generate_flcn_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r370_flcn_bl_desc),
-       /* GPCCS will be loaded using PRI */
-       .lhdr_flags = LSF_FLAG_FORCE_PRIV_LOAD,
-};
-
-const struct acr_r352_ls_func
-acr_r370_ls_gpccs_func = {
-       .load = acr_ls_ucode_load_gpccs,
-       .version_max = 0,
-       .version = {
-               &acr_r370_ls_gpccs_func_0,
-       }
-};
-
-static void
-acr_r370_generate_sec2_bl_desc(const struct nvkm_acr *acr,
-                              const struct ls_ucode_img *img, u64 wpr_addr,
-                              void *_desc)
-{
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       const struct nvkm_sec2 *sec = acr->subdev->device->sec2;
-       struct acr_r370_flcn_bl_desc *desc = _desc;
-       u64 base, addr_code, addr_data;
-       u32 addr_args;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       /* For some reason we should not add app_resident_code_offset here */
-       addr_code = base;
-       addr_data = base + pdesc->app_resident_data_offset;
-       addr_args = sec->falcon->data.limit;
-       addr_args -= NVKM_MSGQUEUE_CMDLINE_SIZE;
-
-       desc->ctx_dma = FALCON_SEC2_DMAIDX_UCODE;
-       desc->code_dma_base = u64_to_flcn64(addr_code);
-       desc->non_sec_code_off = pdesc->app_resident_code_offset;
-       desc->non_sec_code_size = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = u64_to_flcn64(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-       desc->argc = 1;
-       /* args are stored at the beginning of EMEM */
-       desc->argv = 0x01000000;
-}
-
-const struct acr_r352_lsf_func
-acr_r370_ls_sec2_func_0 = {
-       .generate_bl_desc = acr_r370_generate_sec2_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r370_flcn_bl_desc),
-};
-
-const struct acr_r352_ls_func
-acr_r370_ls_sec2_func = {
-       .load = acr_ls_ucode_load_sec2,
-       .post_run = acr_ls_sec2_post_run,
-       .version_max = 0,
-       .version = {
-               &acr_r370_ls_sec2_func_0,
-       }
-};
-
-void
-acr_r370_generate_hs_bl_desc(const struct hsf_load_header *hdr, void *_bl_desc,
-                            u64 offset)
-{
-       struct acr_r370_flcn_bl_desc *bl_desc = _bl_desc;
-
-       bl_desc->ctx_dma = FALCON_DMAIDX_VIRT;
-       bl_desc->non_sec_code_off = hdr->non_sec_code_off;
-       bl_desc->non_sec_code_size = hdr->non_sec_code_size;
-       bl_desc->sec_code_off = hsf_load_header_app_off(hdr, 0);
-       bl_desc->sec_code_size = hsf_load_header_app_size(hdr, 0);
-       bl_desc->code_entry_point = 0;
-       bl_desc->code_dma_base = u64_to_flcn64(offset);
-       bl_desc->data_dma_base = u64_to_flcn64(offset + hdr->data_dma_base);
-       bl_desc->data_size = hdr->data_size;
-}
-
-const struct acr_r352_func
-acr_r370_func = {
-       .fixup_hs_desc = acr_r367_fixup_hs_desc,
-       .generate_hs_bl_desc = acr_r370_generate_hs_bl_desc,
-       .hs_bl_desc_size = sizeof(struct acr_r370_flcn_bl_desc),
-       .shadow_blob = true,
-       .ls_ucode_img_load = acr_r367_ls_ucode_img_load,
-       .ls_fill_headers = acr_r367_ls_fill_headers,
-       .ls_write_wpr = acr_r367_ls_write_wpr,
-       .ls_func = {
-               [NVKM_SECBOOT_FALCON_SEC2] = &acr_r370_ls_sec2_func,
-               [NVKM_SECBOOT_FALCON_FECS] = &acr_r370_ls_fecs_func,
-               [NVKM_SECBOOT_FALCON_GPCCS] = &acr_r370_ls_gpccs_func,
-       },
-};
-
-struct nvkm_acr *
-acr_r370_new(enum nvkm_secboot_falcon boot_falcon,
-            unsigned long managed_falcons)
-{
-       return acr_r352_new_(&acr_r370_func, boot_falcon, managed_falcons);
-}
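The bootloader-descriptor generators deleted above all compute falcon code/data addresses the same way: an image base inside the write-protected region (`wpr_addr + ucode_off + app_start_offset`), plus per-section resident offsets. A sketch of that address math, using a simplified stand-in for `ls_ucode_img_desc` (field names match the original, the struct itself is reduced):

```c
#include <assert.h>
#include <stdint.h>

struct img_desc {
	uint32_t app_start_offset;
	uint32_t app_resident_code_offset;
	uint32_t app_resident_data_offset;
};

/* Code and data addresses are offsets from the image base inside the WPR. */
static uint64_t code_addr(uint64_t wpr_addr, uint64_t ucode_off,
			  const struct img_desc *d)
{
	return wpr_addr + ucode_off + d->app_start_offset
	       + d->app_resident_code_offset;
}

static uint64_t data_addr(uint64_t wpr_addr, uint64_t ucode_off,
			  const struct img_desc *d)
{
	return wpr_addr + ucode_off + d->app_start_offset
	       + d->app_resident_data_offset;
}
```

Note the SEC2 variant above deliberately skips `app_resident_code_offset` for the code address ("for some reason we should not add" it), so the sketch models only the common FECS/GPCCS/PMU case.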
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r370.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r370.h
deleted file mode 100644 (file)
index 2efed6f..0000000
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_SECBOOT_ACR_R370_H__
-#define __NVKM_SECBOOT_ACR_R370_H__
-
-#include "priv.h"
-struct hsf_load_header;
-
-/* Same as acr_r361_flcn_bl_desc, plus argc/argv */
-struct acr_r370_flcn_bl_desc {
-       u32 reserved[4];
-       u32 signature[4];
-       u32 ctx_dma;
-       struct flcn_u64 code_dma_base;
-       u32 non_sec_code_off;
-       u32 non_sec_code_size;
-       u32 sec_code_off;
-       u32 sec_code_size;
-       u32 code_entry_point;
-       struct flcn_u64 data_dma_base;
-       u32 data_size;
-       u32 argc;
-       u32 argv;
-};
-
-void acr_r370_generate_hs_bl_desc(const struct hsf_load_header *, void *, u64);
-extern const struct acr_r352_ls_func acr_r370_ls_fecs_func;
-extern const struct acr_r352_ls_func acr_r370_ls_gpccs_func;
-extern const struct acr_r352_lsf_func acr_r370_ls_sec2_func_0;
-#endif
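The descriptor above stores DMA addresses as `struct flcn_u64`, a lo/hi pair of 32-bit words rather than a native `u64`. A plausible sketch of the conversion helpers (the real `u64_to_flcn64` lives in the falcon headers; this is an assumed equivalent for illustration):

```c
#include <assert.h>
#include <stdint.h>

struct flcn_u64 {
	uint32_t lo;
	uint32_t hi;
};

/* Split a 64-bit address into the falcon's lo/hi word pair. */
static struct flcn_u64 u64_to_flcn64_sketch(uint64_t u)
{
	struct flcn_u64 r = { .lo = (uint32_t)u, .hi = (uint32_t)(u >> 32) };
	return r;
}

/* Inverse: reassemble the 64-bit value. */
static uint64_t flcn64_to_u64_sketch(struct flcn_u64 f)
{
	return ((uint64_t)f.hi << 32) | f.lo;
}
```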
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r375.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r375.c
deleted file mode 100644 (file)
index 8f06477..0000000
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr_r370.h"
-#include "acr_r367.h"
-
-#include <core/msgqueue.h>
-#include <subdev/pmu.h>
-
-static void
-acr_r375_generate_pmu_bl_desc(const struct nvkm_acr *acr,
-                             const struct ls_ucode_img *img, u64 wpr_addr,
-                             void *_desc)
-{
-       const struct ls_ucode_img_desc *pdesc = &img->ucode_desc;
-       const struct nvkm_pmu *pmu = acr->subdev->device->pmu;
-       struct acr_r370_flcn_bl_desc *desc = _desc;
-       u64 base, addr_code, addr_data;
-       u32 addr_args;
-
-       base = wpr_addr + img->ucode_off + pdesc->app_start_offset;
-       addr_code = base + pdesc->app_resident_code_offset;
-       addr_data = base + pdesc->app_resident_data_offset;
-       addr_args = pmu->falcon->data.limit;
-       addr_args -= NVKM_MSGQUEUE_CMDLINE_SIZE;
-
-       desc->ctx_dma = FALCON_DMAIDX_UCODE;
-       desc->code_dma_base = u64_to_flcn64(addr_code);
-       desc->non_sec_code_off = pdesc->app_resident_code_offset;
-       desc->non_sec_code_size = pdesc->app_resident_code_size;
-       desc->code_entry_point = pdesc->app_imem_entry;
-       desc->data_dma_base = u64_to_flcn64(addr_data);
-       desc->data_size = pdesc->app_resident_data_size;
-       desc->argc = 1;
-       desc->argv = addr_args;
-}
-
-static const struct acr_r352_lsf_func
-acr_r375_ls_pmu_func_0 = {
-       .generate_bl_desc = acr_r375_generate_pmu_bl_desc,
-       .bl_desc_size = sizeof(struct acr_r370_flcn_bl_desc),
-};
-
-const struct acr_r352_ls_func
-acr_r375_ls_pmu_func = {
-       .load = acr_ls_ucode_load_pmu,
-       .post_run = acr_ls_pmu_post_run,
-       .version_max = 0,
-       .version = {
-               &acr_r375_ls_pmu_func_0,
-       }
-};
-
-const struct acr_r352_func
-acr_r375_func = {
-       .fixup_hs_desc = acr_r367_fixup_hs_desc,
-       .generate_hs_bl_desc = acr_r370_generate_hs_bl_desc,
-       .hs_bl_desc_size = sizeof(struct acr_r370_flcn_bl_desc),
-       .shadow_blob = true,
-       .ls_ucode_img_load = acr_r367_ls_ucode_img_load,
-       .ls_fill_headers = acr_r367_ls_fill_headers,
-       .ls_write_wpr = acr_r367_ls_write_wpr,
-       .ls_func = {
-               [NVKM_SECBOOT_FALCON_FECS] = &acr_r370_ls_fecs_func,
-               [NVKM_SECBOOT_FALCON_GPCCS] = &acr_r370_ls_gpccs_func,
-               [NVKM_SECBOOT_FALCON_PMU] = &acr_r375_ls_pmu_func,
-       },
-};
-
-struct nvkm_acr *
-acr_r375_new(enum nvkm_secboot_falcon boot_falcon,
-            unsigned long managed_falcons)
-{
-       return acr_r352_new_(&acr_r375_func, boot_falcon, managed_falcons);
-}
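`acr_r375_generate_pmu_bl_desc()` above places the firmware command line at the very top of the falcon's DMEM: `argv = data.limit - NVKM_MSGQUEUE_CMDLINE_SIZE`. A minimal sketch of that placement (the sizes used in the test are illustrative, not the real PMU values):

```c
#include <assert.h>
#include <stdint.h>

/* The args occupy the last cmdline_size bytes of DMEM. */
static uint32_t cmdline_addr(uint32_t dmem_limit, uint32_t cmdline_size)
{
	assert(cmdline_size <= dmem_limit);  /* must fit below the limit */
	return dmem_limit - cmdline_size;
}
```

SEC2 on r370 differs: its args go to EMEM instead, hence the fixed `desc->argv = 0x01000000` in `acr_r370_generate_sec2_bl_desc()`.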
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/base.c
deleted file mode 100644 (file)
index ee29c6c..0000000
+++ /dev/null
@@ -1,213 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * Secure boot is the process by which NVIDIA-signed firmware is loaded into
- * some of the falcons of a GPU. For production devices this is the only way
- * for the firmware to access useful (but sensitive) registers.
- *
- * A Falcon microprocessor supporting advanced security modes can run in one of
- * three modes:
- *
- * - Non-secure (NS). In this mode, functionality is similar to Falcon
- *   architectures before security modes were introduced (pre-Maxwell), but
- *   capability is restricted. In particular, certain registers may be
- *   inaccessible for reads and/or writes, and physical memory access may be
- *   disabled (on certain Falcon instances). This is the only possible mode that
- *   can be used if you don't have microcode cryptographically signed by NVIDIA.
- *
- * - Heavy Secure (HS). In this mode, the microprocessor is a black box - it's
- *   not possible to read or write any Falcon internal state or Falcon registers
- *   from outside the Falcon (for example, from the host system). The only way
- *   to enable this mode is by loading microcode that has been signed by NVIDIA.
- *   (The loading process involves tagging the IMEM block as secure, writing the
- *   signature into a Falcon register, and starting execution. The hardware will
- *   validate the signature, and if valid, grant HS privileges.)
- *
- * - Light Secure (LS). In this mode, the microprocessor has more privileges
- *   than NS but fewer than HS. Some of the microprocessor state is visible to
- *   host software to ease debugging. The only way to enable this mode is by HS
- *   microcode enabling LS mode. Some privileges available to HS mode are not
- *   available here. LS mode is introduced in GM20x.
- *
- * Secure boot consists of temporarily switching an HS-capable falcon (typically
- * PMU) into HS mode in order to validate the LS firmwares of managed falcons,
- * load them, and switch managed falcons into LS mode. Once secure boot
- * completes, no falcon remains in HS mode.
- *
- * Secure boot requires a write-protected memory region (WPR) which can only be
- * written by the secure falcon. On dGPU, the driver sets up the WPR region in
- * video memory. On Tegra, it is set up by the bootloader and its location and
- * size written into memory controller registers.
- *
- * The secure boot process takes place as follows:
- *
- * 1) A LS blob is constructed that contains all the LS firmwares we want to
- *    load, along with their signatures and bootloaders.
- *
- * 2) A HS blob (also called ACR) is created that contains the signed HS
- *    firmware in charge of loading the LS firmwares into their respective
- *    falcons.
- *
- * 3) The HS blob is loaded (via its own bootloader) and executed on the
- *    HS-capable falcon. It authenticates itself, switches the secure falcon to
- *    HS mode and sets up the WPR region around the LS blob (dGPU) or copies the
- *    LS blob into the WPR region (Tegra).
- *
- * 4) The LS blob is now secure from all external tampering. The HS falcon
- *    checks the signatures of the LS firmwares and, if valid, switches the
- *    managed falcons to LS mode and makes them ready to run the LS firmware.
- *
- * 5) The managed falcons remain in LS mode and can be started.
- *
- */
-
-#include "priv.h"
-#include "acr.h"
-
-#include <subdev/mc.h>
-#include <subdev/timer.h>
-#include <subdev/pmu.h>
-#include <engine/sec2.h>
-
-const char *
-nvkm_secboot_falcon_name[] = {
-       [NVKM_SECBOOT_FALCON_PMU] = "PMU",
-       [NVKM_SECBOOT_FALCON_RESERVED] = "<reserved>",
-       [NVKM_SECBOOT_FALCON_FECS] = "FECS",
-       [NVKM_SECBOOT_FALCON_GPCCS] = "GPCCS",
-       [NVKM_SECBOOT_FALCON_SEC2] = "SEC2",
-       [NVKM_SECBOOT_FALCON_END] = "<invalid>",
-};
-/**
- * nvkm_secboot_reset() - reset specified falcon
- */
-int
-nvkm_secboot_reset(struct nvkm_secboot *sb, unsigned long falcon_mask)
-{
-       /* Unmanaged falcon? */
-       if ((falcon_mask | sb->acr->managed_falcons) != sb->acr->managed_falcons) {
-               nvkm_error(&sb->subdev, "cannot reset unmanaged falcon!\n");
-               return -EINVAL;
-       }
-
-       return sb->acr->func->reset(sb->acr, sb, falcon_mask);
-}
-
-/**
- * nvkm_secboot_is_managed() - check whether a given falcon is securely-managed
- */
-bool
-nvkm_secboot_is_managed(struct nvkm_secboot *sb, enum nvkm_secboot_falcon fid)
-{
-       if (!sb)
-               return false;
-
-       return sb->acr->managed_falcons & BIT(fid);
-}
-
-static int
-nvkm_secboot_oneinit(struct nvkm_subdev *subdev)
-{
-       struct nvkm_secboot *sb = nvkm_secboot(subdev);
-       int ret = 0;
-
-       switch (sb->acr->boot_falcon) {
-       case NVKM_SECBOOT_FALCON_PMU:
-               sb->halt_falcon = sb->boot_falcon = subdev->device->pmu->falcon;
-               break;
-       case NVKM_SECBOOT_FALCON_SEC2:
-               /* we must keep SEC2 alive forever since ACR will run on it */
-               nvkm_engine_ref(&subdev->device->sec2->engine);
-               sb->boot_falcon = subdev->device->sec2->falcon;
-               sb->halt_falcon = subdev->device->pmu->falcon;
-               break;
-       default:
-               nvkm_error(subdev, "Unmanaged boot falcon %s!\n",
-                                       nvkm_secboot_falcon_name[sb->acr->boot_falcon]);
-               return -EINVAL;
-       }
-       nvkm_debug(subdev, "using %s falcon for ACR\n", sb->boot_falcon->name);
-
-       /* Call chip-specific init function */
-       if (sb->func->oneinit)
-               ret = sb->func->oneinit(sb);
-       if (ret) {
-               nvkm_error(subdev, "Secure Boot initialization failed: %d\n",
-                          ret);
-               return ret;
-       }
-
-       return 0;
-}
-
-static int
-nvkm_secboot_fini(struct nvkm_subdev *subdev, bool suspend)
-{
-       struct nvkm_secboot *sb = nvkm_secboot(subdev);
-       int ret = 0;
-
-       if (sb->func->fini)
-               ret = sb->func->fini(sb, suspend);
-
-       return ret;
-}
-
-static void *
-nvkm_secboot_dtor(struct nvkm_subdev *subdev)
-{
-       struct nvkm_secboot *sb = nvkm_secboot(subdev);
-       void *ret = NULL;
-
-       if (sb->func->dtor)
-               ret = sb->func->dtor(sb);
-
-       return ret;
-}
-
-static const struct nvkm_subdev_func
-nvkm_secboot = {
-       .oneinit = nvkm_secboot_oneinit,
-       .fini = nvkm_secboot_fini,
-       .dtor = nvkm_secboot_dtor,
-};
-
-int
-nvkm_secboot_ctor(const struct nvkm_secboot_func *func, struct nvkm_acr *acr,
-                 struct nvkm_device *device, int index,
-                 struct nvkm_secboot *sb)
-{
-       unsigned long fid;
-
-       nvkm_subdev_ctor(&nvkm_secboot, device, index, &sb->subdev);
-       sb->func = func;
-       sb->acr = acr;
-       acr->subdev = &sb->subdev;
-
-       nvkm_debug(&sb->subdev, "securely managed falcons:\n");
-       for_each_set_bit(fid, &sb->acr->managed_falcons,
-                        NVKM_SECBOOT_FALCON_END)
-               nvkm_debug(&sb->subdev, "- %s\n",
-                          nvkm_secboot_falcon_name[fid]);
-
-       return 0;
-}
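`nvkm_secboot_reset()` above rejects any request that touches a falcon outside `acr->managed_falcons` with the test `(falcon_mask | managed) != managed`, which is equivalent to asking whether `falcon_mask & ~managed` has any bits set. A sketch of that bitmask check:

```c
#include <assert.h>
#include <stdbool.h>

/* True iff every bit in mask is also set in managed (no stray falcons). */
static bool all_managed(unsigned long mask, unsigned long managed)
{
	return (mask | managed) == managed;
}
```

Either spelling works; the OR form used in the driver avoids a negation and reads as "adding these falcons changes nothing".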
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm200.c
deleted file mode 100644 (file)
index 5e91b3f..0000000
+++ /dev/null
@@ -1,262 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-
-#include "acr.h"
-#include "gm200.h"
-
-#include <core/gpuobj.h>
-#include <subdev/fb.h>
-#include <engine/falcon.h>
-#include <subdev/mc.h>
-
-/**
- * gm200_secboot_run_blob() - run the given high-secure blob
- *
- */
-int
-gm200_secboot_run_blob(struct nvkm_secboot *sb, struct nvkm_gpuobj *blob,
-                      struct nvkm_falcon *falcon)
-{
-       struct gm200_secboot *gsb = gm200_secboot(sb);
-       struct nvkm_subdev *subdev = &gsb->base.subdev;
-       struct nvkm_vma *vma = NULL;
-       u32 start_address;
-       int ret;
-
-       ret = nvkm_falcon_get(falcon, subdev);
-       if (ret)
-               return ret;
-
-       /* Map the HS firmware so the HS bootloader can see it */
-       ret = nvkm_vmm_get(gsb->vmm, 12, blob->size, &vma);
-       if (ret) {
-               nvkm_falcon_put(falcon, subdev);
-               return ret;
-       }
-
-       ret = nvkm_memory_map(blob, 0, gsb->vmm, vma, NULL, 0);
-       if (ret)
-               goto end;
-
-       /* Reset and set the falcon up */
-       ret = nvkm_falcon_reset(falcon);
-       if (ret)
-               goto end;
-       nvkm_falcon_bind_context(falcon, gsb->inst);
-
-       /* Load the HS bootloader into the falcon's IMEM/DMEM */
-       ret = sb->acr->func->load(sb->acr, falcon, blob, vma->addr);
-       if (ret < 0)
-               goto end;
-
-       start_address = ret;
-
-       /* Disable interrupts as we will poll for the HALT bit */
-       nvkm_mc_intr_mask(sb->subdev.device, falcon->owner->index, false);
-
-       /* Set default error value in mailbox register */
-       nvkm_falcon_wr32(falcon, 0x040, 0xdeada5a5);
-
-       /* Start the HS bootloader */
-       nvkm_falcon_set_start_addr(falcon, start_address);
-       nvkm_falcon_start(falcon);
-       ret = nvkm_falcon_wait_for_halt(falcon, 100);
-       if (ret)
-               goto end;
-
-       /*
-        * The mailbox register contains the (positive) error code - return this
-        * to the caller
-        */
-       ret = nvkm_falcon_rd32(falcon, 0x040);
-
-end:
-       /* Reenable interrupts */
-       nvkm_mc_intr_mask(sb->subdev.device, falcon->owner->index, true);
-
-       /* We don't need the ACR firmware anymore */
-       nvkm_vmm_put(gsb->vmm, &vma);
-       nvkm_falcon_put(falcon, subdev);
-
-       return ret;
-}
-
-int
-gm200_secboot_oneinit(struct nvkm_secboot *sb)
-{
-       struct gm200_secboot *gsb = gm200_secboot(sb);
-       struct nvkm_device *device = sb->subdev.device;
-       int ret;
-
-       /* Allocate instance block and VM */
-       ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0, true,
-                             &gsb->inst);
-       if (ret)
-               return ret;
-
-       ret = nvkm_vmm_new(device, 0, 600 * 1024, NULL, 0, NULL, "acr",
-                          &gsb->vmm);
-       if (ret)
-               return ret;
-
-       atomic_inc(&gsb->vmm->engref[NVKM_SUBDEV_PMU]);
-       gsb->vmm->debug = gsb->base.subdev.debug;
-
-       ret = nvkm_vmm_join(gsb->vmm, gsb->inst);
-       if (ret)
-               return ret;
-
-       if (sb->acr->func->oneinit) {
-               ret = sb->acr->func->oneinit(sb->acr, sb);
-               if (ret)
-                       return ret;
-       }
-
-       return 0;
-}
-
-int
-gm200_secboot_fini(struct nvkm_secboot *sb, bool suspend)
-{
-       int ret = 0;
-
-       if (sb->acr->func->fini)
-               ret = sb->acr->func->fini(sb->acr, sb, suspend);
-
-       return ret;
-}
-
-void *
-gm200_secboot_dtor(struct nvkm_secboot *sb)
-{
-       struct gm200_secboot *gsb = gm200_secboot(sb);
-
-       sb->acr->func->dtor(sb->acr);
-
-       nvkm_vmm_part(gsb->vmm, gsb->inst);
-       nvkm_vmm_unref(&gsb->vmm);
-       nvkm_memory_unref(&gsb->inst);
-
-       return gsb;
-}
-
-
-static const struct nvkm_secboot_func
-gm200_secboot = {
-       .dtor = gm200_secboot_dtor,
-       .oneinit = gm200_secboot_oneinit,
-       .fini = gm200_secboot_fini,
-       .run_blob = gm200_secboot_run_blob,
-};
-
-int
-gm200_secboot_new(struct nvkm_device *device, int index,
-                 struct nvkm_secboot **psb)
-{
-       int ret;
-       struct gm200_secboot *gsb;
-       struct nvkm_acr *acr;
-
-       acr = acr_r361_new(BIT(NVKM_SECBOOT_FALCON_FECS) |
-                          BIT(NVKM_SECBOOT_FALCON_GPCCS));
-       if (IS_ERR(acr))
-               return PTR_ERR(acr);
-
-       gsb = kzalloc(sizeof(*gsb), GFP_KERNEL);
-       if (!gsb) {
-               *psb = NULL;
-               return -ENOMEM;
-       }
-       *psb = &gsb->base;
-
-       ret = nvkm_secboot_ctor(&gm200_secboot, acr, device, index, &gsb->base);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-
-MODULE_FIRMWARE("nvidia/gm200/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gm200/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gm200/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gm200/gr/sw_method_init.bin");
-
-MODULE_FIRMWARE("nvidia/gm204/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gm204/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gm204/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gm204/gr/sw_method_init.bin");
-
-MODULE_FIRMWARE("nvidia/gm206/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gm206/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gm206/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gm206/gr/sw_method_init.bin");
-
-MODULE_FIRMWARE("nvidia/gp100/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp100/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp100/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp100/gr/sw_method_init.bin");
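`gm200_secboot_run_blob()` above uses a sentinel-mailbox handshake: it writes `0xdeada5a5` into falcon register `0x040` before starting the HS bootloader, then reads the register back after HALT. If the sentinel is still there, the firmware never reported a status. A sketch of the pattern against a fake register (the variable `mbox` stands in for the hardware register; the real code uses `nvkm_falcon_wr32`/`rd32`):

```c
#include <assert.h>
#include <stdint.h>

#define MBOX_SENTINEL 0xdeada5a5u

static uint32_t mbox;                        /* stands in for falcon reg 0x040 */

static void mbox_arm(void)                   { mbox = MBOX_SENTINEL; }
static void mbox_firmware_done(uint32_t rc)  { mbox = rc; }

/* Sentinel intact means the firmware never wrote a result back. */
static int mbox_result(void)
{
	return (mbox == MBOX_SENTINEL) ? -1 : (int)mbox;
}
```

In the driver the distinction matters less than it might: a non-zero mailbox value is the firmware's (positive) error code and is simply returned to the caller.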
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm200.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm200.h
deleted file mode 100644 (file)
index 62c5e16..0000000
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_SECBOOT_GM200_H__
-#define __NVKM_SECBOOT_GM200_H__
-
-#include "priv.h"
-
-struct gm200_secboot {
-       struct nvkm_secboot base;
-
-       /* Instance block & address space used for HS FW execution */
-       struct nvkm_memory *inst;
-       struct nvkm_vmm *vmm;
-};
-#define gm200_secboot(sb) container_of(sb, struct gm200_secboot, base)
-
-int gm200_secboot_oneinit(struct nvkm_secboot *);
-int gm200_secboot_fini(struct nvkm_secboot *, bool);
-void *gm200_secboot_dtor(struct nvkm_secboot *);
-int gm200_secboot_run_blob(struct nvkm_secboot *, struct nvkm_gpuobj *,
-                          struct nvkm_falcon *);
-
-/* Tegra-only */
-int gm20b_secboot_tegra_read_wpr(struct gm200_secboot *, u32);
-
-#endif
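The `gm200_secboot(sb)` macro above is the kernel's `container_of` pattern: given a pointer to the embedded `nvkm_secboot base` member, recover the enclosing `gm200_secboot`. A portable sketch using `offsetof` (names are illustrative; the real macro comes from the kernel's `container_of`):

```c
#include <assert.h>
#include <stddef.h>

struct base_obj { int id; };

struct derived {
	int extra;
	struct base_obj base;   /* embedded by value, not a pointer */
};

/* Subtract the member's offset to get back to the enclosing struct. */
#define container_of_sketch(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct derived *to_derived(struct base_obj *b)
{
	return container_of_sketch(b, struct derived, base);
}
```

This only works because `base` is embedded by value; the pattern is what lets subdev callbacks receive a `struct nvkm_secboot *` yet reach chip-specific state.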
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gm20b.c
deleted file mode 100644 (file)
index df8b919..0000000
+++ /dev/null
@@ -1,148 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr.h"
-#include "gm200.h"
-
-#define TEGRA210_MC_BASE                       0x70019000
-
-#ifdef CONFIG_ARCH_TEGRA
-#define MC_SECURITY_CARVEOUT2_CFG0             0xc58
-#define MC_SECURITY_CARVEOUT2_BOM_0            0xc5c
-#define MC_SECURITY_CARVEOUT2_BOM_HI_0         0xc60
-#define MC_SECURITY_CARVEOUT2_SIZE_128K                0xc64
-#define TEGRA_MC_SECURITY_CARVEOUT_CFG_LOCKED  (1 << 1)
-/**
- * gm20b_secboot_tegra_read_wpr() - read the WPR registers on Tegra
- *
- * On dGPU, we can manage the WPR region ourselves, but on Tegra the WPR region
- * is reserved from system memory by the bootloader and irreversibly locked.
- * This function reads the address and size of the pre-configured WPR region.
- */
-int
-gm20b_secboot_tegra_read_wpr(struct gm200_secboot *gsb, u32 mc_base)
-{
-       struct nvkm_secboot *sb = &gsb->base;
-       void __iomem *mc;
-       u32 cfg;
-
-       mc = ioremap(mc_base, 0xd00);
-       if (!mc) {
-               nvkm_error(&sb->subdev, "Cannot map Tegra MC registers\n");
-               return -ENOMEM;
-       }
-       sb->wpr_addr = ioread32_native(mc + MC_SECURITY_CARVEOUT2_BOM_0) |
-             ((u64)ioread32_native(mc + MC_SECURITY_CARVEOUT2_BOM_HI_0) << 32);
-       sb->wpr_size = ioread32_native(mc + MC_SECURITY_CARVEOUT2_SIZE_128K)
-               << 17;
-       cfg = ioread32_native(mc + MC_SECURITY_CARVEOUT2_CFG0);
-       iounmap(mc);
-
-       /* Check that WPR settings are valid */
-       if (sb->wpr_size == 0) {
-               nvkm_error(&sb->subdev, "WPR region is empty\n");
-               return -EINVAL;
-       }
-
-       if (!(cfg & TEGRA_MC_SECURITY_CARVEOUT_CFG_LOCKED)) {
-               nvkm_error(&sb->subdev, "WPR region not locked\n");
-               return -EINVAL;
-       }
-
-       return 0;
-}
-#else
-int
-gm20b_secboot_tegra_read_wpr(struct gm200_secboot *gsb, u32 mc_base)
-{
-       nvkm_error(&gsb->base.subdev, "Tegra support not compiled in\n");
-       return -EINVAL;
-}
-#endif
-
-static int
-gm20b_secboot_oneinit(struct nvkm_secboot *sb)
-{
-       struct gm200_secboot *gsb = gm200_secboot(sb);
-       int ret;
-
-       ret = gm20b_secboot_tegra_read_wpr(gsb, TEGRA210_MC_BASE);
-       if (ret)
-               return ret;
-
-       return gm200_secboot_oneinit(sb);
-}
-
-static const struct nvkm_secboot_func
-gm20b_secboot = {
-       .dtor = gm200_secboot_dtor,
-       .oneinit = gm20b_secboot_oneinit,
-       .fini = gm200_secboot_fini,
-       .run_blob = gm200_secboot_run_blob,
-};
-
-int
-gm20b_secboot_new(struct nvkm_device *device, int index,
-                 struct nvkm_secboot **psb)
-{
-       int ret;
-       struct gm200_secboot *gsb;
-       struct nvkm_acr *acr;
-
-       acr = acr_r352_new(BIT(NVKM_SECBOOT_FALCON_FECS) |
-                          BIT(NVKM_SECBOOT_FALCON_PMU));
-       if (IS_ERR(acr))
-               return PTR_ERR(acr);
-       /* Support the initial GM20B firmware release without PMU */
-       acr->optional_falcons = BIT(NVKM_SECBOOT_FALCON_PMU);
-
-       gsb = kzalloc(sizeof(*gsb), GFP_KERNEL);
-       if (!gsb) {
-               psb = NULL;
-               return -ENOMEM;
-       }
-       *psb = &gsb->base;
-
-       ret = nvkm_secboot_ctor(&gm20b_secboot, acr, device, index, &gsb->base);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
-MODULE_FIRMWARE("nvidia/gm20b/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gm20b/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gm20b/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gm20b/pmu/desc.bin");
-MODULE_FIRMWARE("nvidia/gm20b/pmu/image.bin");
-MODULE_FIRMWARE("nvidia/gm20b/pmu/sig.bin");
-#endif
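
The Tegra WPR read above assembles a 64-bit carveout address from two 32-bit MC registers and expands a size register that counts 128 KiB units (hence the `<< 17`). A standalone sketch of just that arithmetic (the register values in the test are made up):

```c
#include <stdint.h>

/* Combine MC_SECURITY_CARVEOUT2_BOM_0 (low 32 bits) and
 * MC_SECURITY_CARVEOUT2_BOM_HI_0 (high 32 bits) into the WPR base. */
static uint64_t wpr_addr(uint32_t bom_lo, uint32_t bom_hi)
{
	return (uint64_t)bom_lo | ((uint64_t)bom_hi << 32);
}

/* MC_SECURITY_CARVEOUT2_SIZE_128K counts 128 KiB units;
 * 128 KiB == 1 << 17 bytes. */
static uint64_t wpr_size(uint32_t size_128k)
{
	return (uint64_t)size_128k << 17;
}
```

The deleted code then rejects a zero-sized region and a carveout whose `CFG0` lock bit is clear, since an unlocked WPR cannot be trusted for secure boot.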
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp102.c
deleted file mode 100644 (file)
index 4695f1c..0000000
+++ /dev/null
@@ -1,264 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr.h"
-#include "gm200.h"
-
-#include "ls_ucode.h"
-#include "hs_ucode.h"
-#include <subdev/mc.h>
-#include <subdev/timer.h>
-#include <engine/falcon.h>
-#include <engine/nvdec.h>
-
-static bool
-gp102_secboot_scrub_required(struct nvkm_secboot *sb)
-{
-       struct nvkm_subdev *subdev = &sb->subdev;
-       struct nvkm_device *device = subdev->device;
-       u32 reg;
-
-       nvkm_wr32(device, 0x100cd0, 0x2);
-       reg = nvkm_rd32(device, 0x100cd0);
-
-       return (reg & BIT(4));
-}
-
-static int
-gp102_run_secure_scrub(struct nvkm_secboot *sb)
-{
-       struct nvkm_subdev *subdev = &sb->subdev;
-       struct nvkm_device *device = subdev->device;
-       struct nvkm_engine *engine;
-       struct nvkm_falcon *falcon;
-       void *scrub_image;
-       struct fw_bin_header *hsbin_hdr;
-       struct hsf_fw_header *fw_hdr;
-       struct hsf_load_header *lhdr;
-       void *scrub_data;
-       int ret;
-
-       nvkm_debug(subdev, "running VPR scrubber binary on NVDEC...\n");
-
-       engine = nvkm_engine_ref(&device->nvdec[0]->engine);
-       if (IS_ERR(engine))
-               return PTR_ERR(engine);
-       falcon = device->nvdec[0]->falcon;
-
-       nvkm_falcon_get(falcon, &sb->subdev);
-
-       scrub_image = hs_ucode_load_blob(subdev, falcon, "nvdec/scrubber");
-       if (IS_ERR(scrub_image))
-               return PTR_ERR(scrub_image);
-
-       nvkm_falcon_reset(falcon);
-       nvkm_falcon_bind_context(falcon, NULL);
-
-       hsbin_hdr = scrub_image;
-       fw_hdr = scrub_image + hsbin_hdr->header_offset;
-       lhdr = scrub_image + fw_hdr->hdr_offset;
-       scrub_data = scrub_image + hsbin_hdr->data_offset;
-
-       nvkm_falcon_load_imem(falcon, scrub_data, lhdr->non_sec_code_off,
-                             lhdr->non_sec_code_size,
-                             lhdr->non_sec_code_off >> 8, 0, false);
-       nvkm_falcon_load_imem(falcon, scrub_data + lhdr->apps[0],
-                             ALIGN(lhdr->apps[0], 0x100),
-                             lhdr->apps[1],
-                             lhdr->apps[0] >> 8, 0, true);
-       nvkm_falcon_load_dmem(falcon, scrub_data + lhdr->data_dma_base, 0,
-                             lhdr->data_size, 0);
-
-       kfree(scrub_image);
-
-       nvkm_falcon_set_start_addr(falcon, 0x0);
-       nvkm_falcon_start(falcon);
-
-       ret = nvkm_falcon_wait_for_halt(falcon, 500);
-       if (ret < 0) {
-               nvkm_error(subdev, "failed to run VPR scrubber binary!\n");
-               ret = -ETIMEDOUT;
-               goto end;
-       }
-
-       /* put nvdec in clean state - without reset it will remain in HS mode */
-       nvkm_falcon_reset(falcon);
-
-       if (gp102_secboot_scrub_required(sb)) {
-               nvkm_error(subdev, "VPR scrubber binary failed!\n");
-               ret = -EINVAL;
-               goto end;
-       }
-
-       nvkm_debug(subdev, "VPR scrub successfully completed\n");
-
-end:
-       nvkm_falcon_put(falcon, &sb->subdev);
-       nvkm_engine_unref(&engine);
-       return ret;
-}
-
-static int
-gp102_secboot_run_blob(struct nvkm_secboot *sb, struct nvkm_gpuobj *blob,
-                      struct nvkm_falcon *falcon)
-{
-       int ret;
-
-       /* make sure the VPR region is unlocked */
-       if (gp102_secboot_scrub_required(sb)) {
-               ret = gp102_run_secure_scrub(sb);
-               if (ret)
-                       return ret;
-       }
-
-       return gm200_secboot_run_blob(sb, blob, falcon);
-}
-
-const struct nvkm_secboot_func
-gp102_secboot = {
-       .dtor = gm200_secboot_dtor,
-       .oneinit = gm200_secboot_oneinit,
-       .fini = gm200_secboot_fini,
-       .run_blob = gp102_secboot_run_blob,
-};
-
-int
-gp102_secboot_new(struct nvkm_device *device, int index,
-                 struct nvkm_secboot **psb)
-{
-       int ret;
-       struct gm200_secboot *gsb;
-       struct nvkm_acr *acr;
-
-       acr = acr_r367_new(NVKM_SECBOOT_FALCON_SEC2,
-                          BIT(NVKM_SECBOOT_FALCON_FECS) |
-                          BIT(NVKM_SECBOOT_FALCON_GPCCS) |
-                          BIT(NVKM_SECBOOT_FALCON_SEC2));
-       if (IS_ERR(acr))
-               return PTR_ERR(acr);
-
-       gsb = kzalloc(sizeof(*gsb), GFP_KERNEL);
-       if (!gsb) {
-               psb = NULL;
-               return -ENOMEM;
-       }
-       *psb = &gsb->base;
-
-       ret = nvkm_secboot_ctor(&gp102_secboot, acr, device, index, &gsb->base);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-MODULE_FIRMWARE("nvidia/gp102/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp102/acr/unload_bl.bin");
-MODULE_FIRMWARE("nvidia/gp102/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp102/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp102/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gp102/nvdec/scrubber.bin");
-MODULE_FIRMWARE("nvidia/gp102/sec2/desc.bin");
-MODULE_FIRMWARE("nvidia/gp102/sec2/image.bin");
-MODULE_FIRMWARE("nvidia/gp102/sec2/sig.bin");
-MODULE_FIRMWARE("nvidia/gp102/sec2/desc-1.bin");
-MODULE_FIRMWARE("nvidia/gp102/sec2/image-1.bin");
-MODULE_FIRMWARE("nvidia/gp102/sec2/sig-1.bin");
-MODULE_FIRMWARE("nvidia/gp104/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp104/acr/unload_bl.bin");
-MODULE_FIRMWARE("nvidia/gp104/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp104/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp104/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gp104/nvdec/scrubber.bin");
-MODULE_FIRMWARE("nvidia/gp104/sec2/desc.bin");
-MODULE_FIRMWARE("nvidia/gp104/sec2/image.bin");
-MODULE_FIRMWARE("nvidia/gp104/sec2/sig.bin");
-MODULE_FIRMWARE("nvidia/gp104/sec2/desc-1.bin");
-MODULE_FIRMWARE("nvidia/gp104/sec2/image-1.bin");
-MODULE_FIRMWARE("nvidia/gp104/sec2/sig-1.bin");
-MODULE_FIRMWARE("nvidia/gp106/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp106/acr/unload_bl.bin");
-MODULE_FIRMWARE("nvidia/gp106/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp106/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp106/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gp106/nvdec/scrubber.bin");
-MODULE_FIRMWARE("nvidia/gp106/sec2/desc.bin");
-MODULE_FIRMWARE("nvidia/gp106/sec2/image.bin");
-MODULE_FIRMWARE("nvidia/gp106/sec2/sig.bin");
-MODULE_FIRMWARE("nvidia/gp106/sec2/desc-1.bin");
-MODULE_FIRMWARE("nvidia/gp106/sec2/image-1.bin");
-MODULE_FIRMWARE("nvidia/gp106/sec2/sig-1.bin");
-MODULE_FIRMWARE("nvidia/gp107/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp107/acr/unload_bl.bin");
-MODULE_FIRMWARE("nvidia/gp107/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp107/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp107/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gp107/nvdec/scrubber.bin");
-MODULE_FIRMWARE("nvidia/gp107/sec2/desc.bin");
-MODULE_FIRMWARE("nvidia/gp107/sec2/image.bin");
-MODULE_FIRMWARE("nvidia/gp107/sec2/sig.bin");
-MODULE_FIRMWARE("nvidia/gp107/sec2/desc-1.bin");
-MODULE_FIRMWARE("nvidia/gp107/sec2/image-1.bin");
-MODULE_FIRMWARE("nvidia/gp107/sec2/sig-1.bin");
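
The gp102 path above gates blob execution on a hardware "scrub required" bit: run the NVDEC VPR scrubber only when needed, and treat a still-required state afterwards as failure. A simplified sketch of that control flow, with the register accesses replaced by hypothetical callbacks (the test doubles below are illustration only):

```c
#include <errno.h>
#include <stdbool.h>

/* Test doubles standing in for gp102_secboot_scrub_required() and
 * gp102_run_secure_scrub(); a fake "VPR scrubbed" state bit. */
static bool scrubbed;
static bool fake_scrub_required(void) { return !scrubbed; }
static int  fake_run_scrub(void)      { scrubbed = true; return 0; }

/* Mirror of the gating logic in gp102_secboot_run_blob(): skip the
 * scrubber when the VPR region is already unlocked, and fail if
 * scrubbing ran but the hardware still reports scrub-required. */
static int vpr_prepare(bool (*required)(void), int (*scrub)(void))
{
	int ret;

	if (!required())
		return 0;	/* nothing to do */

	ret = scrub();
	if (ret)
		return ret;

	if (required())		/* scrubber ran but had no effect */
		return -EINVAL;

	return 0;
}
```

In the real driver the re-check lives inside `gp102_run_secure_scrub()` itself, after the falcon halts; the sketch folds it into one function for clarity.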
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp108.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp108.c
deleted file mode 100644 (file)
index 737a8d5..0000000
+++ /dev/null
@@ -1,88 +0,0 @@
-/*
- * Copyright 2017 Red Hat Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- */
-#include "gm200.h"
-#include "acr.h"
-
-int
-gp108_secboot_new(struct nvkm_device *device, int index,
-                 struct nvkm_secboot **psb)
-{
-       struct gm200_secboot *gsb;
-       struct nvkm_acr *acr;
-
-       acr = acr_r370_new(NVKM_SECBOOT_FALCON_SEC2,
-                          BIT(NVKM_SECBOOT_FALCON_FECS) |
-                          BIT(NVKM_SECBOOT_FALCON_GPCCS) |
-                          BIT(NVKM_SECBOOT_FALCON_SEC2));
-       if (IS_ERR(acr))
-               return PTR_ERR(acr);
-
-       if (!(gsb = kzalloc(sizeof(*gsb), GFP_KERNEL))) {
-               acr->func->dtor(acr);
-               return -ENOMEM;
-       }
-       *psb = &gsb->base;
-
-       return nvkm_secboot_ctor(&gp102_secboot, acr, device, index, &gsb->base);
-}
-
-MODULE_FIRMWARE("nvidia/gp108/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp108/acr/unload_bl.bin");
-MODULE_FIRMWARE("nvidia/gp108/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp108/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp108/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gp108/nvdec/scrubber.bin");
-MODULE_FIRMWARE("nvidia/gp108/sec2/desc.bin");
-MODULE_FIRMWARE("nvidia/gp108/sec2/image.bin");
-MODULE_FIRMWARE("nvidia/gp108/sec2/sig.bin");
-
-MODULE_FIRMWARE("nvidia/gv100/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gv100/acr/unload_bl.bin");
-MODULE_FIRMWARE("nvidia/gv100/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gv100/acr/ucode_unload.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gv100/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gv100/nvdec/scrubber.bin");
-MODULE_FIRMWARE("nvidia/gv100/sec2/desc.bin");
-MODULE_FIRMWARE("nvidia/gv100/sec2/image.bin");
-MODULE_FIRMWARE("nvidia/gv100/sec2/sig.bin");
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp10b.c
deleted file mode 100644 (file)
index 28ca29d..0000000
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "acr.h"
-#include "gm200.h"
-
-#define TEGRA186_MC_BASE                       0x02c10000
-
-static int
-gp10b_secboot_oneinit(struct nvkm_secboot *sb)
-{
-       struct gm200_secboot *gsb = gm200_secboot(sb);
-       int ret;
-
-       ret = gm20b_secboot_tegra_read_wpr(gsb, TEGRA186_MC_BASE);
-       if (ret)
-               return ret;
-
-       return gm200_secboot_oneinit(sb);
-}
-
-static const struct nvkm_secboot_func
-gp10b_secboot = {
-       .dtor = gm200_secboot_dtor,
-       .oneinit = gp10b_secboot_oneinit,
-       .fini = gm200_secboot_fini,
-       .run_blob = gm200_secboot_run_blob,
-};
-
-int
-gp10b_secboot_new(struct nvkm_device *device, int index,
-                 struct nvkm_secboot **psb)
-{
-       int ret;
-       struct gm200_secboot *gsb;
-       struct nvkm_acr *acr;
-
-       acr = acr_r352_new(BIT(NVKM_SECBOOT_FALCON_FECS) |
-                          BIT(NVKM_SECBOOT_FALCON_GPCCS) |
-                          BIT(NVKM_SECBOOT_FALCON_PMU));
-       if (IS_ERR(acr))
-               return PTR_ERR(acr);
-
-       gsb = kzalloc(sizeof(*gsb), GFP_KERNEL);
-       if (!gsb) {
-               psb = NULL;
-               return -ENOMEM;
-       }
-       *psb = &gsb->base;
-
-       ret = nvkm_secboot_ctor(&gp10b_secboot, acr, device, index, &gsb->base);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC)
-MODULE_FIRMWARE("nvidia/gp10b/acr/bl.bin");
-MODULE_FIRMWARE("nvidia/gp10b/acr/ucode_load.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_data.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/fecs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_bl.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_inst.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_data.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/gpccs_sig.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/sw_ctx.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/sw_nonctx.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/sw_bundle_init.bin");
-MODULE_FIRMWARE("nvidia/gp10b/gr/sw_method_init.bin");
-MODULE_FIRMWARE("nvidia/gp10b/pmu/desc.bin");
-MODULE_FIRMWARE("nvidia/gp10b/pmu/image.bin");
-MODULE_FIRMWARE("nvidia/gp10b/pmu/sig.bin");
-#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/hs_ucode.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/hs_ucode.c
deleted file mode 100644 (file)
index 6b33182..0000000
+++ /dev/null
@@ -1,97 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#include "hs_ucode.h"
-#include "ls_ucode.h"
-#include "acr.h"
-
-#include <engine/falcon.h>
-
-/**
- * hs_ucode_patch_signature() - patch HS blob with correct signature for
- * specified falcon.
- */
-static void
-hs_ucode_patch_signature(const struct nvkm_falcon *falcon, void *acr_image,
-                        bool new_format)
-{
-       struct fw_bin_header *hsbin_hdr = acr_image;
-       struct hsf_fw_header *fw_hdr = acr_image + hsbin_hdr->header_offset;
-       void *hs_data = acr_image + hsbin_hdr->data_offset;
-       void *sig;
-       u32 sig_size;
-       u32 patch_loc, patch_sig;
-
-       /*
-        * I had the brilliant idea to "improve" the binary format by
-        * removing this useless indirection. However to make NVIDIA files
-        * directly compatible, let's support both format.
-        */
-       if (new_format) {
-               patch_loc = fw_hdr->patch_loc;
-               patch_sig = fw_hdr->patch_sig;
-       } else {
-               patch_loc = *(u32 *)(acr_image + fw_hdr->patch_loc);
-               patch_sig = *(u32 *)(acr_image + fw_hdr->patch_sig);
-       }
-
-       /* Falcon in debug or production mode? */
-       if (falcon->debug) {
-               sig = acr_image + fw_hdr->sig_dbg_offset;
-               sig_size = fw_hdr->sig_dbg_size;
-       } else {
-               sig = acr_image + fw_hdr->sig_prod_offset;
-               sig_size = fw_hdr->sig_prod_size;
-       }
-
-       /* Patch signature */
-       memcpy(hs_data + patch_loc, sig + patch_sig, sig_size);
-}
-
-void *
-hs_ucode_load_blob(struct nvkm_subdev *subdev, const struct nvkm_falcon *falcon,
-                  const char *fw)
-{
-       void *acr_image;
-       bool new_format;
-
-       acr_image = nvkm_acr_load_firmware(subdev, fw, 0);
-       if (IS_ERR(acr_image))
-               return acr_image;
-
-       /* detect the format to define how signature should be patched */
-       switch (((u32 *)acr_image)[0]) {
-       case 0x3b1d14f0:
-               new_format = true;
-               break;
-       case 0x000010de:
-               new_format = false;
-               break;
-       default:
-               nvkm_error(subdev, "unknown header for HS blob %s\n", fw);
-               return ERR_PTR(-EINVAL);
-       }
-
-       hs_ucode_patch_signature(falcon, acr_image, new_format);
-
-       return acr_image;
-}
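
`hs_ucode_load_blob()` above distinguishes the two HS image layouts by the first 32-bit word of the blob, which then decides whether `patch_loc`/`patch_sig` are read directly or through one more indirection. A tiny sketch of that dispatch (the magic values are taken from the deleted code; the enum names are hypothetical):

```c
#include <stdint.h>

enum hs_format {
	HS_FORMAT_UNKNOWN = -1,
	HS_FORMAT_OLD     =  0,	/* patch fields need one extra indirection */
	HS_FORMAT_NEW     =  1,	/* patch fields are used directly */
};

/* The first word of the HS image acts as a format magic. */
static enum hs_format hs_detect_format(const void *image)
{
	switch (((const uint32_t *)image)[0]) {
	case 0x3b1d14f0:
		return HS_FORMAT_NEW;
	case 0x000010de:
		return HS_FORMAT_OLD;
	default:
		return HS_FORMAT_UNKNOWN;
	}
}
```

Keeping both branches is what lets the driver consume NVIDIA-released images and repackaged ones with the same loader.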
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/hs_ucode.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/hs_ucode.h
deleted file mode 100644 (file)
index d8cfc6f..0000000
+++ /dev/null
@@ -1,81 +0,0 @@
-/*
- * Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_SECBOOT_HS_UCODE_H__
-#define __NVKM_SECBOOT_HS_UCODE_H__
-
-#include <core/os.h>
-#include <core/subdev.h>
-
-struct nvkm_falcon;
-
-/**
- * struct hsf_fw_header - HS firmware descriptor
- * @sig_dbg_offset:    offset of the debug signature
- * @sig_dbg_size:      size of the debug signature
- * @sig_prod_offset:   offset of the production signature
- * @sig_prod_size:     size of the production signature
- * @patch_loc:         offset of the offset (sic) of where the signature is
- * @patch_sig:         offset of the offset (sic) to add to sig_*_offset
- * @hdr_offset:                offset of the load header (see struct hs_load_header)
- * @hdr_size:          size of above header
- *
- * This structure is embedded in the HS firmware image at
- * hs_bin_hdr.header_offset.
- */
-struct hsf_fw_header {
-       u32 sig_dbg_offset;
-       u32 sig_dbg_size;
-       u32 sig_prod_offset;
-       u32 sig_prod_size;
-       u32 patch_loc;
-       u32 patch_sig;
-       u32 hdr_offset;
-       u32 hdr_size;
-};
-
-/**
- * struct hsf_load_header - HS firmware load header
- */
-struct hsf_load_header {
-       u32 non_sec_code_off;
-       u32 non_sec_code_size;
-       u32 data_dma_base;
-       u32 data_size;
-       u32 num_apps;
-       /*
-        * Organized as follows:
-        * - app0_code_off
-        * - app1_code_off
-        * - ...
-        * - appn_code_off
-        * - app0_code_size
-        * - app1_code_size
-        * - ...
-        */
-       u32 apps[0];
-};
-
-void *hs_ucode_load_blob(struct nvkm_subdev *, const struct nvkm_falcon *,
-                        const char *);
-
-#endif
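
The `apps[]` flexible array documented in `struct hsf_load_header` above stores all per-app code offsets first, then all per-app code sizes, so app *i*'s size lives at index `num_apps + i`. Two hedged helper sketches making that indexing explicit (helper names are not from the driver):

```c
#include <stdint.h>

/* apps[] layout per the hsf_load_header comment:
 *   apps[0 .. num_apps-1]           = per-app code offsets
 *   apps[num_apps .. 2*num_apps-1]  = per-app code sizes
 */
static uint32_t app_code_off(const uint32_t *apps, uint32_t num_apps,
			     uint32_t i)
{
	(void)num_apps;		/* offsets start at index 0 */
	return apps[i];
}

static uint32_t app_code_size(const uint32_t *apps, uint32_t num_apps,
			      uint32_t i)
{
	return apps[num_apps + i];
}
```

This matches the deleted gp102 scrubber load, which with a single app reads `apps[0]` as the code offset and `apps[1]` as its size.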
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode.h
deleted file mode 100644 (file)
index d43f906..0000000
+++ /dev/null
@@ -1,161 +0,0 @@
-/*
- * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_SECBOOT_LS_UCODE_H__
-#define __NVKM_SECBOOT_LS_UCODE_H__
-
-#include <core/os.h>
-#include <core/subdev.h>
-#include <subdev/secboot.h>
-
-struct nvkm_acr;
-
-/**
- * struct ls_ucode_img_desc - descriptor of firmware image
- * @descriptor_size:           size of this descriptor
- * @image_size:                        size of the whole image
- * @tools_version:		version of the tools used to build the image
- * @app_version:		version of the LS firmware
- * @date:			build date string
- * @bootloader_start_offset:   start offset of the bootloader in ucode image
- * @bootloader_size:           size of the bootloader
- * @bootloader_imem_offset:	start offset of the bootloader in IMEM
- * @bootloader_entry_point:    entry point of the bootloader in IMEM
- * @app_start_offset:          start offset of the LS firmware
- * @app_size:                  size of the LS firmware's code and data
- * @app_imem_offset:           offset of the app in IMEM
- * @app_imem_entry:            entry point of the app in IMEM
- * @app_dmem_offset:           offset of the data in DMEM
- * @app_resident_code_offset:  offset of app code from app_start_offset
- * @app_resident_code_size:    size of the code
- * @app_resident_data_offset:  offset of data from app_start_offset
- * @app_resident_data_size:    size of data
- * @nb_overlays:		number of entries used in @load_ovl
- * @load_ovl:			start/size pair of each load overlay
- * @compressed:		whether the image is compressed
- *
- * A firmware image contains the code, data, and bootloader of a given LS
- * falcon in a single blob. This structure describes where everything is.
- *
- * This can be generated from a (bootloader, code, data) set if they have
- * been loaded separately, or come directly from a file.
- */
-struct ls_ucode_img_desc {
-       u32 descriptor_size;
-       u32 image_size;
-       u32 tools_version;
-       u32 app_version;
-       char date[64];
-       u32 bootloader_start_offset;
-       u32 bootloader_size;
-       u32 bootloader_imem_offset;
-       u32 bootloader_entry_point;
-       u32 app_start_offset;
-       u32 app_size;
-       u32 app_imem_offset;
-       u32 app_imem_entry;
-       u32 app_dmem_offset;
-       u32 app_resident_code_offset;
-       u32 app_resident_code_size;
-       u32 app_resident_data_offset;
-       u32 app_resident_data_size;
-       u32 nb_overlays;
-       struct { u32 start; u32 size; } load_ovl[64];
-       u32 compressed;
-};
-
-/**
- * struct ls_ucode_img - temporary storage for loaded LS firmwares
- * @node:              to link within lsf_ucode_mgr
- * @falcon_id:         ID of the falcon this LS firmware is for
- * @ucode_desc:                loaded or generated map of ucode_data
- * @ucode_data:                firmware payload (code and data)
- * @ucode_size:                size in bytes of data in ucode_data
- * @ucode_off:         offset of the ucode in ucode_data
- * @sig:               signature for this firmware
- * @sig_size:		size of the signature in bytes
- *
- * Preparing the WPR LS blob requires information about all the LS firmwares
- * (size, etc) to be known. This structure contains all the data of one LS
- * firmware.
- */
-struct ls_ucode_img {
-       struct list_head node;
-       enum nvkm_secboot_falcon falcon_id;
-
-       struct ls_ucode_img_desc ucode_desc;
-       u8 *ucode_data;
-       u32 ucode_size;
-       u32 ucode_off;
-
-       u8 *sig;
-       u32 sig_size;
-};
-
-/**
- * struct fw_bin_header - header of firmware files
- * @bin_magic:         always 0x3b1d14f0
- * @bin_ver:           version of the bin format
- * @bin_size:          entire image size including this header
- * @header_offset:     offset of the firmware/bootloader header in the file
- * @data_offset:       offset of the firmware/bootloader payload in the file
- * @data_size:         size of the payload
- *
- * This header is located at the beginning of the HS firmware and HS bootloader
- * files, to describe where the headers and data can be found.
- */
-struct fw_bin_header {
-       u32 bin_magic;
-       u32 bin_ver;
-       u32 bin_size;
-       u32 header_offset;
-       u32 data_offset;
-       u32 data_size;
-};
-
-/**
- * struct fw_bl_desc - firmware bootloader descriptor
- * @start_tag:         starting tag of bootloader
- * @desc_dmem_load_off:        DMEM offset of flcn_bl_dmem_desc
- * @code_off:          offset of code section
- * @code_size:         size of code section
- * @data_off:          offset of data section
- * @data_size:         size of data section
- *
- * This structure is embedded in bootloader firmware files to describe the
- * IMEM and DMEM layout expected by the bootloader.
- */
-struct fw_bl_desc {
-       u32 start_tag;
-       u32 dmem_load_off;
-       u32 code_off;
-       u32 code_size;
-       u32 data_off;
-       u32 data_size;
-};
-
-int acr_ls_ucode_load_fecs(const struct nvkm_secboot *, int,
-                          struct ls_ucode_img *);
-int acr_ls_ucode_load_gpccs(const struct nvkm_secboot *, int,
-                           struct ls_ucode_img *);
-int acr_ls_ucode_load_pmu(const struct nvkm_secboot *, int,
-                         struct ls_ucode_img *);
-int acr_ls_pmu_post_run(const struct nvkm_acr *, const struct nvkm_secboot *);
-int acr_ls_ucode_load_sec2(const struct nvkm_secboot *, int,
-                          struct ls_ucode_img *);
-int acr_ls_sec2_post_run(const struct nvkm_acr *, const struct nvkm_secboot *);
-
-#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c
deleted file mode 100644 (file)
index 821d3b2..0000000
+++ /dev/null
@@ -1,160 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-
-#include "ls_ucode.h"
-#include "acr.h"
-
-#include <core/firmware.h>
-
-#define BL_DESC_BLK_SIZE 256
-/**
- * ls_ucode_img_build() - build a ucode image and descriptor from bootloader, code and data
- *
- * @bl:		bootloader image, including 16-byte descriptor
- * @code:      LS firmware code segment
- * @data:      LS firmware data segment
- * @desc:      ucode descriptor to be written
- *
- * Return: allocated ucode image with corresponding descriptor information. desc
- *         is also updated to contain the right offsets within returned image.
- */
-static void *
-ls_ucode_img_build(const struct firmware *bl, const struct firmware *code,
-                  const struct firmware *data, struct ls_ucode_img_desc *desc)
-{
-       struct fw_bin_header *bin_hdr = (void *)bl->data;
-       struct fw_bl_desc *bl_desc = (void *)bl->data + bin_hdr->header_offset;
-       void *bl_data = (void *)bl->data + bin_hdr->data_offset;
-       u32 pos = 0;
-       void *image;
-
-       desc->bootloader_start_offset = pos;
-       desc->bootloader_size = ALIGN(bl_desc->code_size, sizeof(u32));
-       desc->bootloader_imem_offset = bl_desc->start_tag * 256;
-       desc->bootloader_entry_point = bl_desc->start_tag * 256;
-
-       pos = ALIGN(pos + desc->bootloader_size, BL_DESC_BLK_SIZE);
-       desc->app_start_offset = pos;
-       desc->app_size = ALIGN(code->size, BL_DESC_BLK_SIZE) +
-                        ALIGN(data->size, BL_DESC_BLK_SIZE);
-       desc->app_imem_offset = 0;
-       desc->app_imem_entry = 0;
-       desc->app_dmem_offset = 0;
-       desc->app_resident_code_offset = 0;
-       desc->app_resident_code_size = ALIGN(code->size, BL_DESC_BLK_SIZE);
-
-       pos = ALIGN(pos + desc->app_resident_code_size, BL_DESC_BLK_SIZE);
-       desc->app_resident_data_offset = pos - desc->app_start_offset;
-       desc->app_resident_data_size = ALIGN(data->size, BL_DESC_BLK_SIZE);
-
-       desc->image_size = ALIGN(bl_desc->code_size, BL_DESC_BLK_SIZE) +
-                          desc->app_size;
-
-       image = kzalloc(desc->image_size, GFP_KERNEL);
-       if (!image)
-               return ERR_PTR(-ENOMEM);
-
-       memcpy(image + desc->bootloader_start_offset, bl_data,
-              bl_desc->code_size);
-       memcpy(image + desc->app_start_offset, code->data, code->size);
-       memcpy(image + desc->app_start_offset + desc->app_resident_data_offset,
-              data->data, data->size);
-
-       return image;
-}
-
-/**
- * ls_ucode_img_load_gr() - load and prepare a LS GR ucode image
- *
- * Load the LS microcode, bootloader and signature and pack them into a single
- * blob. Also generate the corresponding ucode descriptor.
- */
-static int
-ls_ucode_img_load_gr(const struct nvkm_subdev *subdev, int maxver,
-                    struct ls_ucode_img *img, const char *falcon_name)
-{
-       const struct firmware *bl, *code, *data, *sig;
-       char f[64];
-       int ret;
-
-       snprintf(f, sizeof(f), "gr/%s_bl", falcon_name);
-       ret = nvkm_firmware_get(subdev, f, &bl);
-       if (ret)
-               goto error;
-
-       snprintf(f, sizeof(f), "gr/%s_inst", falcon_name);
-       ret = nvkm_firmware_get(subdev, f, &code);
-       if (ret)
-               goto free_bl;
-
-       snprintf(f, sizeof(f), "gr/%s_data", falcon_name);
-       ret = nvkm_firmware_get(subdev, f, &data);
-       if (ret)
-               goto free_inst;
-
-       snprintf(f, sizeof(f), "gr/%s_sig", falcon_name);
-       ret = nvkm_firmware_get(subdev, f, &sig);
-       if (ret)
-               goto free_data;
-
-       img->sig = kmemdup(sig->data, sig->size, GFP_KERNEL);
-       if (!img->sig) {
-               ret = -ENOMEM;
-               goto free_sig;
-       }
-       img->sig_size = sig->size;
-
-       img->ucode_data = ls_ucode_img_build(bl, code, data,
-                                            &img->ucode_desc);
-       if (IS_ERR(img->ucode_data)) {
-               kfree(img->sig);
-               ret = PTR_ERR(img->ucode_data);
-               goto free_sig;
-       }
-       img->ucode_size = img->ucode_desc.image_size;
-
-free_sig:
-       nvkm_firmware_put(sig);
-free_data:
-       nvkm_firmware_put(data);
-free_inst:
-       nvkm_firmware_put(code);
-free_bl:
-       nvkm_firmware_put(bl);
-error:
-       return ret;
-}
-
-int
-acr_ls_ucode_load_fecs(const struct nvkm_secboot *sb, int maxver,
-                      struct ls_ucode_img *img)
-{
-       return ls_ucode_img_load_gr(&sb->subdev, maxver, img, "fecs");
-}
-
-int
-acr_ls_ucode_load_gpccs(const struct nvkm_secboot *sb, int maxver,
-                       struct ls_ucode_img *img)
-{
-       return ls_ucode_img_load_gr(&sb->subdev, maxver, img, "gpccs");
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_msgqueue.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_msgqueue.c
deleted file mode 100644 (file)
index a84a999..0000000
+++ /dev/null
@@ -1,177 +0,0 @@
-/*
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-
-#include "ls_ucode.h"
-#include "acr.h"
-
-#include <core/firmware.h>
-#include <core/msgqueue.h>
-#include <subdev/pmu.h>
-#include <engine/sec2.h>
-#include <subdev/mc.h>
-#include <subdev/timer.h>
-
-/**
- * acr_ls_ucode_load_msgqueue - load and prepare a ucode img for a msgqueue fw
- *
- * Load the LS microcode, desc and signature and pack them into a single
- * blob.
- */
-static int
-acr_ls_ucode_load_msgqueue(const struct nvkm_subdev *subdev, const char *name,
-                          int maxver, struct ls_ucode_img *img)
-{
-       const struct firmware *image, *desc, *sig;
-       char f[64];
-       int ver, ret;
-
-       snprintf(f, sizeof(f), "%s/image", name);
-       ver = nvkm_firmware_get_version(subdev, f, 0, maxver, &image);
-       if (ver < 0)
-               return ver;
-       img->ucode_data = kmemdup(image->data, image->size, GFP_KERNEL);
-       nvkm_firmware_put(image);
-       if (!img->ucode_data)
-               return -ENOMEM;
-
-       snprintf(f, sizeof(f), "%s/desc", name);
-       ret = nvkm_firmware_get_version(subdev, f, ver, ver, &desc);
-       if (ret < 0)
-               return ret;
-       memcpy(&img->ucode_desc, desc->data, sizeof(img->ucode_desc));
-       img->ucode_size = ALIGN(img->ucode_desc.app_start_offset + img->ucode_desc.app_size, 256);
-       nvkm_firmware_put(desc);
-
-       snprintf(f, sizeof(f), "%s/sig", name);
-       ret = nvkm_firmware_get_version(subdev, f, ver, ver, &sig);
-       if (ret < 0)
-               return ret;
-       img->sig_size = sig->size;
-       img->sig = kmemdup(sig->data, sig->size, GFP_KERNEL);
-       nvkm_firmware_put(sig);
-       if (!img->sig)
-               return -ENOMEM;
-
-       return ver;
-}
-
-static int
-acr_ls_msgqueue_post_run(struct nvkm_msgqueue *queue,
-                        struct nvkm_falcon *falcon, u32 addr_args)
-{
-       struct nvkm_device *device = falcon->owner->device;
-       u8 buf[NVKM_MSGQUEUE_CMDLINE_SIZE];
-
-       memset(buf, 0, sizeof(buf));
-       nvkm_msgqueue_write_cmdline(queue, buf);
-       nvkm_falcon_load_dmem(falcon, buf, addr_args, sizeof(buf), 0);
-       /* rearm the queue so it will wait for the init message */
-       nvkm_msgqueue_reinit(queue);
-
-       /* Enable interrupts */
-       nvkm_falcon_wr32(falcon, 0x10, 0xff);
-       nvkm_mc_intr_mask(device, falcon->owner->index, true);
-
-       /* Start LS firmware on boot falcon */
-       nvkm_falcon_start(falcon);
-
-       return 0;
-}
-
-int
-acr_ls_ucode_load_pmu(const struct nvkm_secboot *sb, int maxver,
-                     struct ls_ucode_img *img)
-{
-       struct nvkm_pmu *pmu = sb->subdev.device->pmu;
-       int ret;
-
-       ret = acr_ls_ucode_load_msgqueue(&sb->subdev, "pmu", maxver, img);
-       if (ret)
-               return ret;
-
-       /* Allocate the PMU queue corresponding to the FW version */
-       ret = nvkm_msgqueue_new(img->ucode_desc.app_version, pmu->falcon,
-                               sb, &pmu->queue);
-       if (ret)
-               return ret;
-
-       return 0;
-}
-
-int
-acr_ls_pmu_post_run(const struct nvkm_acr *acr, const struct nvkm_secboot *sb)
-{
-       struct nvkm_device *device = sb->subdev.device;
-       struct nvkm_pmu *pmu = device->pmu;
-       u32 addr_args = pmu->falcon->data.limit - NVKM_MSGQUEUE_CMDLINE_SIZE;
-       int ret;
-
-       ret = acr_ls_msgqueue_post_run(pmu->queue, pmu->falcon, addr_args);
-       if (ret)
-               return ret;
-
-       nvkm_debug(&sb->subdev, "%s started\n",
-                  nvkm_secboot_falcon_name[acr->boot_falcon]);
-
-       return 0;
-}
-
-int
-acr_ls_ucode_load_sec2(const struct nvkm_secboot *sb, int maxver,
-                      struct ls_ucode_img *img)
-{
-       struct nvkm_sec2 *sec = sb->subdev.device->sec2;
-       int ver, ret;
-
-       ver = acr_ls_ucode_load_msgqueue(&sb->subdev, "sec2", maxver, img);
-       if (ver < 0)
-               return ver;
-
-       /* Allocate the SEC2 queue corresponding to the FW version */
-       ret = nvkm_msgqueue_new(img->ucode_desc.app_version, sec->falcon,
-                               sb, &sec->queue);
-       if (ret)
-               return ret;
-
-       return ver;
-}
-
-int
-acr_ls_sec2_post_run(const struct nvkm_acr *acr, const struct nvkm_secboot *sb)
-{
-       const struct nvkm_subdev *subdev = &sb->subdev;
-       struct nvkm_device *device = subdev->device;
-       struct nvkm_sec2 *sec = device->sec2;
-       /* on SEC arguments are always at the beginning of EMEM */
-       const u32 addr_args = 0x01000000;
-       int ret;
-
-       ret = acr_ls_msgqueue_post_run(sec->queue, sec->falcon, addr_args);
-       if (ret)
-               return ret;
-
-       nvkm_debug(&sb->subdev, "%s started\n",
-                  nvkm_secboot_falcon_name[acr->boot_falcon]);
-
-       return 0;
-}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/priv.h
deleted file mode 100644 (file)
index 959a7b2..0000000
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-
-#ifndef __NVKM_SECBOOT_PRIV_H__
-#define __NVKM_SECBOOT_PRIV_H__
-
-#include <subdev/secboot.h>
-#include <subdev/mmu.h>
-struct nvkm_gpuobj;
-
-struct nvkm_secboot_func {
-       int (*oneinit)(struct nvkm_secboot *);
-       int (*fini)(struct nvkm_secboot *, bool suspend);
-       void *(*dtor)(struct nvkm_secboot *);
-       int (*run_blob)(struct nvkm_secboot *, struct nvkm_gpuobj *,
-                       struct nvkm_falcon *);
-};
-
-int nvkm_secboot_ctor(const struct nvkm_secboot_func *, struct nvkm_acr *,
-                     struct nvkm_device *, int, struct nvkm_secboot *);
-int nvkm_secboot_falcon_reset(struct nvkm_secboot *);
-int nvkm_secboot_falcon_run(struct nvkm_secboot *);
-
-extern const struct nvkm_secboot_func gp102_secboot;
-
-struct flcn_u64 {
-       u32 lo;
-       u32 hi;
-};
-
-static inline u64 flcn64_to_u64(const struct flcn_u64 f)
-{
-       return ((u64)f.hi) << 32 | f.lo;
-}
-
-static inline struct flcn_u64 u64_to_flcn64(u64 u)
-{
-       struct flcn_u64 ret;
-
-       ret.hi = upper_32_bits(u);
-       ret.lo = lower_32_bits(u);
-
-       return ret;
-}
-
-#endif
index 413dbdd..dbb90f2 100644 (file)
@@ -393,8 +393,7 @@ static void dispc_get_reg_field(struct dispc_device *dispc,
                                enum dispc_feat_reg_field id,
                                u8 *start, u8 *end)
 {
-       if (id >= dispc->feat->num_reg_fields)
-               BUG();
+       BUG_ON(id >= dispc->feat->num_reg_fields);
 
        *start = dispc->feat->reg_fields[id].start;
        *end = dispc->feat->reg_fields[id].end;
index 41f796b..ae44ac2 100644 (file)
@@ -338,6 +338,17 @@ config DRM_PANEL_SITRONIX_ST7789V
          Say Y here if you want to enable support for the Sitronix
          ST7789V controller for 240x320 LCD panels
 
+config DRM_PANEL_SONY_ACX424AKP
+       tristate "Sony ACX424AKP DSI command mode panel"
+       depends on OF
+       depends on DRM_MIPI_DSI
+       depends on BACKLIGHT_CLASS_DEVICE
+       select VIDEOMODE_HELPERS
+       help
+         Say Y here if you want to enable the Sony ACX424 display
+         panel. This panel supports DSI in both command and video
+         mode.
+
 config DRM_PANEL_SONY_ACX565AKM
        tristate "Sony ACX565AKM panel"
        depends on GPIOLIB && OF && SPI
index 4dc7acf..7c4d3c5 100644 (file)
@@ -35,6 +35,7 @@ obj-$(CONFIG_DRM_PANEL_SHARP_LS037V7DW01) += panel-sharp-ls037v7dw01.o
 obj-$(CONFIG_DRM_PANEL_SHARP_LS043T1LE01) += panel-sharp-ls043t1le01.o
 obj-$(CONFIG_DRM_PANEL_SITRONIX_ST7701) += panel-sitronix-st7701.o
 obj-$(CONFIG_DRM_PANEL_SITRONIX_ST7789V) += panel-sitronix-st7789v.o
+obj-$(CONFIG_DRM_PANEL_SONY_ACX424AKP) += panel-sony-acx424akp.o
 obj-$(CONFIG_DRM_PANEL_SONY_ACX565AKM) += panel-sony-acx565akm.o
 obj-$(CONFIG_DRM_PANEL_TPO_TD028TTEC1) += panel-tpo-td028ttec1.o
 obj-$(CONFIG_DRM_PANEL_TPO_TD043MTEA1) += panel-tpo-td043mtea1.o
index ba3f85f..e14c14a 100644 (file)
@@ -629,6 +629,35 @@ static const struct panel_desc auo_b101xtn01 = {
        },
 };
 
+static const struct drm_display_mode auo_b116xak01_mode = {
+       .clock = 69300,
+       .hdisplay = 1366,
+       .hsync_start = 1366 + 48,
+       .hsync_end = 1366 + 48 + 32,
+       .htotal = 1366 + 48 + 32 + 10,
+       .vdisplay = 768,
+       .vsync_start = 768 + 4,
+       .vsync_end = 768 + 4 + 6,
+       .vtotal = 768 + 4 + 6 + 15,
+       .vrefresh = 60,
+       .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
+};
+
+static const struct panel_desc auo_b116xak01 = {
+       .modes = &auo_b116xak01_mode,
+       .num_modes = 1,
+       .bpc = 6,
+       .size = {
+               .width = 256,
+               .height = 144,
+       },
+       .delay = {
+               .hpd_absent_delay = 200,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+       .connector_type = DRM_MODE_CONNECTOR_eDP,
+};
+
 static const struct drm_display_mode auo_b116xw03_mode = {
        .clock = 70589,
        .hdisplay = 1366,
@@ -1008,6 +1037,38 @@ static const struct panel_desc boe_nv101wxmn51 = {
        },
 };
 
+static const struct drm_display_mode boe_nv140fhmn49_modes[] = {
+       {
+               .clock = 148500,
+               .hdisplay = 1920,
+               .hsync_start = 1920 + 48,
+               .hsync_end = 1920 + 48 + 32,
+               .htotal = 2200,
+               .vdisplay = 1080,
+               .vsync_start = 1080 + 3,
+               .vsync_end = 1080 + 3 + 5,
+               .vtotal = 1125,
+               .vrefresh = 60,
+       },
+};
+
+static const struct panel_desc boe_nv140fhmn49 = {
+       .modes = boe_nv140fhmn49_modes,
+       .num_modes = ARRAY_SIZE(boe_nv140fhmn49_modes),
+       .bpc = 6,
+       .size = {
+               .width = 309,
+               .height = 174,
+       },
+       .delay = {
+               .prepare = 210,
+               .enable = 50,
+               .unprepare = 160,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+       .connector_type = DRM_MODE_CONNECTOR_eDP,
+};
+
 static const struct drm_display_mode cdtech_s043wq26h_ct7_mode = {
        .clock = 9000,
        .hdisplay = 480,
@@ -2553,6 +2614,30 @@ static const struct panel_desc samsung_ltn140at29_301 = {
        },
 };
 
+static const struct display_timing satoz_sat050at40h12r2_timing = {
+       .pixelclock = {33300000, 33300000, 50000000},
+       .hactive = {800, 800, 800},
+       .hfront_porch = {16, 210, 354},
+       .hback_porch = {46, 46, 46},
+       .hsync_len = {1, 1, 40},
+       .vactive = {480, 480, 480},
+       .vfront_porch = {7, 22, 147},
+       .vback_porch = {23, 23, 23},
+       .vsync_len = {1, 1, 20},
+};
+
+static const struct panel_desc satoz_sat050at40h12r2 = {
+       .timings = &satoz_sat050at40h12r2_timing,
+       .num_timings = 1,
+       .bpc = 8,
+       .size = {
+               .width = 108,
+               .height = 65,
+       },
+       .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+       .connector_type = DRM_MODE_CONNECTOR_LVDS,
+};
+
 static const struct drm_display_mode sharp_ld_d5116z01b_mode = {
        .clock = 168480,
        .hdisplay = 1920,
@@ -3126,6 +3211,9 @@ static const struct of_device_id platform_of_match[] = {
                .compatible = "auo,b101xtn01",
                .data = &auo_b101xtn01,
        }, {
+               .compatible = "auo,b116xak01",
+               .data = &auo_b116xak01,
+       }, {
                .compatible = "auo,b116xw03",
                .data = &auo_b116xw03,
        }, {
@@ -3168,6 +3256,9 @@ static const struct of_device_id platform_of_match[] = {
                .compatible = "boe,nv101wxmn51",
                .data = &boe_nv101wxmn51,
        }, {
+               .compatible = "boe,nv140fhmn49",
+               .data = &boe_nv140fhmn49,
+       }, {
                .compatible = "cdtech,s043wq26h-ct7",
                .data = &cdtech_s043wq26h_ct7,
        }, {
@@ -3357,6 +3448,9 @@ static const struct of_device_id platform_of_match[] = {
                .compatible = "samsung,ltn140at29-301",
                .data = &samsung_ltn140at29_301,
        }, {
+               .compatible = "satoz,sat050at40h12r2",
+               .data = &satoz_sat050at40h12r2,
+       }, {
                .compatible = "sharp,ld-d5116z01b",
                .data = &sharp_ld_d5116z01b,
        }, {
diff --git a/drivers/gpu/drm/panel/panel-sony-acx424akp.c b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
new file mode 100644 (file)
index 0000000..de0abf7
--- /dev/null
@@ -0,0 +1,550 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * MIPI-DSI Sony ACX424AKP panel driver. This is a 480x864
+ * AMOLED panel with a command-only DSI interface.
+ *
+ * Copyright (C) Linaro Ltd. 2019
+ * Author: Linus Walleij
+ * Based on code and know-how from Marcus Lorentzon
+ * Copyright (C) ST-Ericsson SA 2010
+ */
+#include <linux/backlight.h>
+#include <linux/delay.h>
+#include <linux/gpio/consumer.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/regulator/consumer.h>
+
+#include <video/mipi_display.h>
+
+#include <drm/drm_mipi_dsi.h>
+#include <drm/drm_modes.h>
+#include <drm/drm_panel.h>
+#include <drm/drm_print.h>
+
+#define ACX424_DCS_READ_ID1            0xDA
+#define ACX424_DCS_READ_ID2            0xDB
+#define ACX424_DCS_READ_ID3            0xDC
+#define ACX424_DCS_SET_MDDI            0xAE
+
+/*
+ * Sony seems to use vendor ID 0x81
+ */
+#define DISPLAY_SONY_ACX424AKP_ID1     0x811b
+#define DISPLAY_SONY_ACX424AKP_ID2     0x811a
+/*
+ * The third ID looks like a bug: vendor IDs begin at 0x80,
+ * and vendor 0x80 with panel 0x00 looks like default values.
+ */
+#define DISPLAY_SONY_ACX424AKP_ID3     0x8000
+
+struct acx424akp {
+       struct drm_panel panel;
+       struct device *dev;
+       struct backlight_device *bl;
+       struct regulator *supply;
+       struct gpio_desc *reset_gpio;
+       bool video_mode;
+};
+
+static const struct drm_display_mode sony_acx424akp_vid_mode = {
+       .clock = 330000,
+       .hdisplay = 480,
+       .hsync_start = 480 + 15,
+       .hsync_end = 480 + 15 + 0,
+       .htotal = 480 + 15 + 0 + 15,
+       .vdisplay = 864,
+       .vsync_start = 864 + 14,
+       .vsync_end = 864 + 14 + 1,
+       .vtotal = 864 + 14 + 1 + 11,
+       .vrefresh = 60,
+       .width_mm = 48,
+       .height_mm = 84,
+       .flags = DRM_MODE_FLAG_PVSYNC,
+};
+
+/*
+ * The timings are not very helpful as the display is used in
+ * command mode using the maximum HS frequency.
+ */
+static const struct drm_display_mode sony_acx424akp_cmd_mode = {
+       .clock = 420160,
+       .hdisplay = 480,
+       .hsync_start = 480 + 154,
+       .hsync_end = 480 + 154 + 16,
+       .htotal = 480 + 154 + 16 + 32,
+       .vdisplay = 864,
+       .vsync_start = 864 + 1,
+       .vsync_end = 864 + 1 + 1,
+       .vtotal = 864 + 1 + 1 + 1,
+       /*
+        * A nominal refresh rate; experiments at the maximum "pixel"
+        * clock speed (HS clock 420 MHz) yield around 117 Hz.
+        */
+       .vrefresh = 60,
+       .width_mm = 48,
+       .height_mm = 84,
+};
+
+static inline struct acx424akp *panel_to_acx424akp(struct drm_panel *panel)
+{
+       return container_of(panel, struct acx424akp, panel);
+}
+
+#define FOSC                   20 /* 20 MHz */
+#define SCALE_FACTOR_NS_DIV_MHZ        1000
+
+static int acx424akp_set_brightness(struct backlight_device *bl)
+{
+       struct acx424akp *acx = bl_get_data(bl);
+       struct mipi_dsi_device *dsi = to_mipi_dsi_device(acx->dev);
+       int period_ns = 1023;
+       int duty_ns = bl->props.brightness;
+       u8 pwm_ratio;
+       u8 pwm_div;
+       u8 par;
+       int ret;
+
+       /* Calculate the PWM duty cycle in n/256's */
+       pwm_ratio = max(((duty_ns * 256) / period_ns) - 1, 1);
+       pwm_div = max(1,
+                     ((FOSC * period_ns) / 256) /
+                     SCALE_FACTOR_NS_DIV_MHZ);
+
+       /* Set up PWM dutycycle ONE byte (differs from the standard) */
+       DRM_DEV_DEBUG(acx->dev, "calculated duty cycle %02x\n", pwm_ratio);
+       ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_BRIGHTNESS,
+                                &pwm_ratio, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to set display PWM ratio (%d)\n",
+                             ret);
+               return ret;
+       }
+
+       /*
+        * Sequence to write PWMDIV:
+        *      address         data
+        *      0xF3            0xAA   CMD2 Unlock
+        *      0x00            0x01   Enter CMD2 page 0
+        *      0X7D            0x01   No reload MTP of CMD2 P1
+        *      0x22            PWMDIV
+        *      0x7F            0xAA   CMD2 page 1 lock
+        */
+       par = 0xaa;
+       ret = mipi_dsi_dcs_write(dsi, 0xf3, &par, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to unlock CMD 2 (%d)\n",
+                             ret);
+               return ret;
+       }
+       par = 0x01;
+       ret = mipi_dsi_dcs_write(dsi, 0x00, &par, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to enter page 1 (%d)\n",
+                             ret);
+               return ret;
+       }
+       par = 0x01;
+       ret = mipi_dsi_dcs_write(dsi, 0x7d, &par, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to disable MTP reload (%d)\n",
+                             ret);
+               return ret;
+       }
+       ret = mipi_dsi_dcs_write(dsi, 0x22, &pwm_div, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to set PWM divisor (%d)\n",
+                             ret);
+               return ret;
+       }
+       par = 0xaa;
+       ret = mipi_dsi_dcs_write(dsi, 0x7f, &par, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to lock CMD 2 (%d)\n",
+                             ret);
+               return ret;
+       }
+
+       /* Enable backlight */
+       par = 0x24;
+       ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_WRITE_CONTROL_DISPLAY,
+                                &par, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to enable display backlight (%d)\n",
+                             ret);
+               return ret;
+       }
+
+       return 0;
+}
+
+static const struct backlight_ops acx424akp_bl_ops = {
+       .update_status = acx424akp_set_brightness,
+};
+
+static int acx424akp_read_id(struct acx424akp *acx)
+{
+       struct mipi_dsi_device *dsi = to_mipi_dsi_device(acx->dev);
+       u8 vendor, version, panel;
+       u16 val;
+       int ret;
+
+       ret = mipi_dsi_dcs_read(dsi, ACX424_DCS_READ_ID1, &vendor, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev, "could not read vendor ID byte\n");
+               return ret;
+       }
+       ret = mipi_dsi_dcs_read(dsi, ACX424_DCS_READ_ID2, &version, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev, "could not read device version byte\n");
+               return ret;
+       }
+       ret = mipi_dsi_dcs_read(dsi, ACX424_DCS_READ_ID3, &panel, 1);
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev, "could not read panel ID byte\n");
+               return ret;
+       }
+
+       if (vendor == 0x00) {
+               DRM_DEV_ERROR(acx->dev, "device vendor ID is zero\n");
+               return -ENODEV;
+       }
+
+       val = (vendor << 8) | panel;
+       switch (val) {
+       case DISPLAY_SONY_ACX424AKP_ID1:
+       case DISPLAY_SONY_ACX424AKP_ID2:
+       case DISPLAY_SONY_ACX424AKP_ID3:
+               DRM_DEV_INFO(acx->dev,
+                            "MTP vendor: %02x, version: %02x, panel: %02x\n",
+                            vendor, version, panel);
+               break;
+       default:
+               DRM_DEV_INFO(acx->dev,
+                            "unknown vendor: %02x, version: %02x, panel: %02x\n",
+                            vendor, version, panel);
+               break;
+       }
+
+       return 0;
+}
+
+static int acx424akp_power_on(struct acx424akp *acx)
+{
+       int ret;
+
+       ret = regulator_enable(acx->supply);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to enable supply (%d)\n", ret);
+               return ret;
+       }
+
+       /* Assert RESET */
+       gpiod_set_value_cansleep(acx->reset_gpio, 1);
+       udelay(20);
+       /* De-assert RESET */
+       gpiod_set_value_cansleep(acx->reset_gpio, 0);
+       usleep_range(11000, 20000);
+
+       return 0;
+}
+
+static void acx424akp_power_off(struct acx424akp *acx)
+{
+       /* Assert RESET */
+       gpiod_set_value_cansleep(acx->reset_gpio, 1);
+       usleep_range(11000, 20000);
+
+       regulator_disable(acx->supply);
+}
+
+static int acx424akp_prepare(struct drm_panel *panel)
+{
+       struct acx424akp *acx = panel_to_acx424akp(panel);
+       struct mipi_dsi_device *dsi = to_mipi_dsi_device(acx->dev);
+       const u8 mddi = 3;
+       int ret;
+
+       ret = acx424akp_power_on(acx);
+       if (ret)
+               return ret;
+
+       ret = acx424akp_read_id(acx);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to read panel ID (%d)\n", ret);
+               goto err_power_off;
+       }
+
+       /* Enable tearing mode: send TE (tearing effect) at VBLANK */
+       ret = mipi_dsi_dcs_set_tear_on(dsi,
+                                      MIPI_DSI_DCS_TEAR_MODE_VBLANK);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to enable vblank TE (%d)\n",
+                             ret);
+               goto err_power_off;
+       }
+
+       /*
+        * Set MDDI
+        *
+        * This presumably deactivates the Qualcomm MDDI interface and
+        * selects DSI; similar code is found in other drivers such as the
+        * Sharp LS043T1LE01, which makes us suspect that this panel may be
+        * using a Novatek NT35565 or similar display driver chip that shares
+        * this command. Due to the lack of documentation we cannot know for
+        * sure.
+        */
+       ret = mipi_dsi_dcs_write(dsi, ACX424_DCS_SET_MDDI,
+                                &mddi, sizeof(mddi));
+       if (ret < 0) {
+               DRM_DEV_ERROR(acx->dev, "failed to set MDDI (%d)\n", ret);
+               goto err_power_off;
+       }
+
+       /* Exit sleep mode */
+       ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to exit sleep mode (%d)\n",
+                             ret);
+               goto err_power_off;
+       }
+       msleep(140);
+
+       ret = mipi_dsi_dcs_set_display_on(dsi);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to turn display on (%d)\n",
+                             ret);
+               goto err_power_off;
+       }
+       if (acx->video_mode) {
+               /* In video mode turn peripheral on */
+               ret = mipi_dsi_turn_on_peripheral(dsi);
+               if (ret) {
+                       DRM_DEV_ERROR(acx->dev,
+                                     "failed to turn on peripheral (%d)\n",
+                                     ret);
+                       goto err_power_off;
+               }
+       }
+
+       acx->bl->props.power = FB_BLANK_NORMAL;
+
+       return 0;
+
+err_power_off:
+       acx424akp_power_off(acx);
+       return ret;
+}
+
+static int acx424akp_unprepare(struct drm_panel *panel)
+{
+       struct acx424akp *acx = panel_to_acx424akp(panel);
+       struct mipi_dsi_device *dsi = to_mipi_dsi_device(acx->dev);
+       u8 par;
+       int ret;
+
+       /* Disable backlight */
+       par = 0x00;
+       ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_WRITE_CONTROL_DISPLAY,
+                                &par, 1);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev,
+                             "failed to disable display backlight (%d)\n",
+                             ret);
+               return ret;
+       }
+
+       ret = mipi_dsi_dcs_set_display_off(dsi);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to turn display off (%d)\n",
+                             ret);
+               return ret;
+       }
+
+       /* Enter sleep mode */
+       ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
+       if (ret) {
+               DRM_DEV_ERROR(acx->dev, "failed to enter sleep mode (%d)\n",
+                             ret);
+               return ret;
+       }
+       msleep(85);
+
+       acx424akp_power_off(acx);
+       acx->bl->props.power = FB_BLANK_POWERDOWN;
+
+       return 0;
+}
+
+static int acx424akp_enable(struct drm_panel *panel)
+{
+       struct acx424akp *acx = panel_to_acx424akp(panel);
+
+       /*
+        * The backlight is on as long as the display is on,
+        * so there is no need to call backlight_enable() here.
+        */
+       acx->bl->props.power = FB_BLANK_UNBLANK;
+
+       return 0;
+}
+
+static int acx424akp_disable(struct drm_panel *panel)
+{
+       struct acx424akp *acx = panel_to_acx424akp(panel);
+
+       /*
+        * The backlight is on as long as the display is on,
+        * so there is no need to call backlight_disable() here.
+        */
+       acx->bl->props.power = FB_BLANK_NORMAL;
+
+       return 0;
+}
+
+static int acx424akp_get_modes(struct drm_panel *panel,
+                              struct drm_connector *connector)
+{
+       struct acx424akp *acx = panel_to_acx424akp(panel);
+       struct drm_display_mode *mode;
+
+       if (acx->video_mode)
+               mode = drm_mode_duplicate(connector->dev,
+                                         &sony_acx424akp_vid_mode);
+       else
+               mode = drm_mode_duplicate(connector->dev,
+                                         &sony_acx424akp_cmd_mode);
+       if (!mode) {
+               DRM_ERROR("bad mode or failed to add mode\n");
+               return -EINVAL;
+       }
+       drm_mode_set_name(mode);
+       mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+
+       connector->display_info.width_mm = mode->width_mm;
+       connector->display_info.height_mm = mode->height_mm;
+
+       drm_mode_probed_add(connector, mode);
+
+       return 1; /* Number of modes */
+}
+
+static const struct drm_panel_funcs acx424akp_drm_funcs = {
+       .disable = acx424akp_disable,
+       .unprepare = acx424akp_unprepare,
+       .prepare = acx424akp_prepare,
+       .enable = acx424akp_enable,
+       .get_modes = acx424akp_get_modes,
+};
+
+static int acx424akp_probe(struct mipi_dsi_device *dsi)
+{
+       struct device *dev = &dsi->dev;
+       struct acx424akp *acx;
+       int ret;
+
+       acx = devm_kzalloc(dev, sizeof(struct acx424akp), GFP_KERNEL);
+       if (!acx)
+               return -ENOMEM;
+       acx->video_mode = of_property_read_bool(dev->of_node,
+                                               "enforce-video-mode");
+
+       mipi_dsi_set_drvdata(dsi, acx);
+       acx->dev = dev;
+
+       dsi->lanes = 2;
+       dsi->format = MIPI_DSI_FMT_RGB888;
+       /*
+        * FIXME: these come from the ST-Ericsson vendor driver for the
+        * HREF520 and seem to reflect limitations in the PLLs on that
+        * platform; if you have the datasheet, please cross-check the
+        * actual max rates.
+        */
+       dsi->lp_rate = 19200000;
+       dsi->hs_rate = 420160000;
+
+       if (acx->video_mode)
+               /* Burst mode using event for sync */
+               dsi->mode_flags =
+                       MIPI_DSI_MODE_VIDEO |
+                       MIPI_DSI_MODE_VIDEO_BURST;
+       else
+               dsi->mode_flags =
+                       MIPI_DSI_CLOCK_NON_CONTINUOUS |
+                       MIPI_DSI_MODE_EOT_PACKET;
+
+       acx->supply = devm_regulator_get(dev, "vddi");
+       if (IS_ERR(acx->supply))
+               return PTR_ERR(acx->supply);
+
+       /* This asserts RESET by default */
+       acx->reset_gpio = devm_gpiod_get_optional(dev, "reset",
+                                                 GPIOD_OUT_HIGH);
+       if (IS_ERR(acx->reset_gpio)) {
+               ret = PTR_ERR(acx->reset_gpio);
+               if (ret != -EPROBE_DEFER)
+                       DRM_DEV_ERROR(dev, "failed to request GPIO (%d)\n",
+                                     ret);
+               return ret;
+       }
+
+       drm_panel_init(&acx->panel, dev, &acx424akp_drm_funcs,
+                      DRM_MODE_CONNECTOR_DSI);
+
+       acx->bl = devm_backlight_device_register(dev, "acx424akp", dev, acx,
+                                                &acx424akp_bl_ops, NULL);
+       if (IS_ERR(acx->bl)) {
+               DRM_DEV_ERROR(dev, "failed to register backlight device\n");
+               return PTR_ERR(acx->bl);
+       }
+       acx->bl->props.max_brightness = 1023;
+       acx->bl->props.brightness = 512;
+       acx->bl->props.power = FB_BLANK_POWERDOWN;
+
+       ret = drm_panel_add(&acx->panel);
+       if (ret < 0)
+               return ret;
+
+       ret = mipi_dsi_attach(dsi);
+       if (ret < 0) {
+               drm_panel_remove(&acx->panel);
+               return ret;
+       }
+
+       return 0;
+}
+
+static int acx424akp_remove(struct mipi_dsi_device *dsi)
+{
+       struct acx424akp *acx = mipi_dsi_get_drvdata(dsi);
+
+       mipi_dsi_detach(dsi);
+       drm_panel_remove(&acx->panel);
+
+       return 0;
+}
+
+static const struct of_device_id acx424akp_of_match[] = {
+       { .compatible = "sony,acx424akp" },
+       { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, acx424akp_of_match);
+
+static struct mipi_dsi_driver acx424akp_driver = {
+       .probe = acx424akp_probe,
+       .remove = acx424akp_remove,
+       .driver = {
+               .name = "panel-sony-acx424akp",
+               .of_match_table = acx424akp_of_match,
+       },
+};
+module_mipi_dsi_driver(acx424akp_driver);
+
+MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
+MODULE_DESCRIPTION("MIPI-DSI Sony acx424akp Panel Driver");
+MODULE_LICENSE("GPL v2");
index d411eb6..a9ed088 100644 (file)
@@ -542,12 +542,14 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
 {
        struct panfrost_device *pfdev = panfrost_priv->pfdev;
        struct panfrost_job_slot *js = pfdev->js;
-       struct drm_sched_rq *rq;
+       struct drm_gpu_scheduler *sched;
        int ret, i;
 
        for (i = 0; i < NUM_JOB_SLOTS; i++) {
-               rq = &js->queue[i].sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-               ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i], &rq, 1, NULL);
+               sched = &js->queue[i].sched;
+               ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
+                                           DRM_SCHED_PRIORITY_NORMAL, &sched,
+                                           1, NULL);
                if (WARN_ON(ret))
                        return ret;
        }
index da2c9e2..be58369 100644 (file)
@@ -244,9 +244,8 @@ static void atombios_blank_crtc(struct drm_crtc *crtc, int state)
 
        atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
 
-       if (ASIC_IS_DCE8(rdev)) {
+       if (ASIC_IS_DCE8(rdev))
                WREG32(vga_control_regs[radeon_crtc->crtc_id], vga_control);
-       }
 }
 
 static void atombios_powergate_crtc(struct drm_crtc *crtc, int state)
index 911735f..15b00a3 100644 (file)
@@ -813,9 +813,8 @@ void radeon_dp_link_train(struct drm_encoder *encoder,
        dp_info.use_dpencoder = true;
        index = GetIndexIntoMasterTable(COMMAND, DPEncoderService);
        if (atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) {
-               if (crev > 1) {
+               if (crev > 1)
                        dp_info.use_dpencoder = false;
-               }
        }
 
        dp_info.enc_id = 0;
index 2a7be5d..cc5ee1b 100644 (file)
@@ -1885,11 +1885,10 @@ atombios_set_encoder_crtc_source(struct drm_encoder *encoder)
                        if (ASIC_IS_AVIVO(rdev))
                                args.v1.ucCRTC = radeon_crtc->crtc_id;
                        else {
-                               if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DAC1) {
+                               if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DAC1)
                                        args.v1.ucCRTC = radeon_crtc->crtc_id;
-                               } else {
+                               else
                                        args.v1.ucCRTC = radeon_crtc->crtc_id << 2;
-                               }
                        }
                        switch (radeon_encoder->encoder_id) {
                        case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
@@ -2234,9 +2233,9 @@ assigned:
                DRM_ERROR("Got encoder index incorrect - returning 0\n");
                return 0;
        }
-       if (rdev->mode_info.active_encoders & (1 << enc_idx)) {
+       if (rdev->mode_info.active_encoders & (1 << enc_idx))
                DRM_ERROR("chosen encoder in use %d\n", enc_idx);
-       }
+
        rdev->mode_info.active_encoders |= (1 << enc_idx);
        return enc_idx;
 }
index a570ce4..ab4d210 100644 (file)
@@ -68,11 +68,6 @@ static int radeon_process_i2c_ch(struct radeon_i2c_chan *chan,
                        memcpy(&out, &buf[1], num);
                args.lpI2CDataOut = cpu_to_le16(out);
        } else {
-               if (num > ATOM_MAX_HW_I2C_READ) {
-                       DRM_ERROR("hw i2c: tried to read too many bytes (%d vs 255)\n", num);
-                       r = -EINVAL;
-                       goto done;
-               }
                args.ucRegIndex = 0;
                args.lpI2CDataOut = 0;
        }
index 4fa488c..5c42877 100644 (file)
@@ -8137,7 +8137,7 @@ static void cik_uvd_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable uvd here.
                 */
-               rdev->has_uvd = 0;
+               rdev->has_uvd = false;
                return;
        }
        rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
@@ -8209,7 +8209,7 @@ static void cik_vce_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable vce here.
                 */
-               rdev->has_vce = 0;
+               rdev->has_vce = false;
                return;
        }
        rdev->ring[TN_RING_TYPE_VCE1_INDEX].ring_obj = NULL;
index 35b9dc6..68403e7 100644 (file)
@@ -333,7 +333,7 @@ void cik_sdma_enable(struct radeon_device *rdev, bool enable)
        u32 me_cntl, reg_offset;
        int i;
 
-       if (enable == false) {
+       if (!enable) {
                cik_sdma_gfx_stop(rdev);
                cik_sdma_rlc_stop(rdev);
        }
index 683c790..14d90dc 100644 (file)
@@ -4945,7 +4945,7 @@ static void evergreen_uvd_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable uvd here.
                 */
-               rdev->has_uvd = 0;
+               rdev->has_uvd = false;
                return;
        }
        rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
index a99442b..02feb08 100644 (file)
@@ -2017,7 +2017,7 @@ static void cayman_uvd_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable uvd here.
                 */
-               rdev->has_uvd = 0;
+               rdev->has_uvd = false;
                return;
        }
        rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
@@ -2085,7 +2085,7 @@ static void cayman_vce_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable vce here.
                 */
-               rdev->has_vce = 0;
+               rdev->has_vce = false;
                return;
        }
        rdev->ring[TN_RING_TYPE_VCE1_INDEX].ring_obj = NULL;
index 29c966f..24c8db6 100644 (file)
@@ -1823,9 +1823,9 @@ static int r100_packet0_check(struct radeon_cs_parser *p,
        case RADEON_PP_TXFORMAT_2:
                i = (reg - RADEON_PP_TXFORMAT_0) / 24;
                if (idx_value & RADEON_TXFORMAT_NON_POWER2) {
-                       track->textures[i].use_pitch = 1;
+                       track->textures[i].use_pitch = true;
                } else {
-                       track->textures[i].use_pitch = 0;
+                       track->textures[i].use_pitch = false;
                        track->textures[i].width = 1 << ((idx_value & RADEON_TXFORMAT_WIDTH_MASK) >> RADEON_TXFORMAT_WIDTH_SHIFT);
                        track->textures[i].height = 1 << ((idx_value & RADEON_TXFORMAT_HEIGHT_MASK) >> RADEON_TXFORMAT_HEIGHT_SHIFT);
                }
@@ -2387,12 +2387,12 @@ void r100_cs_track_clear(struct radeon_device *rdev, struct r100_cs_track *track
                else
                        track->num_texture = 6;
                track->maxy = 2048;
-               track->separate_cube = 1;
+               track->separate_cube = true;
        } else {
                track->num_cb = 4;
                track->num_texture = 16;
                track->maxy = 4096;
-               track->separate_cube = 0;
+               track->separate_cube = false;
                track->aaresolve = false;
                track->aa.robj = NULL;
        }
@@ -2815,7 +2815,7 @@ void r100_vga_set_state(struct radeon_device *rdev, bool state)
        uint32_t temp;
 
        temp = RREG32(RADEON_CONFIG_CNTL);
-       if (state == false) {
+       if (!state) {
                temp &= ~RADEON_CFG_VGA_RAM_EN;
                temp |= RADEON_CFG_VGA_IO_DIS;
        } else {
index d2e51a9..d9a33ca 100644 (file)
@@ -3053,7 +3053,7 @@ static void r600_uvd_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable uvd here.
                 */
-               rdev->has_uvd = 0;
+               rdev->has_uvd = false;
                return;
        }
        rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
@@ -3191,7 +3191,7 @@ void r600_vga_set_state(struct radeon_device *rdev, bool state)
        uint32_t temp;
 
        temp = RREG32(CONFIG_CNTL);
-       if (state == false) {
+       if (!state) {
                temp &= ~(1<<0);
                temp |= (1<<1);
        } else {
index 072e6da..848ef68 100644 (file)
@@ -570,7 +570,7 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
                path_size += le16_to_cpu(path->usSize);
 
                if (device_support & le16_to_cpu(path->usDeviceTag)) {
-                       uint8_t con_obj_id, con_obj_num, con_obj_type;
+                       uint8_t con_obj_id, con_obj_num;
 
                        con_obj_id =
                            (le16_to_cpu(path->usConnObjectId) & OBJECT_ID_MASK)
@@ -578,9 +578,6 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
                        con_obj_num =
                            (le16_to_cpu(path->usConnObjectId) & ENUM_ID_MASK)
                            >> ENUM_ID_SHIFT;
-                       con_obj_type =
-                           (le16_to_cpu(path->usConnObjectId) &
-                            OBJECT_TYPE_MASK) >> OBJECT_TYPE_SHIFT;
 
                        /* TODO CV support */
                        if (le16_to_cpu(path->usDeviceTag) ==
@@ -648,15 +645,7 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
                        router.ddc_valid = false;
                        router.cd_valid = false;
                        for (j = 0; j < ((le16_to_cpu(path->usSize) - 8) / 2); j++) {
-                               uint8_t grph_obj_id, grph_obj_num, grph_obj_type;
-
-                               grph_obj_id =
-                                   (le16_to_cpu(path->usGraphicObjIds[j]) &
-                                    OBJECT_ID_MASK) >> OBJECT_ID_SHIFT;
-                               grph_obj_num =
-                                   (le16_to_cpu(path->usGraphicObjIds[j]) &
-                                    ENUM_ID_MASK) >> ENUM_ID_SHIFT;
-                               grph_obj_type =
+                               uint8_t grph_obj_type =
                                    (le16_to_cpu(path->usGraphicObjIds[j]) &
                                     OBJECT_TYPE_MASK) >> OBJECT_TYPE_SHIFT;
 
index c84d965..c42f73f 100644 (file)
@@ -664,17 +664,17 @@ bool radeon_get_bios(struct radeon_device *rdev)
        uint16_t tmp;
 
        r = radeon_atrm_get_bios(rdev);
-       if (r == false)
+       if (!r)
                r = radeon_acpi_vfct_bios(rdev);
-       if (r == false)
+       if (!r)
                r = igp_read_bios_from_vram(rdev);
-       if (r == false)
+       if (!r)
                r = radeon_read_bios(rdev);
-       if (r == false)
+       if (!r)
                r = radeon_read_disabled_bios(rdev);
-       if (r == false)
+       if (!r)
                r = radeon_read_platform_bios(rdev);
-       if (r == false || rdev->bios == NULL) {
+       if (!r || rdev->bios == NULL) {
                DRM_ERROR("Unable to locate a BIOS ROM\n");
                rdev->bios = NULL;
                return false;
index 0851e68..fe12d9d 100644 (file)
@@ -440,7 +440,7 @@ radeon_connector_analog_encoder_conflict_solve(struct drm_connector *connector,
                                if (radeon_conflict->use_digital)
                                        continue;
 
-                               if (priority == true) {
+                               if (priority) {
                                        DRM_DEBUG_KMS("1: conflicting encoders switching off %s\n",
                                                      conflict->name);
                                        DRM_DEBUG_KMS("in favor of %s\n",
@@ -700,9 +700,9 @@ static int radeon_connector_set_property(struct drm_connector *connector, struct
                        else
                                ret = radeon_legacy_get_tmds_info_from_combios(radeon_encoder, tmds);
                }
-               if (val == 1 || ret == false) {
+               if (val == 1 || !ret)
                        radeon_legacy_get_tmds_info_from_table(radeon_encoder, tmds);
-               }
+
                radeon_property_change_mode(&radeon_encoder->base);
        }
 
index 962575e..856526c 100644 (file)
@@ -847,11 +847,11 @@ static bool radeon_setup_enc_conn(struct drm_device *dev)
        if (rdev->bios) {
                if (rdev->is_atom_bios) {
                        ret = radeon_get_atom_connector_info_from_supported_devices_table(dev);
-                       if (ret == false)
+                       if (!ret)
                                ret = radeon_get_atom_connector_info_from_object_table(dev);
                } else {
                        ret = radeon_get_legacy_connector_info_from_bios(dev);
-                       if (ret == false)
+                       if (!ret)
                                ret = radeon_get_legacy_connector_info_from_table(dev);
                }
        } else {
index ee28f5b..28eef92 100644 (file)
@@ -518,7 +518,7 @@ static bool radeon_mst_mode_fixup(struct drm_encoder *encoder,
 
        mst_enc = radeon_encoder->enc_priv;
 
-       mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp);
+       mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp, false);
 
        mst_enc->primary->active_device = mst_enc->primary->devices & mst_enc->connector->devices;
        DRM_DEBUG_KMS("setting active device to %08x from %08x %08x for encoder %d\n",
index a33b195..44d060f 100644 (file)
@@ -1712,7 +1712,7 @@ static struct radeon_encoder_int_tmds *radeon_legacy_get_tmds_info(struct radeon
        else
                ret = radeon_legacy_get_tmds_info_from_combios(encoder, tmds);
 
-       if (ret == false)
+       if (!ret)
                radeon_legacy_get_tmds_info_from_table(encoder, tmds);
 
        return tmds;
@@ -1735,7 +1735,7 @@ static struct radeon_encoder_ext_tmds *radeon_legacy_get_ext_tmds_info(struct ra
 
        ret = radeon_legacy_get_ext_tmds_info_from_combios(encoder, tmds);
 
-       if (ret == false)
+       if (!ret)
                radeon_legacy_get_ext_tmds_info_from_table(encoder, tmds);
 
        return tmds;
index b37121f..8c5d6fd 100644 (file)
@@ -1789,7 +1789,7 @@ static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish
        u32 stat_crtc = 0;
        bool in_vbl = radeon_pm_in_vbl(rdev);
 
-       if (in_vbl == false)
+       if (!in_vbl)
                DRM_DEBUG_DRIVER("not in vbl for pm change %08x at %s\n", stat_crtc,
                         finish ? "exit" : "entry");
        return in_vbl;
index 59db54a..5e80064 100644 (file)
@@ -388,9 +388,9 @@ int radeon_vce_get_create_msg(struct radeon_device *rdev, int ring,
                ib.ptr[i] = cpu_to_le32(0x0);
 
        r = radeon_ib_schedule(rdev, &ib, NULL, false);
-       if (r) {
+       if (r)
                DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
-       }
+
 
        if (fence)
                *fence = radeon_fence_ref(ib.fence);
index e0ad547..f60fae0 100644 (file)
@@ -296,9 +296,9 @@ struct radeon_bo_va *radeon_vm_bo_find(struct radeon_vm *vm,
        struct radeon_bo_va *bo_va;
 
        list_for_each_entry(bo_va, &bo->va, bo_list) {
-               if (bo_va->vm == vm) {
+               if (bo_va->vm == vm)
                        return bo_va;
-               }
+
        }
        return NULL;
 }
@@ -323,9 +323,9 @@ struct radeon_bo_va *radeon_vm_bo_add(struct radeon_device *rdev,
        struct radeon_bo_va *bo_va;
 
        bo_va = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL);
-       if (bo_va == NULL) {
+       if (bo_va == NULL)
                return NULL;
-       }
+
        bo_va->vm = vm;
        bo_va->bo = bo;
        bo_va->it.start = 0;
@@ -947,9 +947,9 @@ int radeon_vm_bo_update(struct radeon_device *rdev,
 
        if (mem) {
                addr = (u64)mem->start << PAGE_SHIFT;
-               if (mem->mem_type != TTM_PL_SYSTEM) {
+               if (mem->mem_type != TTM_PL_SYSTEM)
                        bo_va->flags |= RADEON_VM_PAGE_VALID;
-               }
+
                if (mem->mem_type == TTM_PL_TT) {
                        bo_va->flags |= RADEON_VM_PAGE_SYSTEM;
                        if (!(bo_va->bo->flags & (RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC)))
@@ -1233,9 +1233,9 @@ void radeon_vm_fini(struct radeon_device *rdev, struct radeon_vm *vm)
        struct radeon_bo_va *bo_va, *tmp;
        int i, r;
 
-       if (!RB_EMPTY_ROOT(&vm->va.rb_root)) {
+       if (!RB_EMPTY_ROOT(&vm->va.rb_root))
                dev_err(rdev->dev, "still active bo inside vm\n");
-       }
+
        rbtree_postorder_for_each_entry_safe(bo_va, tmp,
                                             &vm->va.rb_root, it.rb) {
                interval_tree_remove(&bo_va->it, &vm->va);
index 3fc461d..21f653a 100644 (file)
@@ -1703,7 +1703,7 @@ static void rv770_uvd_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable uvd here.
                 */
-               rdev->has_uvd = 0;
+               rdev->has_uvd = false;
                return;
        }
        rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
index 8788a05..93dcab5 100644 (file)
@@ -6472,7 +6472,7 @@ static void si_uvd_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable uvd here.
                 */
-               rdev->has_uvd = 0;
+               rdev->has_uvd = false;
                return;
        }
        rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_obj = NULL;
@@ -6539,7 +6539,7 @@ static void si_vce_init(struct radeon_device *rdev)
                 * there. So it is pointless to try to go through that code
                 * hence why we disable vce here.
                 */
-               rdev->has_vce = 0;
+               rdev->has_vce = false;
                return;
        }
        rdev->ring[TN_RING_TYPE_VCE1_INDEX].ring_obj = NULL;
index 961519c..8ffa4fb 100644 (file)
@@ -590,9 +590,8 @@ static void __rcar_lvds_atomic_enable(struct drm_bridge *bridge,
 }
 
 static void rcar_lvds_atomic_enable(struct drm_bridge *bridge,
-                                   struct drm_bridge_state *old_bridge_state)
+                                   struct drm_atomic_state *state)
 {
-       struct drm_atomic_state *state = old_bridge_state->base.state;
        struct drm_connector *connector;
        struct drm_crtc *crtc;
 
@@ -604,7 +603,7 @@ static void rcar_lvds_atomic_enable(struct drm_bridge *bridge,
 }
 
 static void rcar_lvds_atomic_disable(struct drm_bridge *bridge,
-                                    struct drm_bridge_state *old_bridge_state)
+                                    struct drm_atomic_state *state)
 {
        struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge);
 
@@ -619,8 +618,7 @@ static void rcar_lvds_atomic_disable(struct drm_bridge *bridge,
 
        /* Disable the companion LVDS encoder in dual-link mode. */
        if (lvds->link_type != RCAR_LVDS_SINGLE_LINK && lvds->companion)
-               lvds->companion->funcs->atomic_disable(lvds->companion,
-                                                      old_bridge_state);
+               lvds->companion->funcs->atomic_disable(lvds->companion, state);
 
        clk_disable_unprepare(lvds->clocks.mod);
 }
index 461a7a8..2e3a058 100644 (file)
  * submit to HW ring.
  *
  * @entity: scheduler entity to init
- * @rq_list: the list of run queue on which jobs from this
+ * @priority: priority of the entity
+ * @sched_list: the list of drm scheds on which jobs from this
  *           entity can be submitted
- * @num_rq_list: number of run queue in rq_list
  * @num_sched_list: number of drm scheds in sched_list
  * @guilty: atomic_t set to 1 when a job on this queue
  *          is found to be guilty causing a timeout
  *
  * Returns 0 on success or a negative error code on failure.
  */
 int drm_sched_entity_init(struct drm_sched_entity *entity,
-                         struct drm_sched_rq **rq_list,
-                         unsigned int num_rq_list,
+                         enum drm_sched_priority priority,
+                         struct drm_gpu_scheduler **sched_list,
+                         unsigned int num_sched_list,
                          atomic_t *guilty)
 {
-       int i;
-
-       if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
+       if (!(entity && sched_list && (num_sched_list == 0 || sched_list[0])))
                return -EINVAL;
 
        memset(entity, 0, sizeof(struct drm_sched_entity));
        INIT_LIST_HEAD(&entity->list);
        entity->rq = NULL;
        entity->guilty = guilty;
-       entity->num_rq_list = num_rq_list;
-       entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
-                               GFP_KERNEL);
-       if (!entity->rq_list)
-               return -ENOMEM;
-
-       init_completion(&entity->entity_idle);
-
-       for (i = 0; i < num_rq_list; ++i)
-               entity->rq_list[i] = rq_list[i];
+       entity->num_sched_list = num_sched_list;
+       entity->priority = priority;
+       entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
+       entity->last_scheduled = NULL;
 
-       if (num_rq_list)
-               entity->rq = rq_list[0];
+       if (num_sched_list)
+               entity->rq = &sched_list[0]->sched_rq[entity->priority];
 
-       entity->last_scheduled = NULL;
+       init_completion(&entity->entity_idle);
 
        spin_lock_init(&entity->rq_lock);
        spsc_queue_init(&entity->job_queue);
@@ -139,10 +133,10 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
        unsigned int min_jobs = UINT_MAX, num_jobs;
        int i;
 
-       for (i = 0; i < entity->num_rq_list; ++i) {
-               struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
+       for (i = 0; i < entity->num_sched_list; ++i) {
+               struct drm_gpu_scheduler *sched = entity->sched_list[i];
 
-               if (!entity->rq_list[i]->sched->ready) {
+               if (!entity->sched_list[i]->ready) {
                        DRM_WARN("sched%s is not ready, skipping", sched->name);
                        continue;
                }
@@ -150,7 +144,7 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
                num_jobs = atomic_read(&sched->num_jobs);
                if (num_jobs < min_jobs) {
                        min_jobs = num_jobs;
-                       rq = entity->rq_list[i];
+                       rq = &entity->sched_list[i]->sched_rq[entity->priority];
                }
        }
 
@@ -308,7 +302,6 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity)
 
        dma_fence_put(entity->last_scheduled);
        entity->last_scheduled = NULL;
-       kfree(entity->rq_list);
 }
 EXPORT_SYMBOL(drm_sched_entity_fini);
 
@@ -354,15 +347,6 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
 }
 
 /**
- * drm_sched_entity_set_rq_priority - helper for drm_sched_entity_set_priority
- */
-static void drm_sched_entity_set_rq_priority(struct drm_sched_rq **rq,
-                                            enum drm_sched_priority priority)
-{
-       *rq = &(*rq)->sched->sched_rq[priority];
-}
-
-/**
  * drm_sched_entity_set_priority - Sets priority of the entity
  *
  * @entity: scheduler entity
@@ -373,19 +357,8 @@ static void drm_sched_entity_set_rq_priority(struct drm_sched_rq **rq,
 void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
                                   enum drm_sched_priority priority)
 {
-       unsigned int i;
-
        spin_lock(&entity->rq_lock);
-
-       for (i = 0; i < entity->num_rq_list; ++i)
-               drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
-
-       if (entity->rq) {
-               drm_sched_rq_remove_entity(entity->rq, entity);
-               drm_sched_entity_set_rq_priority(&entity->rq, priority);
-               drm_sched_rq_add_entity(entity->rq, entity);
-       }
-
+       entity->priority = priority;
        spin_unlock(&entity->rq_lock);
 }
 EXPORT_SYMBOL(drm_sched_entity_set_priority);
@@ -490,20 +463,20 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
        struct dma_fence *fence;
        struct drm_sched_rq *rq;
 
-       if (spsc_queue_count(&entity->job_queue) || entity->num_rq_list <= 1)
+       if (spsc_queue_count(&entity->job_queue) || entity->num_sched_list <= 1)
                return;
 
        fence = READ_ONCE(entity->last_scheduled);
        if (fence && !dma_fence_is_signaled(fence))
                return;
 
+       spin_lock(&entity->rq_lock);
        rq = drm_sched_entity_get_free_sched(entity);
-       if (rq == entity->rq)
-               return;
+       if (rq != entity->rq) {
+               drm_sched_rq_remove_entity(entity->rq, entity);
+               entity->rq = rq;
+       }
 
-       spin_lock(&entity->rq_lock);
-       drm_sched_rq_remove_entity(entity->rq, entity);
-       entity->rq = rq;
        spin_unlock(&entity->rq_lock);
 }
 
index af2b2de..bd990d1 100644 (file)
@@ -18,15 +18,19 @@ int igt_dp_mst_calc_pbn_mode(void *ignored)
                int rate;
                int bpp;
                int expected;
+               bool dsc;
        } test_params[] = {
-               { 154000, 30, 689 },
-               { 234000, 30, 1047 },
-               { 297000, 24, 1063 },
+               { 154000, 30, 689, false },
+               { 234000, 30, 1047, false },
+               { 297000, 24, 1063, false },
+               { 332880, 24, 50, true },
+               { 324540, 24, 49, true },
        };
 
        for (i = 0; i < ARRAY_SIZE(test_params); i++) {
                pbn = drm_dp_calc_pbn_mode(test_params[i].rate,
-                                          test_params[i].bpp);
+                                          test_params[i].bpp,
+                                          test_params[i].dsc);
                FAIL(pbn != test_params[i].expected,
                     "Expected PBN %d for clock %d bpp %d, got %d\n",
                     test_params[i].expected, test_params[i].rate,
index 4e29f4f..072ea11 100644 (file)
@@ -856,6 +856,13 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
                ret = PTR_ERR(backend->mod_clk);
                goto err_disable_bus_clk;
        }
+
+       ret = clk_set_rate_exclusive(backend->mod_clk, 300000000);
+       if (ret) {
+               dev_err(dev, "Couldn't set the module clock frequency\n");
+               goto err_disable_bus_clk;
+       }
+
        clk_prepare_enable(backend->mod_clk);
 
        backend->ram_clk = devm_clk_get(dev, "ram");
@@ -932,6 +939,7 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
 err_disable_ram_clk:
        clk_disable_unprepare(backend->ram_clk);
 err_disable_mod_clk:
+       clk_rate_exclusive_put(backend->mod_clk);
        clk_disable_unprepare(backend->mod_clk);
 err_disable_bus_clk:
        clk_disable_unprepare(backend->bus_clk);
@@ -952,6 +960,7 @@ static void sun4i_backend_unbind(struct device *dev, struct device *master,
                sun4i_backend_free_sat(dev);
 
        clk_disable_unprepare(backend->ram_clk);
+       clk_rate_exclusive_put(backend->mod_clk);
        clk_disable_unprepare(backend->mod_clk);
        clk_disable_unprepare(backend->bus_clk);
        reset_control_assert(backend->reset);
index f7ab722..4fbe9a6 100644 (file)
@@ -56,6 +56,13 @@ static int sun6i_drc_bind(struct device *dev, struct device *master,
                ret = PTR_ERR(drc->mod_clk);
                goto err_disable_bus_clk;
        }
+
+       ret = clk_set_rate_exclusive(drc->mod_clk, 300000000);
+       if (ret) {
+               dev_err(dev, "Couldn't set the module clock frequency\n");
+               goto err_disable_bus_clk;
+       }
+
        clk_prepare_enable(drc->mod_clk);
 
        return 0;
@@ -72,6 +79,7 @@ static void sun6i_drc_unbind(struct device *dev, struct device *master,
 {
        struct sun6i_drc *drc = dev_get_drvdata(dev);
 
+       clk_rate_exclusive_put(drc->mod_clk);
        clk_disable_unprepare(drc->mod_clk);
        clk_disable_unprepare(drc->bus_clk);
        reset_control_assert(drc->reset);
index 714af05..7c70fd3 100644 (file)
@@ -1727,6 +1727,7 @@ static void tegra_crtc_atomic_disable(struct drm_crtc *crtc,
 {
        struct tegra_dc *dc = to_tegra_dc(crtc);
        u32 value;
+       int err;
 
        if (!tegra_dc_idle(dc)) {
                tegra_dc_stop(dc);
@@ -1773,7 +1774,9 @@ static void tegra_crtc_atomic_disable(struct drm_crtc *crtc,
 
        spin_unlock_irq(&crtc->dev->event_lock);
 
-       pm_runtime_put_sync(dc->dev);
+       err = host1x_client_suspend(&dc->client);
+       if (err < 0)
+               dev_err(dc->dev, "failed to suspend: %d\n", err);
 }
 
 static void tegra_crtc_atomic_enable(struct drm_crtc *crtc,
@@ -1783,8 +1786,13 @@ static void tegra_crtc_atomic_enable(struct drm_crtc *crtc,
        struct tegra_dc_state *state = to_dc_state(crtc->state);
        struct tegra_dc *dc = to_tegra_dc(crtc);
        u32 value;
+       int err;
 
-       pm_runtime_get_sync(dc->dev);
+       err = host1x_client_resume(&dc->client);
+       if (err < 0) {
+               dev_err(dc->dev, "failed to resume: %d\n", err);
+               return;
+       }
 
        /* initialize display controller */
        if (dc->syncpt) {
@@ -1996,7 +2004,7 @@ static bool tegra_dc_has_window_groups(struct tegra_dc *dc)
 
 static int tegra_dc_init(struct host1x_client *client)
 {
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        unsigned long flags = HOST1X_SYNCPT_CLIENT_MANAGED;
        struct tegra_dc *dc = host1x_client_to_dc(client);
        struct tegra_drm *tegra = drm->dev_private;
@@ -2012,6 +2020,15 @@ static int tegra_dc_init(struct host1x_client *client)
        if (!tegra_dc_has_window_groups(dc))
                return 0;
 
+       /*
+        * Set the display hub as the host1x client parent for the display
+        * controller. This is needed for the runtime reference counting that
+        * ensures the display hub is always powered when any of the display
+        * controllers are.
+        */
+       if (dc->soc->has_nvdisplay)
+               client->parent = &tegra->hub->client;
+
        dc->syncpt = host1x_syncpt_request(client, flags);
        if (!dc->syncpt)
                dev_warn(dc->dev, "failed to allocate syncpoint\n");
@@ -2077,9 +2094,9 @@ static int tegra_dc_init(struct host1x_client *client)
 
        /*
         * Inherit the DMA parameters (such as maximum segment size) from the
-        * parent device.
+        * parent host1x device.
         */
-       client->dev->dma_parms = client->parent->dma_parms;
+       client->dev->dma_parms = client->host->dma_parms;
 
        return 0;
 
@@ -2121,9 +2138,74 @@ static int tegra_dc_exit(struct host1x_client *client)
        return 0;
 }
 
+static int tegra_dc_runtime_suspend(struct host1x_client *client)
+{
+       struct tegra_dc *dc = host1x_client_to_dc(client);
+       struct device *dev = client->dev;
+       int err;
+
+       err = reset_control_assert(dc->rst);
+       if (err < 0) {
+               dev_err(dev, "failed to assert reset: %d\n", err);
+               return err;
+       }
+
+       if (dc->soc->has_powergate)
+               tegra_powergate_power_off(dc->powergate);
+
+       clk_disable_unprepare(dc->clk);
+       pm_runtime_put_sync(dev);
+
+       return 0;
+}
+
+static int tegra_dc_runtime_resume(struct host1x_client *client)
+{
+       struct tegra_dc *dc = host1x_client_to_dc(client);
+       struct device *dev = client->dev;
+       int err;
+
+       err = pm_runtime_get_sync(dev);
+       if (err < 0) {
+               dev_err(dev, "failed to get runtime PM: %d\n", err);
+               return err;
+       }
+
+       if (dc->soc->has_powergate) {
+               err = tegra_powergate_sequence_power_up(dc->powergate, dc->clk,
+                                                       dc->rst);
+               if (err < 0) {
+                       dev_err(dev, "failed to power partition: %d\n", err);
+                       goto put_rpm;
+               }
+       } else {
+               err = clk_prepare_enable(dc->clk);
+               if (err < 0) {
+                       dev_err(dev, "failed to enable clock: %d\n", err);
+                       goto put_rpm;
+               }
+
+               err = reset_control_deassert(dc->rst);
+               if (err < 0) {
+                       dev_err(dev, "failed to deassert reset: %d\n", err);
+                       goto disable_clk;
+               }
+       }
+
+       return 0;
+
+disable_clk:
+       clk_disable_unprepare(dc->clk);
+put_rpm:
+       pm_runtime_put_sync(dev);
+       return err;
+}
+
 static const struct host1x_client_ops dc_client_ops = {
        .init = tegra_dc_init,
        .exit = tegra_dc_exit,
+       .suspend = tegra_dc_runtime_suspend,
+       .resume = tegra_dc_runtime_resume,
 };
 
 static const struct tegra_dc_soc_info tegra20_dc_soc_info = {
@@ -2535,65 +2617,10 @@ static int tegra_dc_remove(struct platform_device *pdev)
        return 0;
 }
 
-#ifdef CONFIG_PM
-static int tegra_dc_suspend(struct device *dev)
-{
-       struct tegra_dc *dc = dev_get_drvdata(dev);
-       int err;
-
-       err = reset_control_assert(dc->rst);
-       if (err < 0) {
-               dev_err(dev, "failed to assert reset: %d\n", err);
-               return err;
-       }
-
-       if (dc->soc->has_powergate)
-               tegra_powergate_power_off(dc->powergate);
-
-       clk_disable_unprepare(dc->clk);
-
-       return 0;
-}
-
-static int tegra_dc_resume(struct device *dev)
-{
-       struct tegra_dc *dc = dev_get_drvdata(dev);
-       int err;
-
-       if (dc->soc->has_powergate) {
-               err = tegra_powergate_sequence_power_up(dc->powergate, dc->clk,
-                                                       dc->rst);
-               if (err < 0) {
-                       dev_err(dev, "failed to power partition: %d\n", err);
-                       return err;
-               }
-       } else {
-               err = clk_prepare_enable(dc->clk);
-               if (err < 0) {
-                       dev_err(dev, "failed to enable clock: %d\n", err);
-                       return err;
-               }
-
-               err = reset_control_deassert(dc->rst);
-               if (err < 0) {
-                       dev_err(dev, "failed to deassert reset: %d\n", err);
-                       return err;
-               }
-       }
-
-       return 0;
-}
-#endif
-
-static const struct dev_pm_ops tegra_dc_pm_ops = {
-       SET_RUNTIME_PM_OPS(tegra_dc_suspend, tegra_dc_resume, NULL)
-};
-
 struct platform_driver tegra_dc_driver = {
        .driver = {
                .name = "tegra-dc",
                .of_match_table = tegra_dc_of_match,
-               .pm = &tegra_dc_pm_ops,
        },
        .probe = tegra_dc_probe,
        .remove = tegra_dc_remove,
index 622cdf1..7dfb50f 100644 (file)
@@ -588,7 +588,7 @@ static int tegra_dpaux_remove(struct platform_device *pdev)
        /* make sure pads are powered down when not in use */
        tegra_dpaux_pad_power_down(dpaux);
 
-       pm_runtime_put(&pdev->dev);
+       pm_runtime_put_sync(&pdev->dev);
        pm_runtime_disable(&pdev->dev);
 
        drm_dp_aux_unregister(&dpaux->aux);
index f455ce7..aa9e49f 100644 (file)
@@ -905,7 +905,7 @@ int tegra_drm_unregister_client(struct tegra_drm *tegra,
 int host1x_client_iommu_attach(struct host1x_client *client)
 {
        struct iommu_domain *domain = iommu_get_domain_for_dev(client->dev);
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = drm->dev_private;
        struct iommu_group *group = NULL;
        int err;
@@ -941,7 +941,7 @@ int host1x_client_iommu_attach(struct host1x_client *client)
 
 void host1x_client_iommu_detach(struct host1x_client *client)
 {
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = drm->dev_private;
        struct iommu_domain *domain;
 
index d941553..ed99b67 100644 (file)
@@ -144,6 +144,8 @@ int tegra_output_init(struct drm_device *drm, struct tegra_output *output);
 void tegra_output_exit(struct tegra_output *output);
 void tegra_output_find_possible_crtcs(struct tegra_output *output,
                                      struct drm_device *drm);
+int tegra_output_suspend(struct tegra_output *output);
+int tegra_output_resume(struct tegra_output *output);
 
 int tegra_output_connector_get_modes(struct drm_connector *connector);
 enum drm_connector_status
index a5d47e3..88b9d64 100644 (file)
@@ -840,7 +840,9 @@ static void tegra_dsi_unprepare(struct tegra_dsi *dsi)
                dev_err(dsi->dev, "failed to disable MIPI calibration: %d\n",
                        err);
 
-       pm_runtime_put(dsi->dev);
+       err = host1x_client_suspend(&dsi->client);
+       if (err < 0)
+               dev_err(dsi->dev, "failed to suspend: %d\n", err);
 }
 
 static void tegra_dsi_encoder_disable(struct drm_encoder *encoder)
@@ -882,11 +884,15 @@ static void tegra_dsi_encoder_disable(struct drm_encoder *encoder)
        tegra_dsi_unprepare(dsi);
 }
 
-static void tegra_dsi_prepare(struct tegra_dsi *dsi)
+static int tegra_dsi_prepare(struct tegra_dsi *dsi)
 {
        int err;
 
-       pm_runtime_get_sync(dsi->dev);
+       err = host1x_client_resume(&dsi->client);
+       if (err < 0) {
+               dev_err(dsi->dev, "failed to resume: %d\n", err);
+               return err;
+       }
 
        err = tegra_mipi_enable(dsi->mipi);
        if (err < 0)
@@ -899,6 +905,8 @@ static void tegra_dsi_prepare(struct tegra_dsi *dsi)
 
        if (dsi->slave)
                tegra_dsi_prepare(dsi->slave);
+
+       return 0;
 }
 
 static void tegra_dsi_encoder_enable(struct drm_encoder *encoder)
@@ -909,8 +917,13 @@ static void tegra_dsi_encoder_enable(struct drm_encoder *encoder)
        struct tegra_dsi *dsi = to_dsi(output);
        struct tegra_dsi_state *state;
        u32 value;
+       int err;
 
-       tegra_dsi_prepare(dsi);
+       err = tegra_dsi_prepare(dsi);
+       if (err < 0) {
+               dev_err(dsi->dev, "failed to prepare: %d\n", err);
+               return;
+       }
 
        state = tegra_dsi_get_state(dsi);
 
@@ -1030,7 +1043,7 @@ static const struct drm_encoder_helper_funcs tegra_dsi_encoder_helper_funcs = {
 
 static int tegra_dsi_init(struct host1x_client *client)
 {
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        struct tegra_dsi *dsi = host1x_client_to_dsi(client);
        int err;
 
@@ -1075,9 +1088,89 @@ static int tegra_dsi_exit(struct host1x_client *client)
        return 0;
 }
 
+static int tegra_dsi_runtime_suspend(struct host1x_client *client)
+{
+       struct tegra_dsi *dsi = host1x_client_to_dsi(client);
+       struct device *dev = client->dev;
+       int err;
+
+       if (dsi->rst) {
+               err = reset_control_assert(dsi->rst);
+               if (err < 0) {
+                       dev_err(dev, "failed to assert reset: %d\n", err);
+                       return err;
+               }
+       }
+
+       usleep_range(1000, 2000);
+
+       clk_disable_unprepare(dsi->clk_lp);
+       clk_disable_unprepare(dsi->clk);
+
+       regulator_disable(dsi->vdd);
+       pm_runtime_put_sync(dev);
+
+       return 0;
+}
+
+static int tegra_dsi_runtime_resume(struct host1x_client *client)
+{
+       struct tegra_dsi *dsi = host1x_client_to_dsi(client);
+       struct device *dev = client->dev;
+       int err;
+
+       err = pm_runtime_get_sync(dev);
+       if (err < 0) {
+               dev_err(dev, "failed to get runtime PM: %d\n", err);
+               return err;
+       }
+
+       err = regulator_enable(dsi->vdd);
+       if (err < 0) {
+               dev_err(dev, "failed to enable VDD supply: %d\n", err);
+               goto put_rpm;
+       }
+
+       err = clk_prepare_enable(dsi->clk);
+       if (err < 0) {
+               dev_err(dev, "cannot enable DSI clock: %d\n", err);
+               goto disable_vdd;
+       }
+
+       err = clk_prepare_enable(dsi->clk_lp);
+       if (err < 0) {
+               dev_err(dev, "cannot enable low-power clock: %d\n", err);
+               goto disable_clk;
+       }
+
+       usleep_range(1000, 2000);
+
+       if (dsi->rst) {
+               err = reset_control_deassert(dsi->rst);
+               if (err < 0) {
+                       dev_err(dev, "cannot deassert reset: %d\n", err);
+                       goto disable_clk_lp;
+               }
+       }
+
+       return 0;
+
+disable_clk_lp:
+       clk_disable_unprepare(dsi->clk_lp);
+disable_clk:
+       clk_disable_unprepare(dsi->clk);
+disable_vdd:
+       regulator_disable(dsi->vdd);
+put_rpm:
+       pm_runtime_put_sync(dev);
+       return err;
+}
+
 static const struct host1x_client_ops dsi_client_ops = {
        .init = tegra_dsi_init,
        .exit = tegra_dsi_exit,
+       .suspend = tegra_dsi_runtime_suspend,
+       .resume = tegra_dsi_runtime_resume,
 };
 
 static int tegra_dsi_setup_clocks(struct tegra_dsi *dsi)
@@ -1596,79 +1689,6 @@ static int tegra_dsi_remove(struct platform_device *pdev)
        return 0;
 }
 
-#ifdef CONFIG_PM
-static int tegra_dsi_suspend(struct device *dev)
-{
-       struct tegra_dsi *dsi = dev_get_drvdata(dev);
-       int err;
-
-       if (dsi->rst) {
-               err = reset_control_assert(dsi->rst);
-               if (err < 0) {
-                       dev_err(dev, "failed to assert reset: %d\n", err);
-                       return err;
-               }
-       }
-
-       usleep_range(1000, 2000);
-
-       clk_disable_unprepare(dsi->clk_lp);
-       clk_disable_unprepare(dsi->clk);
-
-       regulator_disable(dsi->vdd);
-
-       return 0;
-}
-
-static int tegra_dsi_resume(struct device *dev)
-{
-       struct tegra_dsi *dsi = dev_get_drvdata(dev);
-       int err;
-
-       err = regulator_enable(dsi->vdd);
-       if (err < 0) {
-               dev_err(dsi->dev, "failed to enable VDD supply: %d\n", err);
-               return err;
-       }
-
-       err = clk_prepare_enable(dsi->clk);
-       if (err < 0) {
-               dev_err(dev, "cannot enable DSI clock: %d\n", err);
-               goto disable_vdd;
-       }
-
-       err = clk_prepare_enable(dsi->clk_lp);
-       if (err < 0) {
-               dev_err(dev, "cannot enable low-power clock: %d\n", err);
-               goto disable_clk;
-       }
-
-       usleep_range(1000, 2000);
-
-       if (dsi->rst) {
-               err = reset_control_deassert(dsi->rst);
-               if (err < 0) {
-                       dev_err(dev, "cannot assert reset: %d\n", err);
-                       goto disable_clk_lp;
-               }
-       }
-
-       return 0;
-
-disable_clk_lp:
-       clk_disable_unprepare(dsi->clk_lp);
-disable_clk:
-       clk_disable_unprepare(dsi->clk);
-disable_vdd:
-       regulator_disable(dsi->vdd);
-       return err;
-}
-#endif
-
-static const struct dev_pm_ops tegra_dsi_pm_ops = {
-       SET_RUNTIME_PM_OPS(tegra_dsi_suspend, tegra_dsi_resume, NULL)
-};
-
 static const struct of_device_id tegra_dsi_of_match[] = {
        { .compatible = "nvidia,tegra210-dsi", },
        { .compatible = "nvidia,tegra132-dsi", },
@@ -1682,7 +1702,6 @@ struct platform_driver tegra_dsi_driver = {
        .driver = {
                .name = "tegra-dsi",
                .of_match_table = tegra_dsi_of_match,
-               .pm = &tegra_dsi_pm_ops,
        },
        .probe = tegra_dsi_probe,
        .remove = tegra_dsi_remove,
index 1fc4e56..48363f7 100644 (file)
@@ -34,7 +34,7 @@ static inline struct gr2d *to_gr2d(struct tegra_drm_client *client)
 static int gr2d_init(struct host1x_client *client)
 {
        struct tegra_drm_client *drm = host1x_to_drm_client(client);
-       struct drm_device *dev = dev_get_drvdata(client->parent);
+       struct drm_device *dev = dev_get_drvdata(client->host);
        unsigned long flags = HOST1X_SYNCPT_HAS_BASE;
        struct gr2d *gr2d = to_gr2d(drm);
        int err;
@@ -76,7 +76,7 @@ put:
 static int gr2d_exit(struct host1x_client *client)
 {
        struct tegra_drm_client *drm = host1x_to_drm_client(client);
-       struct drm_device *dev = dev_get_drvdata(client->parent);
+       struct drm_device *dev = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = dev->dev_private;
        struct gr2d *gr2d = to_gr2d(drm);
        int err;
index 24fae0f..c0a528b 100644 (file)
@@ -43,7 +43,7 @@ static inline struct gr3d *to_gr3d(struct tegra_drm_client *client)
 static int gr3d_init(struct host1x_client *client)
 {
        struct tegra_drm_client *drm = host1x_to_drm_client(client);
-       struct drm_device *dev = dev_get_drvdata(client->parent);
+       struct drm_device *dev = dev_get_drvdata(client->host);
        unsigned long flags = HOST1X_SYNCPT_HAS_BASE;
        struct gr3d *gr3d = to_gr3d(drm);
        int err;
@@ -85,7 +85,7 @@ put:
 static int gr3d_exit(struct host1x_client *client)
 {
        struct tegra_drm_client *drm = host1x_to_drm_client(client);
-       struct drm_device *dev = dev_get_drvdata(client->parent);
+       struct drm_device *dev = dev_get_drvdata(client->host);
        struct gr3d *gr3d = to_gr3d(drm);
        int err;
 
index 50269ff..6f11762 100644 (file)
@@ -1146,6 +1146,7 @@ static void tegra_hdmi_encoder_disable(struct drm_encoder *encoder)
        struct tegra_dc *dc = to_tegra_dc(encoder->crtc);
        struct tegra_hdmi *hdmi = to_hdmi(output);
        u32 value;
+       int err;
 
        /*
         * The following accesses registers of the display controller, so make
@@ -1171,7 +1172,9 @@ static void tegra_hdmi_encoder_disable(struct drm_encoder *encoder)
        tegra_hdmi_writel(hdmi, 0, HDMI_NV_PDISP_INT_ENABLE);
        tegra_hdmi_writel(hdmi, 0, HDMI_NV_PDISP_INT_MASK);
 
-       pm_runtime_put(hdmi->dev);
+       err = host1x_client_suspend(&hdmi->client);
+       if (err < 0)
+               dev_err(hdmi->dev, "failed to suspend: %d\n", err);
 }
 
 static void tegra_hdmi_encoder_enable(struct drm_encoder *encoder)
@@ -1186,7 +1189,11 @@ static void tegra_hdmi_encoder_enable(struct drm_encoder *encoder)
        u32 value;
        int err;
 
-       pm_runtime_get_sync(hdmi->dev);
+       err = host1x_client_resume(&hdmi->client);
+       if (err < 0) {
+               dev_err(hdmi->dev, "failed to resume: %d\n", err);
+               return;
+       }
 
        /*
         * Enable and unmask the HDA codec SCRATCH0 register interrupt. This
@@ -1424,15 +1431,16 @@ static const struct drm_encoder_helper_funcs tegra_hdmi_encoder_helper_funcs = {
 
 static int tegra_hdmi_init(struct host1x_client *client)
 {
-       struct drm_device *drm = dev_get_drvdata(client->parent);
        struct tegra_hdmi *hdmi = host1x_client_to_hdmi(client);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        int err;
 
        hdmi->output.dev = client->dev;
 
-       drm_connector_init(drm, &hdmi->output.connector,
-                          &tegra_hdmi_connector_funcs,
-                          DRM_MODE_CONNECTOR_HDMIA);
+       drm_connector_init_with_ddc(drm, &hdmi->output.connector,
+                                   &tegra_hdmi_connector_funcs,
+                                   DRM_MODE_CONNECTOR_HDMIA,
+                                   hdmi->output.ddc);
        drm_connector_helper_add(&hdmi->output.connector,
                                 &tegra_hdmi_connector_helper_funcs);
        hdmi->output.connector.dpms = DRM_MODE_DPMS_OFF;
@@ -1489,9 +1497,66 @@ static int tegra_hdmi_exit(struct host1x_client *client)
        return 0;
 }
 
+static int tegra_hdmi_runtime_suspend(struct host1x_client *client)
+{
+       struct tegra_hdmi *hdmi = host1x_client_to_hdmi(client);
+       struct device *dev = client->dev;
+       int err;
+
+       err = reset_control_assert(hdmi->rst);
+       if (err < 0) {
+               dev_err(dev, "failed to assert reset: %d\n", err);
+               return err;
+       }
+
+       usleep_range(1000, 2000);
+
+       clk_disable_unprepare(hdmi->clk);
+       pm_runtime_put_sync(dev);
+
+       return 0;
+}
+
+static int tegra_hdmi_runtime_resume(struct host1x_client *client)
+{
+       struct tegra_hdmi *hdmi = host1x_client_to_hdmi(client);
+       struct device *dev = client->dev;
+       int err;
+
+       err = pm_runtime_get_sync(dev);
+       if (err < 0) {
+               dev_err(dev, "failed to get runtime PM: %d\n", err);
+               return err;
+       }
+
+       err = clk_prepare_enable(hdmi->clk);
+       if (err < 0) {
+               dev_err(dev, "failed to enable clock: %d\n", err);
+               goto put_rpm;
+       }
+
+       usleep_range(1000, 2000);
+
+       err = reset_control_deassert(hdmi->rst);
+       if (err < 0) {
+               dev_err(dev, "failed to deassert reset: %d\n", err);
+               goto disable_clk;
+       }
+
+       return 0;
+
+disable_clk:
+       clk_disable_unprepare(hdmi->clk);
+put_rpm:
+       pm_runtime_put_sync(dev);
+       return err;
+}
+
 static const struct host1x_client_ops hdmi_client_ops = {
        .init = tegra_hdmi_init,
        .exit = tegra_hdmi_exit,
+       .suspend = tegra_hdmi_runtime_suspend,
+       .resume = tegra_hdmi_runtime_resume,
 };
 
 static const struct tegra_hdmi_config tegra20_hdmi_config = {
@@ -1699,58 +1764,10 @@ static int tegra_hdmi_remove(struct platform_device *pdev)
        return 0;
 }
 
-#ifdef CONFIG_PM
-static int tegra_hdmi_suspend(struct device *dev)
-{
-       struct tegra_hdmi *hdmi = dev_get_drvdata(dev);
-       int err;
-
-       err = reset_control_assert(hdmi->rst);
-       if (err < 0) {
-               dev_err(dev, "failed to assert reset: %d\n", err);
-               return err;
-       }
-
-       usleep_range(1000, 2000);
-
-       clk_disable_unprepare(hdmi->clk);
-
-       return 0;
-}
-
-static int tegra_hdmi_resume(struct device *dev)
-{
-       struct tegra_hdmi *hdmi = dev_get_drvdata(dev);
-       int err;
-
-       err = clk_prepare_enable(hdmi->clk);
-       if (err < 0) {
-               dev_err(dev, "failed to enable clock: %d\n", err);
-               return err;
-       }
-
-       usleep_range(1000, 2000);
-
-       err = reset_control_deassert(hdmi->rst);
-       if (err < 0) {
-               dev_err(dev, "failed to deassert reset: %d\n", err);
-               clk_disable_unprepare(hdmi->clk);
-               return err;
-       }
-
-       return 0;
-}
-#endif
-
-static const struct dev_pm_ops tegra_hdmi_pm_ops = {
-       SET_RUNTIME_PM_OPS(tegra_hdmi_suspend, tegra_hdmi_resume, NULL)
-};
-
 struct platform_driver tegra_hdmi_driver = {
        .driver = {
                .name = "tegra-hdmi",
                .of_match_table = tegra_hdmi_of_match,
-               .pm = &tegra_hdmi_pm_ops,
        },
        .probe = tegra_hdmi_probe,
        .remove = tegra_hdmi_remove,
index 47d985a..8183e61 100644 (file)
@@ -95,17 +95,25 @@ static inline void tegra_plane_writel(struct tegra_plane *plane, u32 value,
 
 static int tegra_windowgroup_enable(struct tegra_windowgroup *wgrp)
 {
+       int err = 0;
+
        mutex_lock(&wgrp->lock);
 
        if (wgrp->usecount == 0) {
-               pm_runtime_get_sync(wgrp->parent);
+               err = host1x_client_resume(wgrp->parent);
+               if (err < 0) {
+                       dev_err(wgrp->parent->dev, "failed to resume: %d\n", err);
+                       goto unlock;
+               }
+
                reset_control_deassert(wgrp->rst);
        }
 
        wgrp->usecount++;
-       mutex_unlock(&wgrp->lock);
 
-       return 0;
+unlock:
+       mutex_unlock(&wgrp->lock);
+       return err;
 }
 
 static void tegra_windowgroup_disable(struct tegra_windowgroup *wgrp)
@@ -121,7 +129,7 @@ static void tegra_windowgroup_disable(struct tegra_windowgroup *wgrp)
                               wgrp->index);
                }
 
-               pm_runtime_put(wgrp->parent);
+               host1x_client_suspend(wgrp->parent);
        }
 
        wgrp->usecount--;
@@ -379,6 +387,7 @@ static void tegra_shared_plane_atomic_disable(struct drm_plane *plane,
        struct tegra_plane *p = to_tegra_plane(plane);
        struct tegra_dc *dc;
        u32 value;
+       int err;
 
        /* rien ne va plus */
        if (!old_state || !old_state->crtc)
@@ -386,6 +395,12 @@ static void tegra_shared_plane_atomic_disable(struct drm_plane *plane,
 
        dc = to_tegra_dc(old_state->crtc);
 
+       err = host1x_client_resume(&dc->client);
+       if (err < 0) {
+               dev_err(dc->dev, "failed to resume: %d\n", err);
+               return;
+       }
+
        /*
         * XXX Legacy helpers seem to sometimes call ->atomic_disable() even
         * on planes that are already disabled. Make sure we fallback to the
@@ -394,15 +409,13 @@ static void tegra_shared_plane_atomic_disable(struct drm_plane *plane,
        if (WARN_ON(p->dc == NULL))
                p->dc = dc;
 
-       pm_runtime_get_sync(dc->dev);
-
        value = tegra_plane_readl(p, DC_WIN_WIN_OPTIONS);
        value &= ~WIN_ENABLE;
        tegra_plane_writel(p, value, DC_WIN_WIN_OPTIONS);
 
        tegra_dc_remove_shared_plane(dc, p);
 
-       pm_runtime_put(dc->dev);
+       host1x_client_suspend(&dc->client);
 }
 
 static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
@@ -415,6 +428,7 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
        struct tegra_plane *p = to_tegra_plane(plane);
        dma_addr_t base;
        u32 value;
+       int err;
 
        /* rien ne va plus */
        if (!plane->state->crtc || !plane->state->fb)
@@ -425,7 +439,11 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
                return;
        }
 
-       pm_runtime_get_sync(dc->dev);
+       err = host1x_client_resume(&dc->client);
+       if (err < 0) {
+               dev_err(dc->dev, "failed to resume: %d\n", err);
+               return;
+       }
 
        tegra_dc_assign_shared_plane(dc, p);
 
@@ -515,7 +533,7 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
        value &= ~CONTROL_CSC_ENABLE;
        tegra_plane_writel(p, value, DC_WIN_WINDOW_SET_CONTROL);
 
-       pm_runtime_put(dc->dev);
+       host1x_client_suspend(&dc->client);
 }
 
 static const struct drm_plane_helper_funcs tegra_shared_plane_helper_funcs = {
@@ -551,7 +569,7 @@ struct drm_plane *tegra_shared_plane_create(struct drm_device *drm,
        plane->base.index = index;
 
        plane->wgrp = &hub->wgrps[wgrp];
-       plane->wgrp->parent = dc->dev;
+       plane->wgrp->parent = &dc->client;
 
        p = &plane->base.base;
 
@@ -656,8 +674,13 @@ int tegra_display_hub_atomic_check(struct drm_device *drm,
 static void tegra_display_hub_update(struct tegra_dc *dc)
 {
        u32 value;
+       int err;
 
-       pm_runtime_get_sync(dc->dev);
+       err = host1x_client_resume(&dc->client);
+       if (err < 0) {
+               dev_err(dc->dev, "failed to resume: %d\n", err);
+               return;
+       }
 
        value = tegra_dc_readl(dc, DC_CMD_IHUB_COMMON_MISC_CTL);
        value &= ~LATENCY_EVENT;
@@ -672,7 +695,7 @@ static void tegra_display_hub_update(struct tegra_dc *dc)
        tegra_dc_writel(dc, COMMON_ACTREQ, DC_CMD_STATE_CONTROL);
        tegra_dc_readl(dc, DC_CMD_STATE_CONTROL);
 
-       pm_runtime_put(dc->dev);
+       host1x_client_suspend(&dc->client);
 }
 
 void tegra_display_hub_atomic_commit(struct drm_device *drm,
@@ -705,7 +728,7 @@ void tegra_display_hub_atomic_commit(struct drm_device *drm,
 static int tegra_display_hub_init(struct host1x_client *client)
 {
        struct tegra_display_hub *hub = to_tegra_display_hub(client);
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = drm->dev_private;
        struct tegra_display_hub_state *state;
 
@@ -723,7 +746,7 @@ static int tegra_display_hub_init(struct host1x_client *client)
 
 static int tegra_display_hub_exit(struct host1x_client *client)
 {
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = drm->dev_private;
 
        drm_atomic_private_obj_fini(&tegra->hub->base);
@@ -732,9 +755,85 @@ static int tegra_display_hub_exit(struct host1x_client *client)
        return 0;
 }
 
+static int tegra_display_hub_runtime_suspend(struct host1x_client *client)
+{
+       struct tegra_display_hub *hub = to_tegra_display_hub(client);
+       struct device *dev = client->dev;
+       unsigned int i = hub->num_heads;
+       int err;
+
+       err = reset_control_assert(hub->rst);
+       if (err < 0)
+               return err;
+
+       while (i--)
+               clk_disable_unprepare(hub->clk_heads[i]);
+
+       clk_disable_unprepare(hub->clk_hub);
+       clk_disable_unprepare(hub->clk_dsc);
+       clk_disable_unprepare(hub->clk_disp);
+
+       pm_runtime_put_sync(dev);
+
+       return 0;
+}
+
+static int tegra_display_hub_runtime_resume(struct host1x_client *client)
+{
+       struct tegra_display_hub *hub = to_tegra_display_hub(client);
+       struct device *dev = client->dev;
+       unsigned int i;
+       int err;
+
+       err = pm_runtime_get_sync(dev);
+       if (err < 0) {
+               dev_err(dev, "failed to get runtime PM: %d\n", err);
+               return err;
+       }
+
+       err = clk_prepare_enable(hub->clk_disp);
+       if (err < 0)
+               goto put_rpm;
+
+       err = clk_prepare_enable(hub->clk_dsc);
+       if (err < 0)
+               goto disable_disp;
+
+       err = clk_prepare_enable(hub->clk_hub);
+       if (err < 0)
+               goto disable_dsc;
+
+       for (i = 0; i < hub->num_heads; i++) {
+               err = clk_prepare_enable(hub->clk_heads[i]);
+               if (err < 0)
+                       goto disable_heads;
+       }
+
+       err = reset_control_deassert(hub->rst);
+       if (err < 0)
+               goto disable_heads;
+
+       return 0;
+
+disable_heads:
+       while (i--)
+               clk_disable_unprepare(hub->clk_heads[i]);
+
+       clk_disable_unprepare(hub->clk_hub);
+disable_dsc:
+       clk_disable_unprepare(hub->clk_dsc);
+disable_disp:
+       clk_disable_unprepare(hub->clk_disp);
+put_rpm:
+       pm_runtime_put_sync(dev);
+       return err;
+}
+
 static const struct host1x_client_ops tegra_display_hub_ops = {
        .init = tegra_display_hub_init,
        .exit = tegra_display_hub_exit,
+       .suspend = tegra_display_hub_runtime_suspend,
+       .resume = tegra_display_hub_runtime_resume,
 };
 
 static int tegra_display_hub_probe(struct platform_device *pdev)
@@ -851,6 +950,7 @@ static int tegra_display_hub_probe(struct platform_device *pdev)
 static int tegra_display_hub_remove(struct platform_device *pdev)
 {
        struct tegra_display_hub *hub = platform_get_drvdata(pdev);
+       unsigned int i;
        int err;
 
        err = host1x_client_unregister(&hub->client);
@@ -859,78 +959,17 @@ static int tegra_display_hub_remove(struct platform_device *pdev)
                        err);
        }
 
-       pm_runtime_disable(&pdev->dev);
-
-       return err;
-}
-
-static int __maybe_unused tegra_display_hub_suspend(struct device *dev)
-{
-       struct tegra_display_hub *hub = dev_get_drvdata(dev);
-       unsigned int i = hub->num_heads;
-       int err;
-
-       err = reset_control_assert(hub->rst);
-       if (err < 0)
-               return err;
-
-       while (i--)
-               clk_disable_unprepare(hub->clk_heads[i]);
-
-       clk_disable_unprepare(hub->clk_hub);
-       clk_disable_unprepare(hub->clk_dsc);
-       clk_disable_unprepare(hub->clk_disp);
-
-       return 0;
-}
-
-static int __maybe_unused tegra_display_hub_resume(struct device *dev)
-{
-       struct tegra_display_hub *hub = dev_get_drvdata(dev);
-       unsigned int i;
-       int err;
-
-       err = clk_prepare_enable(hub->clk_disp);
-       if (err < 0)
-               return err;
-
-       err = clk_prepare_enable(hub->clk_dsc);
-       if (err < 0)
-               goto disable_disp;
-
-       err = clk_prepare_enable(hub->clk_hub);
-       if (err < 0)
-               goto disable_dsc;
+       for (i = 0; i < hub->soc->num_wgrps; i++) {
+               struct tegra_windowgroup *wgrp = &hub->wgrps[i];
 
-       for (i = 0; i < hub->num_heads; i++) {
-               err = clk_prepare_enable(hub->clk_heads[i]);
-               if (err < 0)
-                       goto disable_heads;
+               mutex_destroy(&wgrp->lock);
        }
 
-       err = reset_control_deassert(hub->rst);
-       if (err < 0)
-               goto disable_heads;
-
-       return 0;
-
-disable_heads:
-       while (i--)
-               clk_disable_unprepare(hub->clk_heads[i]);
+       pm_runtime_disable(&pdev->dev);
 
-       clk_disable_unprepare(hub->clk_hub);
-disable_dsc:
-       clk_disable_unprepare(hub->clk_dsc);
-disable_disp:
-       clk_disable_unprepare(hub->clk_disp);
        return err;
 }
 
-static const struct dev_pm_ops tegra_display_hub_pm_ops = {
-       SET_RUNTIME_PM_OPS(tegra_display_hub_suspend,
-                          tegra_display_hub_resume, NULL)
-};
-
 static const struct tegra_display_hub_soc tegra186_display_hub = {
        .num_wgrps = 6,
        .supports_dsc = true,
@@ -958,7 +997,6 @@ struct platform_driver tegra_display_hub_driver = {
        .driver = {
                .name = "tegra-display-hub",
                .of_match_table = tegra_display_hub_of_match,
-               .pm = &tegra_display_hub_pm_ops,
        },
        .probe = tegra_display_hub_probe,
        .remove = tegra_display_hub_remove,
index 767a60d..3efa1be 100644
@@ -17,7 +17,7 @@ struct tegra_windowgroup {
        struct mutex lock;
 
        unsigned int index;
-       struct device *parent;
+       struct host1x_client *parent;
        struct reset_control *rst;
 };
 
index 80ddde4..a264259 100644
@@ -250,3 +250,19 @@ void tegra_output_find_possible_crtcs(struct tegra_output *output,
 
        output->encoder.possible_crtcs = mask;
 }
+
+int tegra_output_suspend(struct tegra_output *output)
+{
+       if (output->hpd_irq)
+               disable_irq(output->hpd_irq);
+
+       return 0;
+}
+
+int tegra_output_resume(struct tegra_output *output)
+{
+       if (output->hpd_irq)
+               enable_irq(output->hpd_irq);
+
+       return 0;
+}
index a68d3b3..41d2494 100644
@@ -2255,7 +2255,7 @@ static void tegra_sor_hdmi_disable(struct drm_encoder *encoder)
        if (err < 0)
                dev_err(sor->dev, "failed to power off I/O pad: %d\n", err);
 
-       pm_runtime_put(sor->dev);
+       host1x_client_suspend(&sor->client);
 }
 
 static void tegra_sor_hdmi_enable(struct drm_encoder *encoder)
@@ -2276,7 +2276,11 @@ static void tegra_sor_hdmi_enable(struct drm_encoder *encoder)
        mode = &encoder->crtc->state->adjusted_mode;
        pclk = mode->clock * 1000;
 
-       pm_runtime_get_sync(sor->dev);
+       err = host1x_client_resume(&sor->client);
+       if (err < 0) {
+               dev_err(sor->dev, "failed to resume: %d\n", err);
+               return;
+       }
 
        /* switch to safe parent clock */
        err = tegra_sor_set_parent_clock(sor, sor->clk_safe);
@@ -2722,7 +2726,7 @@ static void tegra_sor_dp_disable(struct drm_encoder *encoder)
        if (output->panel)
                drm_panel_unprepare(output->panel);
 
-       pm_runtime_put(sor->dev);
+       host1x_client_suspend(&sor->client);
 }
 
 static void tegra_sor_dp_enable(struct drm_encoder *encoder)
@@ -2742,7 +2746,11 @@ static void tegra_sor_dp_enable(struct drm_encoder *encoder)
        mode = &encoder->crtc->state->adjusted_mode;
        info = &output->connector.display_info;
 
-       pm_runtime_get_sync(sor->dev);
+       err = host1x_client_resume(&sor->client);
+       if (err < 0) {
+               dev_err(sor->dev, "failed to resume: %d\n", err);
+               return;
+       }
 
        /* switch to safe parent clock */
        err = tegra_sor_set_parent_clock(sor, sor->clk_safe);
@@ -3053,7 +3061,7 @@ static const struct tegra_sor_ops tegra_sor_dp_ops = {
 
 static int tegra_sor_init(struct host1x_client *client)
 {
-       struct drm_device *drm = dev_get_drvdata(client->parent);
+       struct drm_device *drm = dev_get_drvdata(client->host);
        const struct drm_encoder_helper_funcs *helpers = NULL;
        struct tegra_sor *sor = host1x_client_to_sor(client);
        int connector = DRM_MODE_CONNECTOR_Unknown;
@@ -3086,9 +3094,10 @@ static int tegra_sor_init(struct host1x_client *client)
 
        sor->output.dev = sor->dev;
 
-       drm_connector_init(drm, &sor->output.connector,
-                          &tegra_sor_connector_funcs,
-                          connector);
+       drm_connector_init_with_ddc(drm, &sor->output.connector,
+                                   &tegra_sor_connector_funcs,
+                                   connector,
+                                   sor->output.ddc);
        drm_connector_helper_add(&sor->output.connector,
                                 &tegra_sor_connector_helper_funcs);
        sor->output.connector.dpms = DRM_MODE_DPMS_OFF;
@@ -3189,9 +3198,80 @@ static int tegra_sor_exit(struct host1x_client *client)
        return 0;
 }
 
+static int tegra_sor_runtime_suspend(struct host1x_client *client)
+{
+       struct tegra_sor *sor = host1x_client_to_sor(client);
+       struct device *dev = client->dev;
+       int err;
+
+       if (sor->rst) {
+               err = reset_control_assert(sor->rst);
+               if (err < 0) {
+                       dev_err(dev, "failed to assert reset: %d\n", err);
+                       return err;
+               }
+
+               reset_control_release(sor->rst);
+       }
+
+       usleep_range(1000, 2000);
+
+       clk_disable_unprepare(sor->clk);
+       pm_runtime_put_sync(dev);
+
+       return 0;
+}
+
+static int tegra_sor_runtime_resume(struct host1x_client *client)
+{
+       struct tegra_sor *sor = host1x_client_to_sor(client);
+       struct device *dev = client->dev;
+       int err;
+
+       err = pm_runtime_get_sync(dev);
+       if (err < 0) {
+               dev_err(dev, "failed to get runtime PM: %d\n", err);
+               return err;
+       }
+
+       err = clk_prepare_enable(sor->clk);
+       if (err < 0) {
+               dev_err(dev, "failed to enable clock: %d\n", err);
+               goto put_rpm;
+       }
+
+       usleep_range(1000, 2000);
+
+       if (sor->rst) {
+               err = reset_control_acquire(sor->rst);
+               if (err < 0) {
+                       dev_err(dev, "failed to acquire reset: %d\n", err);
+                       goto disable_clk;
+               }
+
+               err = reset_control_deassert(sor->rst);
+               if (err < 0) {
+                       dev_err(dev, "failed to deassert reset: %d\n", err);
+                       goto release_reset;
+               }
+       }
+
+       return 0;
+
+release_reset:
+       reset_control_release(sor->rst);
+disable_clk:
+       clk_disable_unprepare(sor->clk);
+put_rpm:
+       pm_runtime_put_sync(dev);
+       return err;
+}
+
 static const struct host1x_client_ops sor_client_ops = {
        .init = tegra_sor_init,
        .exit = tegra_sor_exit,
+       .suspend = tegra_sor_runtime_suspend,
+       .resume = tegra_sor_runtime_resume,
 };
 
 static const u8 tegra124_sor_xbar_cfg[5] = {
@@ -3842,10 +3922,9 @@ static int tegra_sor_probe(struct platform_device *pdev)
        if (!sor->clk_pad) {
                char *name;
 
-               err = pm_runtime_get_sync(&pdev->dev);
+               err = host1x_client_resume(&sor->client);
                if (err < 0) {
-                       dev_err(&pdev->dev, "failed to get runtime PM: %d\n",
-                               err);
+                       dev_err(sor->dev, "failed to resume: %d\n", err);
                        goto remove;
                }
 
@@ -3856,7 +3935,7 @@ static int tegra_sor_probe(struct platform_device *pdev)
                }
 
                sor->clk_pad = tegra_clk_sor_pad_register(sor, name);
-               pm_runtime_put(&pdev->dev);
+               host1x_client_suspend(&sor->client);
        }
 
        if (IS_ERR(sor->clk_pad)) {
@@ -3912,54 +3991,21 @@ static int tegra_sor_remove(struct platform_device *pdev)
        return 0;
 }
 
-static int tegra_sor_runtime_suspend(struct device *dev)
-{
-       struct tegra_sor *sor = dev_get_drvdata(dev);
-       int err;
-
-       if (sor->rst) {
-               err = reset_control_assert(sor->rst);
-               if (err < 0) {
-                       dev_err(dev, "failed to assert reset: %d\n", err);
-                       return err;
-               }
-
-               reset_control_release(sor->rst);
-       }
-
-       usleep_range(1000, 2000);
-
-       clk_disable_unprepare(sor->clk);
-
-       return 0;
-}
-
-static int tegra_sor_runtime_resume(struct device *dev)
+static int __maybe_unused tegra_sor_suspend(struct device *dev)
 {
        struct tegra_sor *sor = dev_get_drvdata(dev);
        int err;
 
-       err = clk_prepare_enable(sor->clk);
+       err = tegra_output_suspend(&sor->output);
        if (err < 0) {
-               dev_err(dev, "failed to enable clock: %d\n", err);
+               dev_err(dev, "failed to suspend output: %d\n", err);
                return err;
        }
 
-       usleep_range(1000, 2000);
-
-       if (sor->rst) {
-               err = reset_control_acquire(sor->rst);
-               if (err < 0) {
-                       dev_err(dev, "failed to acquire reset: %d\n", err);
-                       clk_disable_unprepare(sor->clk);
-                       return err;
-               }
-
-               err = reset_control_deassert(sor->rst);
+       if (sor->hdmi_supply) {
+               err = regulator_disable(sor->hdmi_supply);
                if (err < 0) {
-                       dev_err(dev, "failed to deassert reset: %d\n", err);
-                       reset_control_release(sor->rst);
-                       clk_disable_unprepare(sor->clk);
+                       tegra_output_resume(&sor->output);
                        return err;
                }
        }
@@ -3967,37 +4013,31 @@ static int tegra_sor_runtime_resume(struct device *dev)
        return 0;
 }
 
-static int tegra_sor_suspend(struct device *dev)
+static int __maybe_unused tegra_sor_resume(struct device *dev)
 {
        struct tegra_sor *sor = dev_get_drvdata(dev);
        int err;
 
        if (sor->hdmi_supply) {
-               err = regulator_disable(sor->hdmi_supply);
+               err = regulator_enable(sor->hdmi_supply);
                if (err < 0)
                        return err;
        }
 
-       return 0;
-}
+       err = tegra_output_resume(&sor->output);
+       if (err < 0) {
+               dev_err(dev, "failed to resume output: %d\n", err);
 
-static int tegra_sor_resume(struct device *dev)
-{
-       struct tegra_sor *sor = dev_get_drvdata(dev);
-       int err;
+               if (sor->hdmi_supply)
+                       regulator_disable(sor->hdmi_supply);
 
-       if (sor->hdmi_supply) {
-               err = regulator_enable(sor->hdmi_supply);
-               if (err < 0)
-                       return err;
+               return err;
        }
 
        return 0;
 }
 
 static const struct dev_pm_ops tegra_sor_pm_ops = {
-       SET_RUNTIME_PM_OPS(tegra_sor_runtime_suspend, tegra_sor_runtime_resume,
-                          NULL)
        SET_SYSTEM_SLEEP_PM_OPS(tegra_sor_suspend, tegra_sor_resume)
 };
 
index 3526c28..ade56b8 100644
@@ -161,7 +161,7 @@ static int vic_boot(struct vic *vic)
 static int vic_init(struct host1x_client *client)
 {
        struct tegra_drm_client *drm = host1x_to_drm_client(client);
-       struct drm_device *dev = dev_get_drvdata(client->parent);
+       struct drm_device *dev = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = dev->dev_private;
        struct vic *vic = to_vic(drm);
        int err;
@@ -190,9 +190,9 @@ static int vic_init(struct host1x_client *client)
 
        /*
         * Inherit the DMA parameters (such as maximum segment size) from the
-        * parent device.
+        * parent host1x device.
         */
-       client->dev->dma_parms = client->parent->dma_parms;
+       client->dev->dma_parms = client->host->dma_parms;
 
        return 0;
 
@@ -209,7 +209,7 @@ detach:
 static int vic_exit(struct host1x_client *client)
 {
        struct tegra_drm_client *drm = host1x_to_drm_client(client);
-       struct drm_device *dev = dev_get_drvdata(client->parent);
+       struct drm_device *dev = dev_get_drvdata(client->host);
        struct tegra_drm *tegra = dev->dev_private;
        struct vic *vic = to_vic(drm);
        int err;
index 065974b..1f497d8 100644
@@ -2,9 +2,8 @@
 config DRM_UDL
        tristate "DisplayLink"
        depends on DRM
-       depends on USB_SUPPORT
+       depends on USB
        depends on USB_ARCH_HAS_HCD
-       select USB
        select DRM_GEM_SHMEM_HELPER
        select DRM_KMS_HELPER
        help
index 1a07462..eaa8e96 100644
@@ -140,7 +140,7 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
 {
        struct v3d_dev *v3d = to_v3d_dev(dev);
        struct v3d_file_priv *v3d_priv;
-       struct drm_sched_rq *rq;
+       struct drm_gpu_scheduler *sched;
        int i;
 
        v3d_priv = kzalloc(sizeof(*v3d_priv), GFP_KERNEL);
@@ -150,8 +150,10 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
        v3d_priv->v3d = v3d;
 
        for (i = 0; i < V3D_MAX_QUEUES; i++) {
-               rq = &v3d->queue[i].sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-               drm_sched_entity_init(&v3d_priv->sched_entity[i], &rq, 1, NULL);
+               sched = &v3d->queue[i].sched;
+               drm_sched_entity_init(&v3d_priv->sched_entity[i],
+                                     DRM_SCHED_PRIORITY_NORMAL, &sched,
+                                     1, NULL);
        }
 
        file->driver_priv = v3d_priv;
index 6c5b80a..fd8a2eb 100644
@@ -753,10 +753,19 @@ static void vc4_dsi_encoder_disable(struct drm_encoder *encoder)
        struct vc4_dsi_encoder *vc4_encoder = to_vc4_dsi_encoder(encoder);
        struct vc4_dsi *dsi = vc4_encoder->dsi;
        struct device *dev = &dsi->pdev->dev;
+       struct drm_bridge *iter;
+
+       list_for_each_entry_reverse(iter, &dsi->bridge_chain, chain_node) {
+               if (iter->funcs->disable)
+                       iter->funcs->disable(iter);
+       }
 
-       drm_bridge_chain_disable(dsi->bridge);
        vc4_dsi_ulps(dsi, true);
-       drm_bridge_chain_post_disable(dsi->bridge);
+
+       list_for_each_entry_from(iter, &dsi->bridge_chain, chain_node) {
+               if (iter->funcs->post_disable)
+                       iter->funcs->post_disable(iter);
+       }
 
        clk_disable_unprepare(dsi->pll_phy_clock);
        clk_disable_unprepare(dsi->escape_clock);
@@ -824,6 +833,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
        struct vc4_dsi *dsi = vc4_encoder->dsi;
        struct device *dev = &dsi->pdev->dev;
        bool debug_dump_regs = false;
+       struct drm_bridge *iter;
        unsigned long hs_clock;
        u32 ui_ns;
        /* Minimum LP state duration in escape clock cycles. */
@@ -1056,7 +1066,10 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
 
        vc4_dsi_ulps(dsi, false);
 
-       drm_bridge_chain_pre_enable(dsi->bridge);
+       list_for_each_entry_reverse(iter, &dsi->bridge_chain, chain_node) {
+               if (iter->funcs->pre_enable)
+                       iter->funcs->pre_enable(iter);
+       }
 
        if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO) {
                DSI_PORT_WRITE(DISP0_CTRL,
@@ -1073,7 +1086,10 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
                               DSI_DISP0_ENABLE);
        }
 
-       drm_bridge_chain_enable(dsi->bridge);
+       list_for_each_entry(iter, &dsi->bridge_chain, chain_node) {
+               if (iter->funcs->enable)
+                       iter->funcs->enable(iter);
+       }
 
        if (debug_dump_regs) {
                struct drm_printer p = drm_info_printer(&dsi->pdev->dev);
@@ -1613,7 +1629,7 @@ static int vc4_dsi_bind(struct device *dev, struct device *master, void *data)
         * from our driver, since we need to sequence them within the
         * encoder's enable/disable paths.
         */
-       list_splice(&dsi->encoder->bridge_chain, &dsi->bridge_chain);
+       list_splice_init(&dsi->encoder->bridge_chain, &dsi->bridge_chain);
 
        if (dsi->port == 0)
                vc4_debugfs_add_regset32(drm, "dsi0_regs", &dsi->regset);
@@ -1639,7 +1655,7 @@ static void vc4_dsi_unbind(struct device *dev, struct device *master,
         * Restore the bridge_chain so the bridge detach procedure can happen
         * normally.
         */
-       list_splice(&dsi->bridge_chain, &dsi->encoder->bridge_chain);
+       list_splice_init(&dsi->bridge_chain, &dsi->encoder->bridge_chain);
        vc4_dsi_encoder_destroy(dsi->encoder);
 
        if (dsi->port == 1)
index 1c62c6c..cea18dc 100644
@@ -267,7 +267,8 @@ static const struct drm_connector_helper_funcs vc4_hdmi_connector_helper_funcs =
 };
 
 static struct drm_connector *vc4_hdmi_connector_init(struct drm_device *dev,
-                                                    struct drm_encoder *encoder)
+                                                    struct drm_encoder *encoder,
+                                                    struct i2c_adapter *ddc)
 {
        struct drm_connector *connector;
        struct vc4_hdmi_connector *hdmi_connector;
@@ -281,8 +282,10 @@ static struct drm_connector *vc4_hdmi_connector_init(struct drm_device *dev,
 
        hdmi_connector->encoder = encoder;
 
-       drm_connector_init(dev, connector, &vc4_hdmi_connector_funcs,
-                          DRM_MODE_CONNECTOR_HDMIA);
+       drm_connector_init_with_ddc(dev, connector,
+                                   &vc4_hdmi_connector_funcs,
+                                   DRM_MODE_CONNECTOR_HDMIA,
+                                   ddc);
        drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs);
 
        /* Create and attach TV margin props to this connector. */
@@ -1395,7 +1398,8 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
                         DRM_MODE_ENCODER_TMDS, NULL);
        drm_encoder_helper_add(hdmi->encoder, &vc4_hdmi_encoder_helper_funcs);
 
-       hdmi->connector = vc4_hdmi_connector_init(drm, hdmi->encoder);
+       hdmi->connector =
+               vc4_hdmi_connector_init(drm, hdmi->encoder, hdmi->ddc);
        if (IS_ERR(hdmi->connector)) {
                ret = PTR_ERR(hdmi->connector);
                goto err_destroy_encoder;
index a50f5a1..b98a142 100644
@@ -319,8 +319,10 @@ static int zx_hdmi_register(struct drm_device *drm, struct zx_hdmi *hdmi)
 
        hdmi->connector.polled = DRM_CONNECTOR_POLL_HPD;
 
-       drm_connector_init(drm, &hdmi->connector, &zx_hdmi_connector_funcs,
-                          DRM_MODE_CONNECTOR_HDMIA);
+       drm_connector_init_with_ddc(drm, &hdmi->connector,
+                                   &zx_hdmi_connector_funcs,
+                                   DRM_MODE_CONNECTOR_HDMIA,
+                                   &hdmi->ddc->adap);
        drm_connector_helper_add(&hdmi->connector,
                                 &zx_hdmi_connector_helper_funcs);
 
index 9b67e41..c4fa3bb 100644
@@ -165,8 +165,10 @@ static int zx_vga_register(struct drm_device *drm, struct zx_vga *vga)
 
        vga->connector.polled = DRM_CONNECTOR_POLL_HPD;
 
-       ret = drm_connector_init(drm, connector, &zx_vga_connector_funcs,
-                                DRM_MODE_CONNECTOR_VGA);
+       ret = drm_connector_init_with_ddc(drm, connector,
+                                         &zx_vga_connector_funcs,
+                                         DRM_MODE_CONNECTOR_VGA,
+                                         &vga->ddc->adap);
        if (ret) {
                DRM_DEV_ERROR(dev, "failed to init connector: %d\n", ret);
                goto clean_encoder;
index 2c8559f..6a995db 100644
@@ -120,7 +120,7 @@ static void host1x_subdev_register(struct host1x_device *device,
        mutex_lock(&device->clients_lock);
        list_move_tail(&client->list, &device->clients);
        list_move_tail(&subdev->list, &device->active);
-       client->parent = &device->dev;
+       client->host = &device->dev;
        subdev->client = client;
        mutex_unlock(&device->clients_lock);
        mutex_unlock(&device->subdevs_lock);
@@ -156,7 +156,7 @@ static void __host1x_subdev_unregister(struct host1x_device *device,
         */
        mutex_lock(&device->clients_lock);
        subdev->client = NULL;
-       client->parent = NULL;
+       client->host = NULL;
        list_move_tail(&subdev->list, &device->subdevs);
        /*
         * XXX: Perhaps don't do this here, but rather explicitly remove it
@@ -710,6 +710,10 @@ int host1x_client_register(struct host1x_client *client)
        struct host1x *host1x;
        int err;
 
+       INIT_LIST_HEAD(&client->list);
+       mutex_init(&client->lock);
+       client->usecount = 0;
+
        mutex_lock(&devices_lock);
 
        list_for_each_entry(host1x, &devices, list) {
@@ -768,3 +772,74 @@ int host1x_client_unregister(struct host1x_client *client)
        return 0;
 }
 EXPORT_SYMBOL(host1x_client_unregister);
+
+int host1x_client_suspend(struct host1x_client *client)
+{
+       int err = 0;
+
+       mutex_lock(&client->lock);
+
+       if (client->usecount == 1) {
+               if (client->ops && client->ops->suspend) {
+                       err = client->ops->suspend(client);
+                       if (err < 0)
+                               goto unlock;
+               }
+       }
+
+       client->usecount--;
+       dev_dbg(client->dev, "use count: %u\n", client->usecount);
+
+       if (client->parent) {
+               err = host1x_client_suspend(client->parent);
+               if (err < 0)
+                       goto resume;
+       }
+
+       goto unlock;
+
+resume:
+       if (client->usecount == 0)
+               if (client->ops && client->ops->resume)
+                       client->ops->resume(client);
+
+       client->usecount++;
+unlock:
+       mutex_unlock(&client->lock);
+       return err;
+}
+EXPORT_SYMBOL(host1x_client_suspend);
+
+int host1x_client_resume(struct host1x_client *client)
+{
+       int err = 0;
+
+       mutex_lock(&client->lock);
+
+       if (client->parent) {
+               err = host1x_client_resume(client->parent);
+               if (err < 0)
+                       goto unlock;
+       }
+
+       if (client->usecount == 0) {
+               if (client->ops && client->ops->resume) {
+                       err = client->ops->resume(client);
+                       if (err < 0)
+                               goto suspend;
+               }
+       }
+
+       client->usecount++;
+       dev_dbg(client->dev, "use count: %u\n", client->usecount);
+
+       goto unlock;
+
+suspend:
+       if (client->parent)
+               host1x_client_suspend(client->parent);
+unlock:
+       mutex_unlock(&client->lock);
+       return err;
+}
+EXPORT_SYMBOL(host1x_client_resume);
index a738ea5..388bcc2 100644 (file)
@@ -339,10 +339,8 @@ static int host1x_probe(struct platform_device *pdev)
        }
 
        syncpt_irq = platform_get_irq(pdev, 0);
-       if (syncpt_irq < 0) {
-               dev_err(&pdev->dev, "failed to get IRQ: %d\n", syncpt_irq);
+       if (syncpt_irq < 0)
                return syncpt_irq;
-       }
 
        mutex_init(&host->devices_lock);
        INIT_LIST_HEAD(&host->devices);
index dd1cd01..fce7892 100644 (file)
@@ -421,7 +421,7 @@ int host1x_syncpt_init(struct host1x *host)
 struct host1x_syncpt *host1x_syncpt_request(struct host1x_client *client,
                                            unsigned long flags)
 {
-       struct host1x *host = dev_get_drvdata(client->parent->parent);
+       struct host1x *host = dev_get_drvdata(client->host->parent);
 
        return host1x_syncpt_alloc(host, client, flags);
 }
index 3c82de5..9add0fd 100644 (file)
@@ -9,12 +9,54 @@
 #include <linux/mailbox_controller.h>
 #include <linux/soc/mediatek/mtk-cmdq.h>
 
-#define CMDQ_ARG_A_WRITE_MASK  0xffff
 #define CMDQ_WRITE_ENABLE_MASK BIT(0)
+#define CMDQ_POLL_ENABLE_MASK  BIT(0)
 #define CMDQ_EOC_IRQ_EN                BIT(0)
 #define CMDQ_EOC_CMD           ((u64)((CMDQ_CODE_EOC << CMDQ_OP_CODE_SHIFT)) \
                                << 32 | CMDQ_EOC_IRQ_EN)
 
+struct cmdq_instruction {
+       union {
+               u32 value;
+               u32 mask;
+       };
+       union {
+               u16 offset;
+               u16 event;
+       };
+       u8 subsys;
+       u8 op;
+};
+
+int cmdq_dev_get_client_reg(struct device *dev,
+                           struct cmdq_client_reg *client_reg, int idx)
+{
+       struct of_phandle_args spec;
+       int err;
+
+       if (!client_reg)
+               return -ENOENT;
+
+       err = of_parse_phandle_with_fixed_args(dev->of_node,
+                                              "mediatek,gce-client-reg",
+                                              3, idx, &spec);
+       if (err < 0) {
+               dev_err(dev,
+                       "error %d can't parse gce-client-reg property (%d)",
+                       err, idx);
+
+               return err;
+       }
+
+       client_reg->subsys = (u8)spec.args[0];
+       client_reg->offset = (u16)spec.args[1];
+       client_reg->size = (u16)spec.args[2];
+       of_node_put(spec.np);
+
+       return 0;
+}
+EXPORT_SYMBOL(cmdq_dev_get_client_reg);
+
 static void cmdq_client_timeout(struct timer_list *t)
 {
        struct cmdq_client *client = from_timer(client, t, timer);
@@ -110,10 +152,10 @@ void cmdq_pkt_destroy(struct cmdq_pkt *pkt)
 }
 EXPORT_SYMBOL(cmdq_pkt_destroy);
 
-static int cmdq_pkt_append_command(struct cmdq_pkt *pkt, enum cmdq_code code,
-                                  u32 arg_a, u32 arg_b)
+static int cmdq_pkt_append_command(struct cmdq_pkt *pkt,
+                                  struct cmdq_instruction inst)
 {
-       u64 *cmd_ptr;
+       struct cmdq_instruction *cmd_ptr;
 
        if (unlikely(pkt->cmd_buf_size + CMDQ_INST_SIZE > pkt->buf_size)) {
                /*
@@ -129,8 +171,9 @@ static int cmdq_pkt_append_command(struct cmdq_pkt *pkt, enum cmdq_code code,
                        __func__, (u32)pkt->buf_size);
                return -ENOMEM;
        }
+
        cmd_ptr = pkt->va_base + pkt->cmd_buf_size;
-       (*cmd_ptr) = (u64)((code << CMDQ_OP_CODE_SHIFT) | arg_a) << 32 | arg_b;
+       *cmd_ptr = inst;
        pkt->cmd_buf_size += CMDQ_INST_SIZE;
 
        return 0;
@@ -138,24 +181,34 @@ static int cmdq_pkt_append_command(struct cmdq_pkt *pkt, enum cmdq_code code,
 
 int cmdq_pkt_write(struct cmdq_pkt *pkt, u8 subsys, u16 offset, u32 value)
 {
-       u32 arg_a = (offset & CMDQ_ARG_A_WRITE_MASK) |
-                   (subsys << CMDQ_SUBSYS_SHIFT);
+       struct cmdq_instruction inst;
 
-       return cmdq_pkt_append_command(pkt, CMDQ_CODE_WRITE, arg_a, value);
+       inst.op = CMDQ_CODE_WRITE;
+       inst.value = value;
+       inst.offset = offset;
+       inst.subsys = subsys;
+
+       return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_write);
 
 int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
                        u16 offset, u32 value, u32 mask)
 {
-       u32 offset_mask = offset;
-       int err = 0;
+       struct cmdq_instruction inst = { {0} };
+       u16 offset_mask = offset;
+       int err;
 
        if (mask != 0xffffffff) {
-               err = cmdq_pkt_append_command(pkt, CMDQ_CODE_MASK, 0, ~mask);
+               inst.op = CMDQ_CODE_MASK;
+               inst.mask = ~mask;
+               err = cmdq_pkt_append_command(pkt, inst);
+               if (err < 0)
+                       return err;
+
                offset_mask |= CMDQ_WRITE_ENABLE_MASK;
        }
-       err |= cmdq_pkt_write(pkt, subsys, offset_mask, value);
+       err = cmdq_pkt_write(pkt, subsys, offset_mask, value);
 
        return err;
 }
@@ -163,43 +216,85 @@ EXPORT_SYMBOL(cmdq_pkt_write_mask);
 
 int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event)
 {
-       u32 arg_b;
+       struct cmdq_instruction inst = { {0} };
 
        if (event >= CMDQ_MAX_EVENT)
                return -EINVAL;
 
-       /*
-        * WFE arg_b
-        * bit 0-11: wait value
-        * bit 15: 1 - wait, 0 - no wait
-        * bit 16-27: update value
-        * bit 31: 1 - update, 0 - no update
-        */
-       arg_b = CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | CMDQ_WFE_WAIT_VALUE;
+       inst.op = CMDQ_CODE_WFE;
+       inst.value = CMDQ_WFE_OPTION;
+       inst.event = event;
 
-       return cmdq_pkt_append_command(pkt, CMDQ_CODE_WFE, event, arg_b);
+       return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_wfe);
 
 int cmdq_pkt_clear_event(struct cmdq_pkt *pkt, u16 event)
 {
+       struct cmdq_instruction inst = { {0} };
+
        if (event >= CMDQ_MAX_EVENT)
                return -EINVAL;
 
-       return cmdq_pkt_append_command(pkt, CMDQ_CODE_WFE, event,
-                                      CMDQ_WFE_UPDATE);
+       inst.op = CMDQ_CODE_WFE;
+       inst.value = CMDQ_WFE_UPDATE;
+       inst.event = event;
+
+       return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_clear_event);
 
+int cmdq_pkt_poll(struct cmdq_pkt *pkt, u8 subsys,
+                 u16 offset, u32 value)
+{
+       struct cmdq_instruction inst = { {0} };
+       int err;
+
+       inst.op = CMDQ_CODE_POLL;
+       inst.value = value;
+       inst.offset = offset;
+       inst.subsys = subsys;
+       err = cmdq_pkt_append_command(pkt, inst);
+
+       return err;
+}
+EXPORT_SYMBOL(cmdq_pkt_poll);
+
+int cmdq_pkt_poll_mask(struct cmdq_pkt *pkt, u8 subsys,
+                      u16 offset, u32 value, u32 mask)
+{
+       struct cmdq_instruction inst = { {0} };
+       int err;
+
+       inst.op = CMDQ_CODE_MASK;
+       inst.mask = ~mask;
+       err = cmdq_pkt_append_command(pkt, inst);
+       if (err < 0)
+               return err;
+
+       offset = offset | CMDQ_POLL_ENABLE_MASK;
+       err = cmdq_pkt_poll(pkt, subsys, offset, value);
+
+       return err;
+}
+EXPORT_SYMBOL(cmdq_pkt_poll_mask);
+
 static int cmdq_pkt_finalize(struct cmdq_pkt *pkt)
 {
+       struct cmdq_instruction inst = { {0} };
        int err;
 
        /* insert EOC and generate IRQ for each command iteration */
-       err = cmdq_pkt_append_command(pkt, CMDQ_CODE_EOC, 0, CMDQ_EOC_IRQ_EN);
+       inst.op = CMDQ_CODE_EOC;
+       inst.value = CMDQ_EOC_IRQ_EN;
+       err = cmdq_pkt_append_command(pkt, inst);
+       if (err < 0)
+               return err;
 
        /* JUMP to end */
-       err |= cmdq_pkt_append_command(pkt, CMDQ_CODE_JUMP, 0, CMDQ_JUMP_PASS);
+       inst.op = CMDQ_CODE_JUMP;
+       inst.value = CMDQ_JUMP_PASS;
+       err = cmdq_pkt_append_command(pkt, inst);
 
        return err;
 }
index b877a60..88c137f 100644 (file)
@@ -456,7 +456,6 @@ static int mmphw_probe(struct platform_device *pdev)
 
        irq = platform_get_irq(pdev, 0);
        if (irq < 0) {
-               dev_err(&pdev->dev, "%s: no IRQ defined\n", __func__);
                ret = -ENOENT;
                goto failed;
        }
index ccce65e..951dfb1 100644 (file)
@@ -670,9 +670,6 @@ __drm_atomic_get_current_plane_state(struct drm_atomic_state *state,
 }
 
 int __must_check
-drm_atomic_add_encoder_bridges(struct drm_atomic_state *state,
-                              struct drm_encoder *encoder);
-int __must_check
 drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
                                   struct drm_crtc *crtc);
 int __must_check
index 46e1552..694e153 100644 (file)
@@ -25,8 +25,6 @@
 
 #include <linux/list.h>
 #include <linux/ctype.h>
-
-#include <drm/drm_atomic.h>
 #include <drm/drm_encoder.h>
 #include <drm/drm_mode_object.h>
 #include <drm/drm_modes.h>
@@ -36,65 +34,6 @@ struct drm_bridge_timings;
 struct drm_panel;
 
 /**
- * struct drm_bus_cfg - bus configuration
- *
- * This structure stores the configuration of a physical bus between two
- * components in an output pipeline, usually between two bridges, an encoder
- * and a bridge, or a bridge and a connector.
- *
- * The bus configuration is stored in &drm_bridge_state separately for the
- * input and output buses, as seen from the point of view of each bridge. The
- * bus configuration of a bridge output is usually identical to the
- * configuration of the next bridge's input, but may differ if the signals are
- * modified between the two bridges, for instance by an inverter on the board.
- * The input and output configurations of a bridge may differ if the bridge
- * modifies the signals internally, for instance by performing format
- * conversion, or modifying signals polarities.
- */
-struct drm_bus_cfg {
-       /**
-        * @format: format used on this bus (one of the MEDIA_BUS_FMT_* format)
-        *
-        * This field should not be directly modified by drivers
-        * (&drm_atomic_bridge_chain_select_bus_fmts() takes care of the bus
-        * format negotiation).
-        */
-       u32 format;
-
-       /**
-        * @flags: DRM_BUS_* flags used on this bus
-        */
-       u32 flags;
-};
-
-/**
- * struct drm_bridge_state - Atomic bridge state object
- * @base: inherit from &drm_private_state
- * @bridge: the bridge this state refers to
- */
-struct drm_bridge_state {
-       struct drm_private_state base;
-
-       struct drm_bridge *bridge;
-
-       /**
-        * @input_bus_cfg: input bus configuration
-        */
-       struct drm_bus_cfg input_bus_cfg;
-
-       /**
-        * @output_bus_cfg: input bus configuration
-        */
-       struct drm_bus_cfg output_bus_cfg;
-};
-
-static inline struct drm_bridge_state *
-drm_priv_to_bridge_state(struct drm_private_state *priv)
-{
-       return container_of(priv, struct drm_bridge_state, base);
-}
-
-/**
  * struct drm_bridge_funcs - drm_bridge control functions
  */
 struct drm_bridge_funcs {
@@ -170,9 +109,7 @@ struct drm_bridge_funcs {
         * this function passes all other callbacks must succeed for this
         * configuration.
         *
-        * The mode_fixup callback is optional. &drm_bridge_funcs.mode_fixup()
-        * is not called when &drm_bridge_funcs.atomic_check() is implemented,
-        * so only one of them should be provided.
+        * The @mode_fixup callback is optional.
         *
         * NOTE:
         *
@@ -326,7 +263,7 @@ struct drm_bridge_funcs {
         * The @atomic_pre_enable callback is optional.
         */
        void (*atomic_pre_enable)(struct drm_bridge *bridge,
-                                 struct drm_bridge_state *old_bridge_state);
+                                 struct drm_atomic_state *old_state);
 
        /**
         * @atomic_enable:
@@ -351,7 +288,7 @@ struct drm_bridge_funcs {
         * The @atomic_enable callback is optional.
         */
        void (*atomic_enable)(struct drm_bridge *bridge,
-                             struct drm_bridge_state *old_bridge_state);
+                             struct drm_atomic_state *old_state);
        /**
         * @atomic_disable:
         *
@@ -374,7 +311,7 @@ struct drm_bridge_funcs {
         * The @atomic_disable callback is optional.
         */
        void (*atomic_disable)(struct drm_bridge *bridge,
-                              struct drm_bridge_state *old_bridge_state);
+                              struct drm_atomic_state *old_state);
 
        /**
         * @atomic_post_disable:
@@ -400,146 +337,7 @@ struct drm_bridge_funcs {
         * The @atomic_post_disable callback is optional.
         */
        void (*atomic_post_disable)(struct drm_bridge *bridge,
-                                   struct drm_bridge_state *old_bridge_state);
-
-       /**
-        * @atomic_duplicate_state:
-        *
-        * Duplicate the current bridge state object (which is guaranteed to be
-        * non-NULL).
-        *
-        * The atomic_duplicate_state() is optional. When not implemented the
-        * core allocates a drm_bridge_state object and calls
-        * &__drm_atomic_helper_bridge_duplicate_state() to initialize it.
-        *
-        * RETURNS:
-        * A valid drm_bridge_state object or NULL if the allocation fails.
-        */
-       struct drm_bridge_state *(*atomic_duplicate_state)(struct drm_bridge *bridge);
-
-       /**
-        * @atomic_destroy_state:
-        *
-        * Destroy a bridge state object previously allocated by
-        * &drm_bridge_funcs.atomic_duplicate_state().
-        *
-        * The atomic_destroy_state hook is optional. When not implemented the
-        * core calls kfree() on the state.
-        */
-       void (*atomic_destroy_state)(struct drm_bridge *bridge,
-                                    struct drm_bridge_state *state);
-
-       /**
-        * @atomic_get_output_bus_fmts:
-        *
-        * Return the supported bus formats on the output end of a bridge.
-        * The returned array must be allocated with kmalloc() and will be
-        * freed by the caller. If the allocation fails, NULL should be
-        * returned. num_output_fmts must be set to the returned array size.
-        * Formats listed in the returned array should be listed in decreasing
-        * preference order (the core will try all formats until it finds one
-        * that works).
-        *
-        * This method is only called on the last element of the bridge chain
-        * as part of the bus format negotiation process that happens in
-        * &drm_atomic_bridge_chain_select_bus_fmts().
-        * This method is optional. When not implemented, the core will
-        * fall back to &drm_connector.display_info.bus_formats[0] if
-        * &drm_connector.display_info.num_bus_formats > 0,
-        * or to MEDIA_BUS_FMT_FIXED otherwise.
-        */
-       u32 *(*atomic_get_output_bus_fmts)(struct drm_bridge *bridge,
-                                          struct drm_bridge_state *bridge_state,
-                                          struct drm_crtc_state *crtc_state,
-                                          struct drm_connector_state *conn_state,
-                                          unsigned int *num_output_fmts);
-
-       /**
-        * @atomic_get_input_bus_fmts:
-        *
-        * Return the supported bus formats on the input end of a bridge for
-        * a specific output bus format.
-        *
-        * The returned array must be allocated with kmalloc() and will be
-        * freed by the caller. If the allocation fails, NULL should be
-        * returned. num_output_fmts must be set to the returned array size.
-        * Formats listed in the returned array should be listed in decreasing
-        * preference order (the core will try all formats until it finds one
-        * that works). When the format is not supported NULL should be
-        * returned and *num_output_fmts should be set to 0.
-        *
-        * This method is called on all elements of the bridge chain as part of
-        * the bus format negotiation process that happens in
-        * &drm_atomic_bridge_chain_select_bus_fmts().
-        * This method is optional. When not implemented, the core will bypass
-        * bus format negotiation on this element of the bridge without
-        * failing, and the previous element in the chain will be passed
-        * MEDIA_BUS_FMT_FIXED as its output bus format.
-        *
-        * Bridge drivers that need to support being linked to bridges that are
-        * not supporting bus format negotiation should handle the
-        * output_fmt == MEDIA_BUS_FMT_FIXED case appropriately, by selecting a
-        * sensible default value or extracting this information from somewhere
-        * else (FW property, &drm_display_mode, &drm_display_info, ...)
-        *
-        * Note: Even if input format selection on the first bridge has no
-        * impact on the negotiation process (bus format negotiation stops once
-        * we reach the first element of the chain), drivers are expected to
-        * return accurate input formats as the input format may be used to
-        * configure the CRTC output appropriately.
-        */
-       u32 *(*atomic_get_input_bus_fmts)(struct drm_bridge *bridge,
-                                         struct drm_bridge_state *bridge_state,
-                                         struct drm_crtc_state *crtc_state,
-                                         struct drm_connector_state *conn_state,
-                                         u32 output_fmt,
-                                         unsigned int *num_input_fmts);
-
-       /**
-        * @atomic_check:
-        *
-        * This method is responsible for checking bridge state correctness.
-        * It can also check the state of the surrounding components in chain
-        * to make sure the whole pipeline can work properly.
-        *
-        * &drm_bridge_funcs.atomic_check() hooks are called in reverse
-        * order (from the last to the first bridge).
-        *
-        * This method is optional. &drm_bridge_funcs.mode_fixup() is not
-        * called when &drm_bridge_funcs.atomic_check() is implemented, so only
-        * one of them should be provided.
-        *
-        * If drivers need to tweak &drm_bridge_state.input_bus_cfg.flags or
-        * &drm_bridge_state.output_bus_cfg.flags it should should happen in
-        * this function. By default the &drm_bridge_state.output_bus_cfg.flags
-        * field is set to the next bridge
-        * &drm_bridge_state.input_bus_cfg.flags value or
-        * &drm_connector.display_info.bus_flags if the bridge is the last
-        * element in the chain.
-        *
-        * RETURNS:
-        * zero if the check passed, a negative error code otherwise.
-        */
-       int (*atomic_check)(struct drm_bridge *bridge,
-                           struct drm_bridge_state *bridge_state,
-                           struct drm_crtc_state *crtc_state,
-                           struct drm_connector_state *conn_state);
-
-       /**
-        * @atomic_reset:
-        *
-        * Reset the bridge to a predefined state (or retrieve its current
-        * state) and return a &drm_bridge_state object matching this state.
-        * This function is called at attach time.
-        *
-        * The atomic_reset hook is optional. When not implemented the core
-        * allocates a new state and calls &__drm_atomic_helper_bridge_reset().
-        *
-        * RETURNS:
-        * A valid drm_bridge_state object in case of success, an ERR_PTR()
-        * giving the reason of the failure otherwise.
-        */
-       struct drm_bridge_state *(*atomic_reset)(struct drm_bridge *bridge);
+                                   struct drm_atomic_state *old_state);
 };
 
 /**
@@ -582,8 +380,6 @@ struct drm_bridge_timings {
  * struct drm_bridge - central DRM bridge control structure
  */
 struct drm_bridge {
-       /** @base: inherit from &drm_private_object */
-       struct drm_private_obj base;
        /** @dev: DRM device this bridge belongs to */
        struct drm_device *dev;
        /** @encoder: encoder to which this bridge is connected */
@@ -608,12 +404,6 @@ struct drm_bridge {
        void *driver_private;
 };
 
-static inline struct drm_bridge *
-drm_priv_to_bridge(struct drm_private_obj *priv)
-{
-       return container_of(priv, struct drm_bridge, base);
-}
-
 void drm_bridge_add(struct drm_bridge *bridge);
 void drm_bridge_remove(struct drm_bridge *bridge);
 struct drm_bridge *of_drm_find_bridge(struct device_node *np);
@@ -692,9 +482,6 @@ void drm_bridge_chain_mode_set(struct drm_bridge *bridge,
 void drm_bridge_chain_pre_enable(struct drm_bridge *bridge);
 void drm_bridge_chain_enable(struct drm_bridge *bridge);
 
-int drm_atomic_bridge_chain_check(struct drm_bridge *bridge,
-                                 struct drm_crtc_state *crtc_state,
-                                 struct drm_connector_state *conn_state);
 void drm_atomic_bridge_chain_disable(struct drm_bridge *bridge,
                                     struct drm_atomic_state *state);
 void drm_atomic_bridge_chain_post_disable(struct drm_bridge *bridge,
@@ -704,58 +491,6 @@ void drm_atomic_bridge_chain_pre_enable(struct drm_bridge *bridge,
 void drm_atomic_bridge_chain_enable(struct drm_bridge *bridge,
                                    struct drm_atomic_state *state);
 
-u32 *
-drm_atomic_helper_bridge_propagate_bus_fmt(struct drm_bridge *bridge,
-                                       struct drm_bridge_state *bridge_state,
-                                       struct drm_crtc_state *crtc_state,
-                                       struct drm_connector_state *conn_state,
-                                       u32 output_fmt,
-                                       unsigned int *num_input_fmts);
-
-void __drm_atomic_helper_bridge_reset(struct drm_bridge *bridge,
-                                     struct drm_bridge_state *state);
-void __drm_atomic_helper_bridge_duplicate_state(struct drm_bridge *bridge,
-                                               struct drm_bridge_state *new);
-
-static inline struct drm_bridge_state *
-drm_atomic_get_bridge_state(struct drm_atomic_state *state,
-                           struct drm_bridge *bridge)
-{
-       struct drm_private_state *obj_state;
-
-       obj_state = drm_atomic_get_private_obj_state(state, &bridge->base);
-       if (IS_ERR(obj_state))
-               return ERR_CAST(obj_state);
-
-       return drm_priv_to_bridge_state(obj_state);
-}
-
-static inline struct drm_bridge_state *
-drm_atomic_get_old_bridge_state(struct drm_atomic_state *state,
-                               struct drm_bridge *bridge)
-{
-       struct drm_private_state *obj_state;
-
-       obj_state = drm_atomic_get_old_private_obj_state(state, &bridge->base);
-       if (!obj_state)
-               return NULL;
-
-       return drm_priv_to_bridge_state(obj_state);
-}
-
-static inline struct drm_bridge_state *
-drm_atomic_get_new_bridge_state(struct drm_atomic_state *state,
-                               struct drm_bridge *bridge)
-{
-       struct drm_private_state *obj_state;
-
-       obj_state = drm_atomic_get_new_private_obj_state(state, &bridge->base);
-       if (!obj_state)
-               return NULL;
-
-       return drm_priv_to_bridge_state(obj_state);
-}
-
 #ifdef CONFIG_DRM_PANEL_BRIDGE
 struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel);
 struct drm_bridge *drm_panel_bridge_add_typed(struct drm_panel *panel,
index 8f8f363..bc04467 100644 (file)
@@ -1465,6 +1465,7 @@ int drm_dp_downstream_id(struct drm_dp_aux *aux, char id[6]);
 void drm_dp_downstream_debug(struct seq_file *m, const u8 dpcd[DP_RECEIVER_CAP_SIZE],
                             const u8 port_cap[4], struct drm_dp_aux *aux);
 
+void drm_dp_remote_aux_init(struct drm_dp_aux *aux);
 void drm_dp_aux_init(struct drm_dp_aux *aux);
 int drm_dp_aux_register(struct drm_dp_aux *aux);
 void drm_dp_aux_unregister(struct drm_dp_aux *aux);
@@ -1522,6 +1523,13 @@ enum drm_dp_quirk {
         * The driver should ignore SINK_COUNT during detection.
         */
        DP_DPCD_QUIRK_NO_SINK_COUNT,
+       /**
+        * @DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD:
+        *
+        * The device supports MST DSC despite not supporting Virtual DPCD.
+        * The DSC caps can be read from the physical aux instead.
+        */
+       DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD,
 };
 
 /**
index 5699493..e550377 100644 (file)
@@ -156,6 +156,8 @@ struct drm_dp_mst_port {
         * audio-capable.
         */
        bool has_audio;
+
+       bool fec_capable;
 };
 
 /**
@@ -383,6 +385,7 @@ struct drm_dp_port_number_req {
 
 struct drm_dp_enum_path_resources_ack_reply {
        u8 port_number;
+       bool fec_capable;
        u16 full_payload_bw_number;
        u16 avail_payload_bw_number;
 };
@@ -499,6 +502,8 @@ struct drm_dp_payload {
 struct drm_dp_vcpi_allocation {
        struct drm_dp_mst_port *port;
        int vcpi;
+       int pbn;
+       bool dsc_enabled;
        struct list_head next;
 };
 
@@ -727,8 +732,7 @@ bool drm_dp_mst_port_has_audio(struct drm_dp_mst_topology_mgr *mgr,
 struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
 
 
-int drm_dp_calc_pbn_mode(int clock, int bpp);
-
+int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
 
 bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
                              struct drm_dp_mst_port *port, int pbn, int slots);
@@ -777,7 +781,15 @@ struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_a
 int __must_check
 drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
                              struct drm_dp_mst_topology_mgr *mgr,
-                             struct drm_dp_mst_port *port, int pbn);
+                             struct drm_dp_mst_port *port, int pbn,
+                             int pbn_div);
+int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state,
+                                struct drm_dp_mst_port *port,
+                                int pbn, int pbn_div,
+                                bool enable);
+int __must_check
+drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state,
+                                 struct drm_dp_mst_topology_mgr *mgr);
 int __must_check
 drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
                                 struct drm_dp_mst_topology_mgr *mgr,
@@ -789,6 +801,8 @@ int __must_check drm_dp_mst_atomic_check(struct drm_atomic_state *state);
 void drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port);
 void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port);
 
+struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port);
+
 extern const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs;
 
 /**
index 4becb09..795aea1 100644 (file)
@@ -2,6 +2,8 @@
 #ifndef __DRM_FB_CMA_HELPER_H__
 #define __DRM_FB_CMA_HELPER_H__
 
+#include <linux/types.h>
+
 struct drm_framebuffer;
 struct drm_plane_state;
 
index 684692a..96a1a1b 100644 (file)
@@ -81,8 +81,9 @@ enum drm_sched_priority {
 struct drm_sched_entity {
        struct list_head                list;
        struct drm_sched_rq             *rq;
-       struct drm_sched_rq             **rq_list;
-       unsigned int                    num_rq_list;
+       unsigned int                    num_sched_list;
+       struct drm_gpu_scheduler        **sched_list;
+       enum drm_sched_priority         priority;
        spinlock_t                      rq_lock;
 
        struct spsc_queue               job_queue;
@@ -312,7 +313,8 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
                                struct drm_sched_entity *entity);
 
 int drm_sched_entity_init(struct drm_sched_entity *entity,
-                         struct drm_sched_rq **rq_list,
+                         enum drm_sched_priority priority,
+                         struct drm_gpu_scheduler **sched_list,
                          unsigned int num_rq_list,
                          atomic_t *guilty);
 long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout);
diff --git a/include/drm/task_barrier.h b/include/drm/task_barrier.h
new file mode 100644 (file)
index 0000000..087e3f6
--- /dev/null
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/semaphore.h>
+#include <linux/atomic.h>
+
+/*
+ * Reusable 2-phase task barrier (rendezvous point) implementation for N tasks.
+ * Based on The Little Book of Semaphores - https://greenteapress.com/wp/semaphores/
+ */
+
+
+
+#ifndef DRM_TASK_BARRIER_H_
+#define DRM_TASK_BARRIER_H_
+
+/*
+ * Represents an instance of a task barrier.
+ */
+struct task_barrier {
+       unsigned int n;
+       atomic_t count;
+       struct semaphore enter_turnstile;
+       struct semaphore exit_turnstile;
+};
+
+static inline void task_barrier_signal_turnstile(struct semaphore *turnstile,
+                                                unsigned int n)
+{
+       int i;
+
+       for (i = 0; i < n; i++)
+               up(turnstile);
+}
+
+static inline void task_barrier_init(struct task_barrier *tb)
+{
+       tb->n = 0;
+       atomic_set(&tb->count, 0);
+       sema_init(&tb->enter_turnstile, 0);
+       sema_init(&tb->exit_turnstile, 0);
+}
+
+static inline void task_barrier_add_task(struct task_barrier *tb)
+{
+       tb->n++;
+}
+
+static inline void task_barrier_rem_task(struct task_barrier *tb)
+{
+       tb->n--;
+}
+
+/*
+ * Lines up all the threads BEFORE the critical point.
+ *
+ * When all threads have passed this code, the entry barrier is back to its
+ * locked state.
+ */
+static inline void task_barrier_enter(struct task_barrier *tb)
+{
+       if (atomic_inc_return(&tb->count) == tb->n)
+               task_barrier_signal_turnstile(&tb->enter_turnstile, tb->n);
+
+       down(&tb->enter_turnstile);
+}
+
+/*
+ * Lines up all the threads AFTER the critical point.
+ *
+ * This function is used to avoid any one thread running ahead if the barrier
+ * is used repeatedly.
+ */
+static inline void task_barrier_exit(struct task_barrier *tb)
+{
+       if (atomic_dec_return(&tb->count) == 0)
+               task_barrier_signal_turnstile(&tb->exit_turnstile, tb->n);
+
+       down(&tb->exit_turnstile);
+}
+
+/* Convenience function when there is nothing to be done between entry and exit */
+static inline void task_barrier_full(struct task_barrier *tb)
+{
+       task_barrier_enter(tb);
+       task_barrier_exit(tb);
+}
+
+#endif
index 6edeb92..62d216f 100644 (file)
@@ -24,16 +24,20 @@ struct iommu_group;
  * struct host1x_client_ops - host1x client operations
  * @init: host1x client initialization code
  * @exit: host1x client tear down code
+ * @suspend: host1x client suspend code
+ * @resume: host1x client resume code
  */
 struct host1x_client_ops {
        int (*init)(struct host1x_client *client);
        int (*exit)(struct host1x_client *client);
+       int (*suspend)(struct host1x_client *client);
+       int (*resume)(struct host1x_client *client);
 };
 
 /**
  * struct host1x_client - host1x client structure
  * @list: list node for the host1x client
- * @parent: pointer to struct device representing the host1x controller
+ * @host: pointer to struct device representing the host1x controller
  * @dev: pointer to struct device backing this host1x client
  * @group: IOMMU group that this client is a member of
  * @ops: host1x client operations
@@ -44,7 +48,7 @@ struct host1x_client_ops {
  */
 struct host1x_client {
        struct list_head list;
-       struct device *parent;
+       struct device *host;
        struct device *dev;
        struct iommu_group *group;
 
@@ -55,6 +59,10 @@ struct host1x_client {
 
        struct host1x_syncpt **syncpts;
        unsigned int num_syncpts;
+
+       struct host1x_client *parent;
+       unsigned int usecount;
+       struct mutex lock;
 };
 
 /*
@@ -309,6 +317,9 @@ int host1x_device_exit(struct host1x_device *device);
 int host1x_client_register(struct host1x_client *client);
 int host1x_client_unregister(struct host1x_client *client);
 
+int host1x_client_suspend(struct host1x_client *client);
+int host1x_client_resume(struct host1x_client *client);
+
 struct tegra_mipi_device;
 
 struct tegra_mipi_device *tegra_mipi_request(struct device *device);
index e6f54ef..a4dc45f 100644 (file)
 #define CMDQ_WFE_WAIT                  BIT(15)
 #define CMDQ_WFE_WAIT_VALUE            0x1
 
+/*
+ * WFE arg_b
+ * bit 0-11: wait value
+ * bit 15: 1 - wait, 0 - no wait
+ * bit 16-27: update value
+ * bit 31: 1 - update, 0 - no update
+ */
+#define CMDQ_WFE_OPTION                        (CMDQ_WFE_UPDATE | CMDQ_WFE_WAIT | \
+                                       CMDQ_WFE_WAIT_VALUE)
+
 /** cmdq event maximum */
 #define CMDQ_MAX_EVENT                 0x3ff
 
@@ -45,6 +55,7 @@
 enum cmdq_code {
        CMDQ_CODE_MASK = 0x02,
        CMDQ_CODE_WRITE = 0x04,
+       CMDQ_CODE_POLL = 0x08,
        CMDQ_CODE_JUMP = 0x10,
        CMDQ_CODE_WFE = 0x20,
        CMDQ_CODE_EOC = 0x40,
index 9618deb..a74c1d5 100644 (file)
 
 struct cmdq_pkt;
 
+struct cmdq_client_reg {
+       u8 subsys;
+       u16 offset;
+       u16 size;
+};
+
 struct cmdq_client {
        spinlock_t lock;
        u32 pkt_cnt;
@@ -25,6 +31,21 @@ struct cmdq_client {
 };
 
 /**
+ * cmdq_dev_get_client_reg() - parse cmdq client reg from the device
+ *                            node of CMDQ client
+ * @dev:       device of CMDQ mailbox client
+ * @client_reg: CMDQ client reg pointer
+ * @idx:       the index of desired reg
+ *
+ * Return: 0 for success; else the error code is returned
+ *
+ * Helps a CMDQ client parse its cmdq client reg
+ * from the client's device node.
+ */
+int cmdq_dev_get_client_reg(struct device *dev,
+                           struct cmdq_client_reg *client_reg, int idx);
+
+/**
  * cmdq_mbox_create() - create CMDQ mailbox client and channel
  * @dev:       device of CMDQ mailbox client
  * @index:     index of CMDQ mailbox channel
@@ -100,6 +121,38 @@ int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event);
 int cmdq_pkt_clear_event(struct cmdq_pkt *pkt, u16 event);
 
 /**
+ * cmdq_pkt_poll() - Append a polling command to the CMDQ packet, asking the
+ *                  GCE to execute an instruction that waits for a specified
+ *                  hardware register to reach the given value, without
+ *                  applying a mask. All GCE hardware threads will be
+ *                  blocked by this instruction.
+ * @pkt:       the CMDQ packet
+ * @subsys:    the CMDQ sub system code
+ * @offset:    register offset from CMDQ sub system
+ * @value:     the specified target register value
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_poll(struct cmdq_pkt *pkt, u8 subsys,
+                 u16 offset, u32 value);
+
+/**
+ * cmdq_pkt_poll_mask() - Append a polling command to the CMDQ packet, asking
+ *                       the GCE to execute an instruction that waits for a
+ *                       specified hardware register, with a mask applied, to
+ *                       reach the given value. All GCE hardware threads will
+ *                       be blocked by this instruction.
+ * @pkt:       the CMDQ packet
+ * @subsys:    the CMDQ sub system code
+ * @offset:    register offset from CMDQ sub system
+ * @value:     the specified target register value
+ * @mask:      the specified target register mask
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_poll_mask(struct cmdq_pkt *pkt, u8 subsys,
+                      u16 offset, u32 value, u32 mask);
+
+/**
  * cmdq_pkt_flush_async() - trigger CMDQ to asynchronously execute the CMDQ
  *                          packet and call back at the end of done packet
  * @pkt:       the CMDQ packet