author    Stephen Warren <swarren@nvidia.com>  2016-09-23 16:44:51 -0600
committer Tom Warren <twarren@nvidia.com>      2016-09-27 09:11:02 -0700
commit    74686766847146e4408486c5e3ca8a1681b145c0 (patch)
tree      f3ea903ef107802cff616f5479b3c822863c7ec0
parent    4a332d3ee770bd6b633fd3abba741451b17156bc (diff)
ARM: tegra: fix clock_get_periph_rate() for UART clocks
Make clock_get_periph_rate() return the correct rate for UART clocks.
This change must be applied before the patches that enable CONFIG_CLK
on pre-Tegra186 SoCs, since enabling that option causes
ns16550_serial_ofdata_to_platdata() to rely on clk_get_rate() for UART
clocks, and clk_get_rate() eventually calls clock_get_periph_rate().
This change is a rather horrible hack, as explained in the comment added
to the clock driver. I've tried fixing this correctly for all clocks as
described in that comment, but there's too much fallout elsewhere. I
believe the clock driver contains a number of bugs that currently cancel
each other out, and unravelling that chain is too complex at present.
This change is the smallest one that fixes clock_get_periph_rate() for
UART clocks while guaranteeing no change in behaviour for any other
clock, which avoids other regressions.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>