Discussion:
[coreboot] Update on G505S (dGPU init success)
Hans Jürgen Kitter
2018-11-03 17:41:15 UTC
Hi All,

I thought I'd share with the G505S+Coreboot (+QubesOS) users that I
finally managed to enable and use the dGPUs on the G505S boards. By
boards I mean both variants with their respective VBIOSes:
dGPU=1002,6663 (ATI HD 8570M, Sun-Pro) and dGPU=1002,6665 (ATI R5 M230,
Jet-Pro); both are working under Ubuntu Linux and Qubes 4.0 (no Windows
testing was done or planned).
DRI_PRIME=1 seems to work; I use the radeon driver for the APU GPU
(A10-5750M) and amdgpu for the dGPU. I'm currently investigating how to get
the most out of this setup in Qubes (HW-accelerated 2D in AppVMs).

Problem statement: the dGPU was not initialized correctly and was not
enabled by the radeon or amdgpu kernel module; the VFCT header appeared
corrupt/truncated (effectively it was not corrupt, but only the VFCT for
the APU/iGPU was present). All my attempts to init the dGPU device through
OpROM initialization failed (radeon 0000:04:00.0: Invalid PCI ROM header
signature: expecting 0xaa55, got 0x0000). This was true whether using
Coreboot+GRUB or Coreboot+SeaBIOS, regardless of the 4.x kernel version.

Solution in short: modify Coreboot so that the VFCT table is created only
for the dGPU, so it can be used by either the radeon or amdgpu KMS driver.

Solution in more details:

- CB 4.8 latest, with SB latest 1.11.2 as payload
- I modified the IO address & the PCI device ID in the APU GPU VBIOS file
(even though the IO address change to 0x3000 might not be required).
- I modified the IO address in the dGPU VBIOS files to the actual
CB/Linux-kernel-provided (0x1000) address
- both prepared VBIOSes were put into CBFS
- Coreboot config: standard G505S build, but both "run oprom" & "load
oprom" were enabled (native mode); the display was set to FB, 1024x768
16.8M color 8:8:8, legacy VGA → this provides console output even if e.g.
GRUB is the payload
- main payload Seabios (compiled as elf separately)
- I had to add my SPI chip driver for EN25QH32=0x7016 to CB
- I used version 0600111F AMD Fam 15h microcode (even though this was
later recalled by AMD to the n-1 version, 06001119). Nevertheless, the
Spectre V2 RSB and IBPB mitigations are thus enabled.
- I enabled pci 00:02.0 for the dGPU in devicetree.cb → according to the CB
logs this causes PCI resource allocation problems, because the 256M address
window of the dGPU conflicts with the APU GPU VRAM UMA memory window
(originally 512M). Solution (not really professional, but I gave up on
understanding the whole AGESA and CB PCI allocation mechanism): I decreased
the UMA size in buildOpts.c to 256M (0x1000) and patched amd_mtrr.c,
function setup_bsp_ramtop, to bring TOM down by 256M. So now the APU GPU
has a 256M PCI address window @0xd0000000 and the dGPU also has 256M from
0xe0000000 without a resource conflict (there's an initial allocation
warning for pci:00:01.0 in the Linux log though).
- I modified pci_device.c, function pci_dev_init, to only enable
pci_rom_probe and pci_rom_load for the dGPU PCI device (no OpROM exec
needed for PCI_CLASS_DISPLAY_OTHER).
- I modified pci_rom.c,
  - so the VBIOS of the dGPU is copied to 0xd0000 from CBFS
(..check_initialized..)
  - and pci_rom_write_acpi_tables runs only for PCI_CLASS_DEVICE_OTHER →
the dGPU (the ACPI VFCT fill is done from the previously prepared
0xd0000=PCI_RAM_IMAGE_START address)

Remark1: there could be one ACPI VFCT table containing both VBIOSes, but
the APU GPU VBIOS is fully working via OpROM exec alone.
Remark2: in the stock v3 BIOS additional ACPI tables are present (SSDT,
DSDT) which contain VGA-related methods and descriptions; therefore the
VFCT table itself (in EFI mode) is a lot smaller, ca. 20K (vs. 65K). These
additional ACPI extensions allow e.g. vga_switcheroo to work under
Ubuntu on the stock BIOS. With CB, currently only offloading (DRI...) seems
to work. And as I checked, I will need to understand the whole ACPI concept
before I can migrate the relevant ACPI tables from the stock BIOS to CB for
the switching to work. Also, it seems that TurboCore (PowerNow?) is not
currently enabled entirely in CB --> it is also implemented via ACPI tables
in the stock BIOS.
Well, I'm not a coding expert (just a G505S enthusiast), which is the
reason I didn't include patches. My goal was to describe a working
solution, so that someone with proper coding skills and the ability to
submit official patches to CB can get this committed.

BR,
HJK
Mike Banon
2018-11-03 20:13:31 UTC
Dear HJK,

My sincere, huge congratulations on your incredible
breakthrough! :-) As you can see from
http://mail.coreboot.org/pipermail/coreboot/2018-November/087679.html
I got stuck at a kernel panic after loading both iGPU and dGPU OpROMs,
and thought it was only because of a broken PCI config. But after
reading about your good experience, it seems there were some other
problems as well which you've successfully solved. Could you please
clarify some details:
1)
I modified the IO address & the PCI device ID in APU GPU VBIOS file (even though IO address change to 0x3000 might not required).
I modified the IO address in the dGPU VBIOS files to the actual CB - linux kernel provided (0x1000) address
1a) Please confirm that you've modified the IO addresses as follows:
APU iGPU (integrated GPU) = 0x3000
dGPU (discrete GPU) = 0x1000
1b) Why these exact "IO address" values? How did you come up with them,
and why not different addresses?
1c) At what offsets (relative to the beginning of the VBIOS files)
are these "IO address" values situated?
1d) To what value did you change the PCI device ID in the APU iGPU VBIOS,
and why is this change necessary?
2)
I had to add my SPI chip driver for EN25QH32=0x7016 to CB
Why was it needed? My G505S also has this EN25QH32 chip and I never
had to add any SPI chip drivers - this chip has always "just worked" with
coreboot. If this is indeed required (e.g. for successful graphics
initialization), where can I get this SPI chip driver and how do I add it?
3)
I enabled pci 00:02.0 for dGPU in devicetree.cb → according to the CB logs this causes pci resource allocation problems
Do you still have any part of this log, for reference? I don't
remember any resource allocation problems in my loglevel-8 CB logs.
4)
I modified pci_device.c, function pci_dev_init to only enable pci_rom_probe and pci_rom load for the dGPU pci device (no oprom exec needed for PCI_CLASS_DISPLAY_OTHER)
Why is OpROM exec not needed for the dGPU device? And what would happen
if I executed it there? (would it break things, or make no difference?)
5)
I modified pci_rom.c, so the VBIOS of the dGPU is copied to 0xd0000 from CBFS (..check_initialized..)
Why was this specific 0xd0000 address selected? Or was it chosen
automatically by coreboot when it wanted to load this OpROM for the
dGPU?
6)
and pci_rom_write_acpi_tables only run for PCI_CLASS_DEVICE_OTHER → dGPU (ACPI VFCT fill is done from the previously prepared 0xd0000=PCI_RAM_IMAGE_START address)
Why should this be done only for the dGPU? (and not for the APU iGPU or any other devices)

7) I would like to refine your "hacks" and commit them to coreboot,
while giving full credit to you of course. But if I were to start
from scratch according to your instructions, there is always
a chance that I'd misunderstand or forget something (or that you forgot
to mention some small but important technicality), and as a result my
attempt to reproduce your success could be problematic. It would be
ideal if you could share:
A*) your successful coreboot.rom (I'd like to check its PCI state out
of curiosity)
B*) your coreboot .config file, for reference purposes
C*) your whole coreboot directory, uploaded to some convenient place:
either to GitHub, or archived as .tar.gz (maybe as a multipart
archive if it ends up too big) and uploaded to some hosting website,
or maybe even as a .torrent hosted temporarily -
whatever is most convenient for you.

Hopefully you can share this stuff and I will be able to reproduce
your results, initially "AS-IS" - but then I will try my best to
refine it to an acceptable code quality (so that it can be
officially accepted into coreboot) while checking from time to time that
it still works after my gradual changes. Hope to hear news
from you soon.

Best regards,
Mike Banon
Hans Jürgen Kitter
2018-11-04 10:12:56 UTC
Hi,

Here are some additional details, but before I answer, let me share some
preconceptions I had along the way:
- after some initial tries I thought that the stock BIOS sets up the
configuration in a special way for the iGPU and dGPU (PCI address BARs, IO
addresses etc.) to get them working; therefore I tried to achieve a very
similar setup using CB
- the problem is that the stock BIOS patches the raw VBIOS/OpROM files
during execution - which Coreboot doesn't do (ref:
https://www.coreboot.org/Board:lenovo/x60/Installation)
- when I started "hacking" months ago I got Linux kernel panics similar to
yours when pci bus 02.0 was enabled. I assumed there was a PCI allocation
problem behind it (one additional PCI device with a big PCI resource
requirement); I also assumed (maybe wrongly) that the PCI BAR addresses for
the dGPU - and also the UMA memory - need to be below the first 4 GB of
memory for the dGPU to work (details below)
- some of these preconceptions might not need to be considered any more,
but so far I haven't made attempts to rule any of them out. By sharing
this, I hope more people can help us clean up the unnecessary stuff...
Post by Mike Banon
Dear HJK,
My sincere huge congratulations to you for your incredible
breakthrough! :-) As you could see from
http://mail.coreboot.org/pipermail/coreboot/2018-November/087679.html
- I got stuck after loading both iGPU and dGPU OpROMs at kernel panic
and thought its' only because of a broken PCI config. But after
reading your good experience, it seems there were some other problems
as well which you've successfully solved. Please could you clarify
1)
Post by Hans Jürgen Kitter
I modified the IO address & the PCI device ID in APU GPU VBIOS file
(even though IO address change to 0x3000 might not required).
Post by Hans Jürgen Kitter
I modified the IO address in the dGPU VBIOS files to the actual CB -
linux kernel provided (0x1000) address
APU iGPU (integrated GPU) = 0x3000
dGPU (discrete GPU) = 0x1000
1b) Why these exact "IO address" values? How you came up with them,
and why not any different addresses?
1c) At what address offsets (in relation to the beginnings of VBIOS
files) these "IO address" values are situated?
1d) To what value you've changed the PCI device ID in APU iGPU VBIOS,
and why this change is necessary?
When CB boots, the iGPU gets IO address 0x3000 and the dGPU gets address
0x1000 from the resource allocator. Since CB doesn't patch the VBIOS files,
I had to do this manually. Giving it some thought now, modding the VBIOS
might not be required any more, because 1. this was not the reason the dGPU
didn't init, and 2. if I'm not mistaken, OpROM execution should normally do
this, right?
The offset of the IO address is @ 0x1A4 (2 bytes) for the iGPU and @ 0x218
for the dGPU. You can use radeon-bios-decode to check the result, and
atomcrc.exe to show the new checksum (@ offset 0x21) - to be corrected with
hexedit.
I have multiple variants of APUs; the iGPU PCI device ID needs to match
within the VBIOS (offset 0x1b8 for the PCI ID (4 bytes)). Additional
remark: you can also set the subsystem ID @ 0x1a6 (4 bytes).
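For anyone wanting to reproduce this step, here is a minimal sketch of the patching in Python. The offsets are the ones given above; the checksum handling assumes the standard AtomBIOS/option ROM convention (image length = byte 0x02 * 512, all image bytes summing to 0 mod 256, checksum byte @ 0x21), which seems to be what atomcrc.exe reports - treat the helper as illustrative, not as a drop-in tool:

```python
# Illustrative VBIOS patcher - offsets from the mail above; the checksum
# rule is an assumption based on the common AtomBIOS/PCI option ROM layout.

IO_ADDR_OFFSET_IGPU = 0x1A4  # 2 bytes, iGPU VBIOS
IO_ADDR_OFFSET_DGPU = 0x218  # 2 bytes, dGPU VBIOS
PCI_ID_OFFSET = 0x1B8        # 4 bytes: vendor ID + device ID
SUBSYS_ID_OFFSET = 0x1A6     # 4 bytes: subsystem ID
CHECKSUM_OFFSET = 0x21       # AtomBIOS checksum byte

def patch_vbios(rom, io_offset, io_addr, pci_id=None):
    """Patch the IO address (and optionally the PCI ID), then fix the
    checksum so the whole ROM image again sums to 0 mod 256."""
    rom = bytearray(rom)
    rom[io_offset:io_offset + 2] = io_addr.to_bytes(2, "little")
    if pci_id is not None:
        rom[PCI_ID_OFFSET:PCI_ID_OFFSET + 4] = pci_id
    image_len = rom[0x02] * 512  # ROM image length, in 512-byte units
    rom[CHECKSUM_OFFSET] = 0
    rom[CHECKSUM_OFFSET] = (-sum(rom[:image_len])) & 0xFF
    return rom
```

For example, patching a dGPU image before putting it into CBFS would be something like patch_vbios(open("pci1002,6663.rom", "rb").read(), IO_ADDR_OFFSET_DGPU, 0x1000).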


2)
Post by Mike Banon
Post by Hans Jürgen Kitter
I had to add my SPI chip driver for EN25QH32=0x7016 to CB
Why it was needed? My G505S also has this EN25QH32 chip and I never
had to add any SPI chip drivers - this chip always "just worked" at
coreboot. If this is indeed required (e.g. for the successful graphics
initialization), where to get this SPI chip driver and how to add it?
This is not required for graphics, but hibernation support requires that CB
can write data to the SPI chip. You can see in the CB console log whether
the SPI writes succeed. I have multiple G505S boards - each with a
different SPI chip - and my EON chip's ID was not listed in CB's SPI eon.c.

3)
Post by Mike Banon
Post by Hans Jürgen Kitter
I enabled pci 00:02.0 for dGPU in devicetree.cb → according to the CB
logs this causes pci resource allocation problems
Do you still have any part of this log, for the reference? I don't
remember any resource allocation problems in 8 level CB logs
Well, this is an interesting topic and I admit that I don't fully
understand all the details, so my phrasing below might not be too
"academic"... Moreover, please check my preconceptions, and note that pci
02.0 has to be enabled in CB to be able to fill the VFCT for the dGPU.
In the stock BIOS the iGPU gets its BAR address @0xd0000000. But the iGPU
UMA size is 768M, so I assume the iGPU also occupies a further 2x256M of
UMA memory starting from 0xb0000000. If you check the stock BIOS, it is
located in the 0x9xxxxxxx range, whereas in CB - depending on the ranges
provided by the PCI resource allocator, the AGESA stipulations and the UMA
size - it normally starts at the end of the 0xb... range. The problem is
that in AGESA the top of low memory (TOM) seems to always be 0xe0000000
(set by the BSP through an MSR register), and this is used by CB to
calculate the relative offsets. In a G505S without a dGPU, the PCI address
BAR for the iGPU is set @ 0xe0000000, and since the CB UMA size is 512MB,
there is room for the 512MB UMA (below 4GB) starting from 0xc... and the
iGPU BAR from 0xe...
When I enabled the dGPU in CB (on another board), the PCI resource
allocator assigned PCI address 0xd... to the iGPU, which overlaps with the
UMA memory (512MB from 0xc...). I got a black screen (and a kernel panic).
I "managed" to solve this by decreasing the UMA size to 256M and also
moving UMA_base to start 256M lower. This way the iGPU UMA memory started
from 0xc... (256MB), the iGPU PCI address was @0xd... and the dGPU PCI
address was @0xe...
Enabling the dGPU, with its own 2GB of RAM, should hopefully offset the
negative performance effect of the lower iGPU UMA size.
In a recent modification I changed this method: instead of setting the
UMA_base address lower, I modified CB so that when the BSP TOM address is
"announced" to the other CPUs, I write 0xd0000000 instead of AGESA's
0xe0000000 to the MSR register.

There are certainly better ways to modify the CB code to take UMA_size and
the additional PCI resources into account and still allocate these memory
resources below 4GB.
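The address map that results from these changes can be written down and sanity-checked with a few lines of arithmetic (a sketch; the window sizes and bases are just the ones described above):

```python
MiB = 1 << 20

# Layout from the description above: TOM brought down by 256M, UMA at 256M.
TOM      = 0xE0000000 - 256 * MiB  # new top of low memory
UMA_BASE = TOM - 256 * MiB         # 256M UMA for the iGPU, starts at 0xc...
IGPU_BAR = 0xD0000000              # 256M iGPU PCI address window
DGPU_BAR = 0xE0000000              # 256M dGPU PCI address window

regions = [(UMA_BASE, 256 * MiB), (IGPU_BAR, 256 * MiB), (DGPU_BAR, 256 * MiB)]

# The three 256M windows sit back-to-back without overlap, all below 4GB.
for (base_a, size_a), (base_b, _) in zip(regions, regions[1:]):
    assert base_a + size_a <= base_b
assert all(base + size <= 1 << 32 for base, size in regions)
```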

4)
Post by Mike Banon
Post by Hans Jürgen Kitter
I modified pci_device.c, function pci_dev_init to only enable
pci_rom_probe and pci_rom load for the dGPU pci device (no oprom exec
needed for PCI_CLASS_DISPLAY_OTHER)
Why OpROM exec is not needed for dGPU device? And what would happen if
I will execute it there? (would it break the things, or just no
difference)
My experience shows that dGPU OpROM execution by itself didn't properly
initialize the dGPU; the kernel still complained. I had interim attempts
where CB or SeaBIOS executed the dGPU OpROM, without any noticeable init
results at those times. The important thing, though, is to prevent SB from
creating a boot entry for the executable OpROM, because this always froze
the laptop at the boot list (without any means to debug what had happened).
I solved that by modifying SB to run the dGPU OpROM using the vgarom_setup
function instead of the optionrom_setup function.

Remark: you can disable OpROM execution the easy way by changing the 4th
byte of the OpROM from 0xe9 to 0xcb --> it returns without the initial jump
to the exec code.
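That remark can be sketched the same way as the IO-address patch (again assuming the usual checksum convention - image bytes summing to 0 mod 256, AtomBIOS checksum byte at 0x21 - so the neutered ROM still passes signature/checksum checks):

```python
ENTRY_OFFSET = 0x03     # 4th byte: first byte of the entry-point jmp (0xe9)
CHECKSUM_OFFSET = 0x21  # AtomBIOS checksum byte (assumed convention)

def disable_oprom_exec(rom):
    """Turn the OpROM entry point into a far return, so "executing" it
    becomes a no-op, then re-balance the checksum."""
    rom = bytearray(rom)
    assert rom[ENTRY_OFFSET] == 0xE9, "expected a jmp at the entry point"
    rom[ENTRY_OFFSET] = 0xCB  # retf: return immediately, skip the init code
    image_len = rom[0x02] * 512
    rom[CHECKSUM_OFFSET] = 0
    rom[CHECKSUM_OFFSET] = (-sum(rom[:image_len])) & 0xFF
    return rom
```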

5)
Post by Mike Banon
Post by Hans Jürgen Kitter
I modified pci_rom.c, so the VBIOS of the dGPU is copied to 0xd0000 from
CBFS (..check_initialized..)
Why this specific 0xd0000 address has been selected? Or it has been
automatically chosen by coreboot when it wanted to load this OpROM to
dGPU ?
We have two PCI VGA cards, and by design the shadow ROM area for the legacy
VGA BIOS starts at 0xc0000 and is 64K long; 0xd0000 is the non-VGA PCI
option ROM shadow area. Interestingly, the Linux kernel allocates 132K
starting from 0xc0000 for the video shadow, and this could be a reason why
"normal" OpROM initialization for VGA card 1 + VGA card 2 doesn't work: VGA
card 1 claims all 132K for itself in the Linux kernel, and therefore the
second VGA card cannot use this overlapping range for its shadowed ROM. At
least this is how I understood what I found when searching for this; it was
mentioned in connection with a Linux 3.16(?) kernel issue.
Also note that SB aligns OpROMs to 2048-byte (0x800) boundaries, so when I
"played" with OpROM init both in CB and in SB, I transparently used
4096-byte (0x1000) alignment in SB. As I remember, OpROMs can only execute
if their addresses are aligned to at least 0x800...


6)
Post by Mike Banon
Post by Hans Jürgen Kitter
and pci_rom_write_acpi_tables only run for PCI_CLASS_DEVICE_OTHER → dGPU
(ACPI VFCT fill is done from the previously prepared
0xd0000=PCI_RAM_IMAGE_START address)
Why this should be done for dGPU only? (not for APU iGPU or any other devices)
So far I haven't succeeded in modifying the CB ACPI VFCT creator function
to provide a VFCT table containing both VBIOSes (OpROMs) in a way that the
kernel radeon or amdgpu driver can use. The problem is that, I believe, one
main ACPI VFCT table needs to be created, with the two VBIOS contents at
different offsets (ref:
https://www.phoronix.com/forums/forum/linux-graphics-x-org-drivers/open-source-amd-linux/926087-more-radeon-amdgpu-fixes-line-up-for-linux-4-10/page2).
It's also worth checking the Linux kernel sources for the ACPI VFCT
headers.
The CB ACPI VFCT code was not designed to provide that; it runs once,
normally when a pci_class = PCI_CLASS_DEVICE_VGA device is found. For
simplicity, I modified this to run only for PCI_CLASS_DEVICE_OTHER, and it
worked, so I haven't gone further. CB initializes the iGPU OpROM anyway.
The dGPU is only initialized (KMS) when the Linux kernel loads the
radeon/amdgpu driver; until then the dGPU is not initialized.
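For reference, the VFCT layout the radeon/amdgpu drivers parse is roughly the following. The field names and sizes below are my reading of the kernel's VFCT structures (UEFI_ACPI_VFCT / VFCT_IMAGE_HEADER in radeon_bios.c), so treat this builder as an assumption-laden sketch, not the kernel's authoritative definition:

```python
import struct

# 36-byte standard ACPI table header: signature, length, revision, checksum,
# OEM ID (6), OEM table ID (8), OEM revision, creator ID (4), creator rev.
ACPI_HEADER = struct.Struct("<4sIBB6s8sI4sI")
# VFCT-specific fields: VBIOSImageOffset, Lib1ImageOffset, 4 reserved dwords.
VFCT_BODY = struct.Struct("<II4I")
# Per-image header: PCI bus/device/function, vendor/device ID, subsystem
# vendor/device ID, revision, image length; the VBIOS blob follows it.
IMAGE_HEADER = struct.Struct("<IIIHHHHII")

def build_vfct(bus, dev, fn, vid, did, vbios):
    """Build a single-image VFCT table (checksummed) as a bytes object."""
    vbios_offset = ACPI_HEADER.size + VFCT_BODY.size  # image follows headers
    body = VFCT_BODY.pack(vbios_offset, 0, 0, 0, 0, 0)
    image = IMAGE_HEADER.pack(bus, dev, fn, vid, did, 0, 0, 0,
                              len(vbios)) + vbios
    length = ACPI_HEADER.size + len(body) + len(image)
    header = ACPI_HEADER.pack(b"VFCT", length, 1, 0, b"CORE  ",
                              b"COREBOOT", 0, b"CORE", 0)
    table = bytearray(header + body + image)
    table[9] = (-sum(table)) & 0xFF  # standard ACPI checksum byte
    return bytes(table)
```

A two-VBIOS variant, as discussed above, would presumably append a second IMAGE_HEADER + blob after the first image, but I have not verified how the drivers locate a second image, so that part remains an open question.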

Post by Mike Banon
7) I would like to refine your "hacks" and commit them to coreboot,
while giving a full credit to you of course. But, if I would start
doing it from scratch according to your instructions, there is always
a chance that I'd misunderstand or forget something (or you forgot to
mention some small but important technicality) and as result my
attempt to reproduce your success could be problematic. It would be
ideal if you could share:
A*) your successful coreboot.rom (I'd like to check its' PCI state out
of curiosity)
B*) your coreboot's .config file for the reference purposes
C*) upload your whole coreboot directory to some convenient place,
either to GitHub or just archive it as .tar.gz (maybe as a multipart
archive if it ends up too big) and upload it to some hosting website;
or maybe even create a .torrent out of it and host it temporarily -
whatever is most convenient to you.
OK, I'll see what I can do to submit logs, config files, and maybe
"cleaned up" patches of my current working setup.

BR,
HJK
Mike Banon
2018-11-04 21:10:30 UTC
Dear HJK,
the problem is that the stock BIOS patches the raw VBIOS/Oprom files during execution - which Coreboot doesn't do
Since CB doesn't patches the VBIOS files, I had to do this manually
But these iGPU and dGPU VBIOS/OpROM files for the G505S that we are using
- available at the https://github.com/g505s-opensource-researcher/g505s-atombios
repository - have been extracted from a running system with the
proprietary UEFI/BIOS (as you can see from this repository's readme),
which of course patched them during booting. So these files
should already have been patched by it, and I don't understand why this
additional patching was needed. Maybe you are using different
VBIOS files - directly extracted from the UEFI binary
image (probably obtained from this other repository:
https://github.com/g505s-opensource-researcher/g505s-proprietary/tree/master/WinVALGC300_unpacked
) - that were not patched? It would be interesting to see your VBIOS
files after you share your ./coreboot/ directory (hopefully they
will be inside), to compare them with the "g505s-atombios" files.
if I'm not mistaken normally Oprom execution should do this, right?
Maybe OpROM execution can initialize some of its own fields during
execution, but there are data structures which have to be patched
by someone else. When we tried to use "clean" VBIOS files ("clean" =
directly extracted from the UEFI image and therefore not patched by it),
the screen's backlight was not working; then we compared the "clean"
(from WinVALGC300_unpacked) vs. "dirty" (patched, from g505s-atombios)
VBIOSes - and there were many differences, including at these data
structures: LVDS_Info, sLCDTiming, ucLVDS_Misc,
ucLCDPanel_SpecialHandlingCap, PowerPlayInfo,
usSpreadSpectrumPercentage. I am attaching the "clean" and "dirty" iGPU
AtomDis outputs to this email if anyone wants to compare. By the way,
which is better - AtomDis or radeon-bios-decode?
I have multiple variants of APUs, the iGPU pci device ID need to match within VBIOS
What other APUs do you have in addition to the A10 that are working with
coreboot? I've always thought that coreboot is compatible only with the
A10-5750M G505S. If you would like to make a universal coreboot image
that is compatible not just with the A10 but e.g. also with the A8-5550M,
it might be a good idea to get and include the extra
"pci1002,990d.rom" file (A8-5550M GPU 8550G VBIOS) in addition to
"pci1002,990b.rom" (for the A10-5750M HD 8650G). Although these GPUs could
be 100% compatible with each other, in which case you could indeed
patch the A10 iGPU VBIOS with the A8 iGPU IDs and use the A10's VBIOS with
the A8 (maybe it'll even give you some extra performance). To verify it's
okay to do, you could extract the iGPU VBIOS from an A8 G505S running
UEFI and compare it with the A10 VBIOS.
The important thing is though to prevent SB from creating the boot entry for the executable Oprom
What command are you using to add the dGPU OpROMs to coreboot? I think
that if you add it like this:
./util/cbfstool/cbfstool $COREBOOT_ROM_PATH add -f $VGABIOS_PATH -n pci****,****.rom -t optionrom
( -t optionrom is important ), SB will not create a boot entry
for it; at least SB never created one during my experiments.
I used version 0600111F AMD Fam 15h microcode (even though this was later recalled by AMD to n-1 version 06001119)
Are you sure 0600111F has been recalled? It was released only for
some proprietary UEFI laptops, and they still have this microcode if
my data isn't outdated. So it was not recalled; it has just never been
released to the "open source world", for stupid reasons - never pushed to
linux-firmware.git / amd-ucode, where there's only the horribly outdated
06001119 from 2012.

Best regards,
Mike Banon
On Sun, Nov 4, 2018 at 1:14 PM Hans JÃŒrgen Kitter
Hi,
- after some initial tries I thought, that the stock BIOS sets up the configuration in a special way for the iGPU and gDPU (PCI adress BARs, IO addresses etc) to get those working therefore, I tried to achieve a very similar setup using CB
- the problem is that the stock BIOS patches the raw VBIOS/Oprom files during execution - which Coreboot doesn't do. (ref: https://www.coreboot.org/Board:lenovo/x60/Installation)
- when I started "hacking" months ago I got Linux kernel panics similar to yours when pci bus 02.0 was enabled. I assumed there was a PCI allocation problem behind it (one additional PCI device with a big PCI resource requirement); I also assumed (maybe wrongly) that the PCI BAR addresses for the dGPU - and also the UMA memory - need to be below the first 4 GB of memory for the dGPU to work. (details below)
- some of these preconceptions might not need to be considered any more, but so far I haven't made attempts to rule any of them out. By sharing this, I hope more people can help us clean out the unnecessary stuff...
Post by Mike Banon
Dear HJK,
My sincere huge congratulations to you for your incredible
breakthrough! :-) As you could see from
http://mail.coreboot.org/pipermail/coreboot/2018-November/087679.html
- I got stuck after loading both iGPU and dGPU OpROMs at a kernel panic
and thought it's only because of a broken PCI config. But after
reading about your good experience, it seems there were some other problems
as well which you've successfully solved. Please could you clarify
1)
I modified the IO address & the PCI device ID in the APU GPU VBIOS file (even though the IO address change to 0x3000 might not be required).
I modified the IO address in the dGPU VBIOS files to the actual CB / Linux kernel provided (0x1000) address
APU iGPU (integrated GPU) = 0x3000
dGPU (discrete GPU) = 0x1000
1b) Why these exact "IO address" values? How did you come up with them,
and why not any different addresses?
1c) At what address offsets (relative to the beginnings of the VBIOS
files) are these "IO address" values situated?
1d) To what value did you change the PCI device ID in the APU iGPU VBIOS,
and why is this change necessary?
When CB boots, the iGPU gets IO address 0x3000 and the dGPU gets address 0x1000 from the resource allocator. Since CB doesn't patch the VBIOS files, I had to do this manually. Giving it more thought now, modding the VBIOS might not be required any more, because 1) this was not the reason the dGPU didn't init, and 2) if I'm not mistaken, Oprom execution should normally do this, right?
Post by Mike Banon
2)
I had to add my SPI chip driver for EN25QH32=0x7016 to CB
Why was it needed? My G505S also has this EN25QH32 chip and I never
had to add any SPI chip drivers - this chip always "just worked" with
coreboot. If this is indeed required (e.g. for the successful graphics
initialization), where do I get this SPI chip driver and how do I add it?
This is not required for graphics, but hibernation support requires CB to write data to the SPI chip. You can see whether SPI writes succeed or not in the CB console log. I have multiple G505S boards - each with a different SPI chip - and my EON chip ID was not listed in CB's SPI eon.c
Post by Mike Banon
3)
I enabled pci 00:02.0 for dGPU in devicetree.cb → according to the CB logs this causes pci resource allocation problems
Do you still have any part of this log, for reference? I don't
remember any resource allocation problems in my level-8 CB logs
Well, this is an interesting topic and I admit that I don't fully understand all the details, so my phrasing below might not be too "academic"... Moreover, please check my preconceptions, and note that pci 02.0 has to be enabled in CB to be able to fill the VFCT for the dGPU.
In a recent modification I changed this method: instead of setting the UMA_base address to start at a lower address, I modified CB so that when the BSP TOM address is "announced" to the other CPUs, I write 0xd0000000 instead of AGESA's 0xe0000000 to the MSR register.
There could certainly be better ways to modify the CB code to account for UMA_size and the additional PCI resources and still allocate these memory resources below 4 GB.
Post by Mike Banon
4)
I modified pci_device.c, function pci_dev_init to only enable pci_rom_probe and pci_rom load for the dGPU pci device (no oprom exec needed for PCI_CLASS_DISPLAY_OTHER)
Why is OpROM exec not needed for the dGPU device? And what would happen
if I executed it there? (would it break things, or just make no
difference)
My experience shows that dGPU Oprom execution by itself didn't properly initialize the dGPU; the kernel still complained. I had interim attempts where CB or Seabios executed the dGPU Oprom, without any noticeable init results at those times. The important thing, though, is to prevent SB from creating a boot entry for the executable Oprom, because this always freezes the laptop at the boot list (without any means to debug what happened). I solved that by modifying SB to run the dGPU Oprom using the vgarom_setup function instead of the optionrom_setup function.
Remark: you can disable Oprom execution the easy way by changing the 4th byte of the Oprom from 0xe9 to 0xcb → return without the initial jump to the exec code.
Post by Mike Banon
5)
I modified pci_rom.c, so the VBIOS of the dGPU is copied to 0xd0000 from CBFS (..check_initialized..)
Why was this specific 0xd0000 address selected? Or was it
automatically chosen by coreboot when it wanted to load this OpROM for
the dGPU?
We have two PCI VGA cards, and by design the shadow ROM area for legacy VGA starts at 0xc0000 and is 64K long; 0xd0000 is the non-VGA PCI option ROM shadow area. Interestingly, the Linux kernel allocates 132K starting from 0xc0000 for video shadow, and this could be a reason why "normal" Oprom initialization for VGA card 1 + VGA card 2 doesn't work: VGA card 1 claims all 132K for itself in the Linux kernel, and therefore the second VGA card cannot use this overlapping range for its shadowed ROM. At least this is how I understood what I found when searching for this. It was mentioned in connection with a Linux 3.16(?) kernel issue.
Also note that SB aligns Oproms to 2048-byte (0x800) boundaries, so when I "played" with Oprom init both in CB and in SB I transparently used 4096 (0x1000) alignment in SB. Oproms can only execute if their addresses are aligned to at least 0x800, as I remember...
Post by Mike Banon
6)
and pci_rom_write_acpi_tables only run for PCI_CLASS_DEVICE_OTHER → dGPU (ACPI VFCT fill is done from the previously prepared 0xd0000=PCI_RAM_IMAGE_START address)
Why should this be done for the dGPU only? (not for the APU iGPU or any other devices)
So far I haven't succeeded in modifying the CB ACPI VFCT creator function to provide VFCT tables that contain both VBIOSes (Oproms) in a way that the kernel radeon or amdgpu driver can use. The problem is that, I believe, one main ACPI VFCT table needs to be created, with the two VBIOS contents at different offsets (ref: https://www.phoronix.com/forums/forum/linux-graphics-x-org-drivers/open-source-amd-linux/926087-more-radeon-amdgpu-fixes-line-up-for-linux-4-10/page2). It's also worth checking the Linux kernel sources for the ACPI VFCT headers..
CB's ACPI VFCT code was not designed to provide that; it runs once, normally when a pci_class = PCI_CLASS_DEVICE_VGA device is found. For simplicity, I modified this to run only for PCI_CLASS_DEVICE_OTHER, and it worked, so I haven't gone further. CB initializes the iGPU Oprom anyway..
The dGPU is only initialized (KMS) when the Linux kernel loads the radeon/amdgpu driver; until then the dGPU is not initialized.
Post by Mike Banon
7) I would like to refine your "hacks" and commit them to coreboot,
while giving full credit to you of course. But if I started doing it
from scratch according to your instructions, there is always a chance
that I'd misunderstand or forget something (or you forgot to mention
some small but important technicality) and as a result my attempt to
reproduce your success could be problematic. It would be great if you
could share:
A*) your successful coreboot.rom (I'd like to check its PCI state out
of curiosity)
B*) your coreboot .config file for reference purposes
C*) your whole coreboot directory, uploaded to some convenient place -
either to GitHub, or just archived as .tar.gz (maybe as a multipart
archive if it ends up too big) and uploaded to some hosting website;
or maybe even create a .torrent out of it and host it temporarily -
whatever is most convenient to you.
OK, I'll see what I can do to submit logs, config files and maybe "cleaned up" patches of my current working setup.
BR,
HJK
Post by Mike Banon
Hopefully you can share this stuff and I will be able to reproduce
your results, initially "AS-IS" - but then I will try my best to
refine it to acceptable code quality (so that it can be
officially accepted into coreboot) while checking from time to time
that it is still working after my gradual changes. Hope to hear news
from you soon
Best regards,
Mike Banon
On Sat, Nov 3, 2018 at 8:42 PM Hans Jürgen Kitter
Hi All,
I thought that I'd share with the G505S+Coreboot (+QubesOS) users that I finally managed to enable and use the dGPUs on the G505S boards. By boards I mean both variants with their respective VBIOSes: dGPU=1002,6663 (ATI HD 8570M, Sun-Pro) and dGPU=1002,6665 (ATI R5 M230, Jet-Pro); both are working under Ubuntu Linux or Qubes 4.0 (no Windows testing was done or planned).
DRI_PRIME=1 seems to work; I use the radeon driver for the APU GPU (A10-5750M) and amdgpu for the dGPU. I'm currently investigating how to get the most out of this setup in Qubes (HW-accelerated 2D in AppVMs).
Problem statement: the dGPU was not initialized correctly and was not enabled by the radeon or amdgpu kernel module; the VFCT header was corrupt/truncated (effectively it was not corrupt, but only the VFCT for the APU/iGPU was present). All my attempts to init the dGPU device through Oprom initialization failed (radeon 0000:04:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x0000). This was true whether using Coreboot+GRUB or Coreboot+Seabios, regardless of the 4.x kernel version.
Solution in short: modify Coreboot so that the VFCT table is created only for the dGPU, so it can be used by either the radeon or amdgpu KMS driver.
- CB 4.8 latest, with SB latest 1.11.2 as payload
- I modified the IO address & the PCI device ID in the APU GPU VBIOS file (even though the IO address change to 0x3000 might not be required)
- I modified the IO address in the dGPU VBIOS files to the actual CB / Linux kernel provided (0x1000) address
- both prepared VBIOSes were put into CBFS
- Coreboot config: std. g505s build, but both "run oprom" & "load oprom" were enabled (native mode); also, display was set to FB, 1024x768 16.8M color 8:8:8, legacy VGA → this provides console output even if e.g. GRUB is the payload
- main payload Seabios (compiled separately as an elf)
- I had to add my SPI chip driver for EN25QH32=0x7016 to CB
- I used version 0600111F AMD Fam 15h microcode (even though this was later recalled by AMD to the n-1 version 06001119). Nevertheless, Spectre V2 RSB and IBPB mitigations are thus enabled.
- I modified pci_device.c, function pci_dev_init, to only enable pci_rom_probe and pci_rom_load for the dGPU pci device (no Oprom exec needed for PCI_CLASS_DISPLAY_OTHER).
- I modified pci_rom.c, so the VBIOS of the dGPU is copied to 0xd0000 from CBFS (..check_initialized..), and pci_rom_write_acpi_tables only runs for PCI_CLASS_DEVICE_OTHER → dGPU (the ACPI VFCT fill is done from the previously prepared 0xd0000=PCI_RAM_IMAGE_START address)
Remark1: there could be one ACPI VFCT table containing both VBIOSes, but the APU GPU VBIOS is fully working via Oprom exec only.
Remark2: in the stock v3 BIOS, additional ACPI tables are present (SSDT, DSDT) which contain VGA-related methods and descriptions; therefore the VFCT table itself (in EFI mode) is a lot smaller, ca. 20K (vs. 65K). These additional ACPI extensions allow e.g. vga_switcheroo to work under Ubuntu with the stock BIOS. With CB, currently only offloading (DRI...) seems to work. And as I checked, I will need to understand the whole ACPI concept before I can migrate the relevant ACPI tables from the stock BIOS to CB for the switching to work. Also, it seems that TurboCore (PowerNow?) is currently not entirely enabled in CB → also implemented via ACPI tables in the stock BIOS.
Well, I’m not a coding expert (just a G505S enthusiast); this is the reason I didn’t include patches. My goal was to describe a working solution, so that someone with proper coding skills and the ability to submit official patches for CB can get this committed.
BR,
HJK
--
https://mail.coreboot.org/mailman/listinfo/coreboot