Aros/Developer/Docs/Resources/ACPI

Introduction

Essentially, the first phase of the bounty is to make all the resources that bare ACPI exposes (i.e. the static tables, without AML) usable in AROS, and the second phase is to add proper AML support so that we can fully utilise ACPI.

First of all, we now have a working acpi.resource. It is currently responsible for gathering the ACPI tables from the system and verifying their consistency. Verified data can be queried from the resource using a simple API, designed to provide a framework that assists consumers in parsing the tables. The API is still raw and incomplete; I am extending it as needs arise.
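
As a minimal sketch of how a consumer might query it, assuming a hypothetical ACPI_FindSDT() lookup call modelled on the description above (the real acpi.resource entry points may differ):

    #include <proto/exec.h>
    #include <proto/acpi.h>   /* assumed proto header for acpi.resource */

    /* Build a 32-bit table ID from a 4-character ACPI signature */
    #define ACPI_MAKE_ID(a,b,c,d) \
        ((ULONG)(a) | ((ULONG)(b) << 8) | ((ULONG)(c) << 16) | ((ULONG)(d) << 24))

    APTR ACPIBase;

    int main(void)
    {
        /* Resources are never closed; OpenResource() returns NULL if absent */
        ACPIBase = OpenResource("acpi.resource");
        if (!ACPIBase)
            return 20;                  /* no ACPI on this machine */

        /* Hypothetical lookup: find the FADT by its "FACP" signature */
        APTR fadt = ACPI_FindSDT(ACPI_MAKE_ID('F','A','C','P'));
        if (fadt)
        {
            /* parse the verified table contents here */
        }
        return 0;
    }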

Second, we now have ACPITool, a diagnosis program. It looks like PCITool and displays the available information. Currently I have written parsers only for the MADT and FADT tables. The tool correctly displays their contents in human-readable form, as well as some general information about ACPI itself.

Third, on x86-64 we already have secondary cores starting up into an idle loop. Processors are managed by kernel.resource, which already provides information about their number. processor.resource uses this information and provides it to user software in abstract form. For example, if you run ShowConfig, it will display how many CPUs you have. However, only the bootstrap CPU will have its information filled in. There is currently no way to run code on a secondary core, and that is out of scope of this task.
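
For instance, a user program can query the abstracted CPU count roughly like this (a sketch using processor.resource's tag-based query; treat the exact tag and call names here as assumptions):

    #include <proto/exec.h>
    #include <proto/dos.h>
    #include <proto/processor.h>
    #include <resources/processor.h>
    #include <utility/tagitem.h>

    APTR ProcessorBase;

    int main(void)
    {
        ULONG count = 0;

        ProcessorBase = OpenResource(PROCESSORNAME);
        if (!ProcessorBase)
            return 20;

        /* Ask how many processors kernel.resource registered */
        GetCPUInfoTags(GCIT_NumberOfProcessors, (IPTR)&count, TAG_DONE);
        Printf("CPUs: %lu\n", count);

        return 0;
    }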

Things that won't happen:

1. The bounty spec says ACPITool should tell who uses the information. There is no such mechanism, and registering who uses a component is against the Amiga-family OS concept. Anyone can OpenResource("acpi.resource") and use it.

However, I do have a list of components using acpi.resource:

  • kernel.resource uses the APIC data to register and run CPUs. processor.resource displays this information in abstracted form; it does not report APIC IDs. In AROS it is enough to identify a CPU by its number, with 0 being the boot CPU. kernel.resource already has a function that returns the number of the CPU it was executed on.
  • exec.library's ShutdownA() currently uses acpi.resource for cold reboot; the existing functionality is enough for this. It will be refactored soon, but the idea will stay the same.
  • battclock.resource uses the century register number from the FADT.
  • vgah.hidd will be made aware of the "No VGA" flag in the FADT.
  • pciusb.device will detect the Mac Mini via ACPI and will not require the "forceusbpower" argument on that machine. The list can be expanded; I just don't know what other hardware is prone to the power bug.
  • The PS/2 driver can be made aware of the "No PS/2" flag (see the sketch after this list). However, I would not want to dive into the existing code without refactoring it first, though I promise to refactor the PS/2 driver within a reasonable time. I looked at doing it in a similar fashion in the past but decided against it, since it was a hack and not the way this information is meant to be accessed (that requires a proper ACPI AML parser). Power-off requires AML; reboot does not, since the reset register definition is part of the FADT. AROS just needs to be set up by retrieving the basic hardware settings and resources exposed in the ACPI tables, since we have no access to the information provided by the DSDT etc. (due to the lack of an AML parser).
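
As an illustration of how such a driver could consult the FADT's IA-PC boot architecture flags (the flag bits and the field offset come from the ACPI specification; the structure layout is trimmed, and the lookup reuses the hypothetical ACPI_FindSDT()/ACPI_MAKE_ID() from the earlier sketch):

    /* IA-PC boot architecture flags (FADT, ACPI spec) */
    #define FADT_8042     (1 << 1)    /* set: a PS/2 (8042) controller exists */
    #define FADT_NO_VGA   (1 << 2)    /* set: VGA hardware is not present     */

    struct FADT                       /* only the one field we need           */
    {
        UBYTE  Pad[109];              /* common header + fields we skip       */
        UWORD  BootFlags;             /* IAPC_BOOT_ARCH, offset 109           */
    } __attribute__((packed));

    BOOL HaveVGA(void)
    {
        struct FADT *fadt = ACPI_FindSDT(ACPI_MAKE_ID('F','A','C','P'));

        /* Without a FADT, assume legacy hardware is present */
        if (!fadt)
            return TRUE;

        return (fadt->BootFlags & FADT_NO_VGA) ? FALSE : TRUE;
    }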

2. IRQ routing. It is a rather complex task by itself, and I would leave configuring IRQs to some other task; it could be "ACPI phase II" or something else, like "multi-processing support". The basic idea is to implement a standalone MP analogue of exec.library and be able to run tasks on secondary cores. This will not be SMP, but it will be a step towards it, and it could be used by CPU-intensive applications, like video players, to run a dedicated task on an available core. (The counter-argument: the ACPI resource should be setting up the system with the information that is available from ACPI without the need for AML.) By the way, this should also include configuring AROS to use the HPET, which is exposed through the ACPI tables (see the lookup sketch below). This means HPET support needs to be implemented; currently timer.device can only work with the legacy timer.
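
Locating the HPET follows the same table-lookup pattern; a sketch reusing the hypothetical lookup call from above (the "HPET" signature itself is standard ACPI):

    void CheckHPET(void)
    {
        /* Find the HPET description table by its "HPET" signature */
        APTR hpet = ACPI_FindSDT(ACPI_MAKE_ID('H','P','E','T'));

        if (hpet)
        {
            /* The table carries the HPET base address; a future
               timer.device could program the HPET here instead of
               the legacy 8254 timer. */
        }
    }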

Doing this will require distributing interrupts between processors. I think new components should be developed for this task. kernel.resource is a microkernel; it should not grow fat, and only minimal MP support should stay there (identification, IPI, and probably that is all).

3. The specification says "information should be stored in an Amiga-like manner". IMHO, at this level there is no need to do this. There is no need to convert the tables into something else, because the tables are there anyway, and it is very easy to manage them in place. Introducing additional structures on top of these tables would just waste RAM while giving nothing new, much like bootloader.resource, which is currently superseded by kernel.resource's KrnGetBootInfo() (except for command line parsing). IMHO, providing abstracted Amiga-style information is the job of higher-level components where appropriate; an example is processor.resource, whose data do not depend on where they are taken from. ACPI is ACPI, nothing more; it is architecture-specific by design. But, as I said, there is an Amiga-like API for browsing the tables: you can find the table you need by ID, and you can enumerate the data stored in its arrays via a hook. I will also add one more function for enumerating multiple tables with the same ID (for some tables this is a valid case), and add taglists with options like "fetch the DSDT with OEM ID foo", which is also going to be useful.
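
A sketch of the hook-based enumeration described above, e.g. walking the MADT ("APIC" signature) entries; the scan function name and its calling convention are assumptions modelled on the description, not confirmed API:

    #include <utility/hooks.h>
    #include <proto/acpi.h>               /* assumed */

    /* Hypothetical per-entry callback: called once per MADT entry */
    static IPTR madtEntryFunc(struct Hook *hook, APTR table, APTR entry)
    {
        /* inspect one entry, e.g. a local APIC descriptor */
        return 0;
    }

    void ScanMADT(void)
    {
        struct Hook h = { .h_Entry = (HOOKFUNC)madtEntryFunc };
        APTR madt = ACPI_FindSDT(ACPI_MAKE_ID('A','P','I','C'));

        if (madt)
            ACPI_ScanEntries(madt, &h);   /* hypothetical enumerator */
    }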

References

CPUs in SMP systems should be registered in some manner, and each set to run an idle thread, except the boot CPU. IMHO halting the CPU is not the same as running an idle thread. The boot CPU also has an idle loop, in cpu_Dispatch(); it is basically while (no_task_to_run) halt;. This halts the CPU too, it just still processes interrupts.
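
A minimal sketch of such an idle loop on x86 (interrupts must be enabled so the halted CPU still wakes to service them; the run-queue check is hypothetical):

    /* Halt until the next interrupt, then re-check the run queue.
       "hlt" stops the core, but interrupt delivery still wakes it. */
    while (runqueue_is_empty())
        __asm__ __volatile__("hlt");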

Why have two ways of representing the same information? Isn't the current API convenient? Look at the current MADT usage in kernel.resource for an example: you don't walk the tables yourself, and you don't walk the data structures within a table yourself.

That is the job of the IOAPIC. Other than setting up the IOAPIC to handle routing etc., there is very little (if any) reason to access it. However, what do we actually do with interrupts? I believe that by default they are sent either to all CPUs or only to the boot CPU. And is this relevant for now at all, since only the boot CPU actually services the hardware? In fact they default to only the boot CPU, and delivery can be configured in many ways, including to the least active CPU, which is IMHO the most desirable way to have IRQ processing handled.

  • Disable legacy PIC.
  • Reconfigure PCI devices to use APIC interrupts.

Resources having to parse all the information themselves would be more costly than exposing it in an AmigaOS-like manner. But they don't parse all the tables: they ask acpi.resource to "give me this table, and locate these structures in it". You get exactly what you asked for, nothing more.

Define "idle thread", then. It is a task which idles on the core, and which allows us to see exactly how much time the core has been idle, to gauge processor load etc.

acpi.resource installs a ShutdownA() replacement using SetFunction(). The goal is to keep exec.library from bloating. I already had an EFI version of the reboot routine; now I have added an ACPI one.
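
The patching itself is the classic exec SetFunction() pattern; a sketch (the LVO index below is illustrative, not the real ShutdownA() vector number, and the replacement body is elided):

    #include <exec/libraries.h>
    #include <proto/exec.h>

    #define ShutdownA_LVO  173           /* illustrative LVO index only */

    static ULONG acpi_ShutdownA(ULONG action)
    {
        /* e.g. write the FADT reset register value for a cold reboot */
        return 0;
    }

    void InstallAcpiShutdown(void)
    {
        /* Atomically replace exec's ShutdownA() vector with ours */
        SetFunction((struct Library *)SysBase,
                    -ShutdownA_LVO * LIB_VECTSIZE,
                    (APTR)acpi_ShutdownA);
    }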

Something like mpexec.library. It could contain a legacy-free version of SysBase for each CPU and run the task scheduler on them. The core of this should be handled in the current scheduling code, by using a local (per-APIC) instance of this state instead of the ExecBase version. I had changed the scheduler to do this in my old code via TLS, and it appeared to work fine on the boot processor, but I had not yet gotten to launching the scheduler on the APs, since I am not entirely sure how the scheduler should be launched correctly and suspect it needs each core to have a fake VBlank interrupt to trigger the "engine".

I had also added the aforementioned changes to the scheduler to handle per-task time usage accounting, and interrupt handlers for all the APIC vectors, which provide some very interesting details about what is going on from the APIC's point of view.

That IRQ routing is not trivial is no reason or justification to ignore it; the bounty requirement explicitly mentions it, since the ACPI resource should be setting up the system with the information that is available from ACPI without the need for AML. Agreed about IRQs, then; so let's define what needs to be done: 1. disable the legacy PIC; 2. reconfigure PCI devices to use APIC interrupts. That sounds about correct: the IRQ handling code needs to use the APIC code (which is currently in kernel.resource, I believe) instead of the PIC code (and should default to the PIC unless other options are found) when acpi.resource determines we have better than the PIC available. If IOAPICs are found (via the necessary tables), then we need to configure those to route IRQ delivery, preferably to the least active core; but that would need fixes to most of the current interrupt handlers, so that they at least lock access to the hardware's resources.

In fact, SysBase->IdleCount and SysBase->DispCount are working now, so it's possible to measure usage, at least in percent. FYI, DispCount and IdleCount are not enough to calculate CPU usage. You need to measure the time the CPU spends in idle mode and compare it against the total time. Look at sam440/efika for more details: it remembers the values of the time base counter at certain points. For each task it remembers the amount of time the CPU is busy with it, and the idle time is remembered in the same way. Each second it compares the amount of time spent in idle mode against the total.

Therefore, CPU usage (in %) works out as 100 - (idle_time * 100) / total_time.
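
A sketch of that bookkeeping, assuming a hypothetical ReadTimeBase() that returns a free-running counter value, sampled when entering and leaving the idle loop:

    extern UQUAD ReadTimeBase(void);     /* hypothetical timestamp source */

    static UQUAD idle_ticks, idle_enter;

    void IdleEnter(void) { idle_enter  = ReadTimeBase(); }
    void IdleLeave(void) { idle_ticks += ReadTimeBase() - idle_enter; }

    /* Once per second: usage% = 100 minus the idle share of the window */
    ULONG CPUUsage(UQUAD window_ticks)
    {
        ULONG usage = 100 - (ULONG)((idle_ticks * 100) / window_ticks);
        idle_ticks = 0;                  /* start the next window */
        return usage;
    }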

How is usage measured on classic AmigaOS? If I were to write such a utility for classic, I would create a task with the lowest possible priority, with tc_Switch and tc_Launch set, measure how much time the CPU spends in it, and compare that against the system time.
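
A sketch of that classic approach; tc_Switch/tc_Launch and the TF_SWITCH/TF_LAUNCH flags are real exec.library features, while the timestamp source and the bookkeeping are illustrative:

    #include <exec/tasks.h>
    #include <proto/exec.h>
    #include <clib/alib_protos.h>        /* CreateTask() from amiga.lib */

    extern ULONG ReadTick(void);         /* hypothetical timestamp source */

    static ULONG idle_ticks, launched_at;

    /* Called (in supervisor mode) when the idle task gains the CPU... */
    static void IdleLaunch(void) { launched_at = ReadTick(); }
    /* ...and when it loses it */
    static void IdleSwitch(void) { idle_ticks += ReadTick() - launched_at; }

    static void IdleBody(void)
    {
        for (;;) ;                       /* runs only when nothing else will */
    }

    void StartIdleMeter(void)
    {
        /* Priority -128: scheduled only when every other task waits */
        struct Task *t = CreateTask("idle meter", -128, IdleBody, 4096);

        t->tc_Switch  = IdleSwitch;
        t->tc_Launch  = IdleLaunch;
        t->tc_Flags  |= TF_SWITCH | TF_LAUNCH;
    }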