Aros/Platforms/68k support/Developer/Exec
ArosBootStrap
Can I use the MAPROM feature on the Blizzard if I change the address from
- Main ROM (0xf80000 - 0xffffff)
ROMLOC_rom := 0x0f80000 to 0x4FF80000 - 0x4FF8FFFF
Would I need to tell ArosBootStrap, or does it follow that pointer? ArosBootStrap is not compatible with any kind of external address remapping. That address is used to build normal 2x512K ROM images; ArosBootStrap uses a relocatable ELF image.
Note that the ROM detects ArosBootStrap mode and automatically uses the MMU (if available) to remap the "ROM" to fast RAM if ArosBootStrap originally loaded it into chip RAM (this happens when the available fast RAM is not suitable; for example, Blizzard A1200 accelerators' fast RAM has this problem because it is not autoconfig). Check the log for MMU messages.
It apparently has to be UBYTE m68060[0x12]; I guess mc68060 conflicts with some other compiler symbol when compiling for 68060, or something like that. That union is only used to reserve space for the largest FPU stack frame (68882); the variable names are not important. (Perhaps prefix them with fpu_?)
Noticed a peculiar quirk of InternalLoadSeg_ELF.
It was doing Seek() calls on the BPTR passed into it. This in itself wasn't too strange, except when funcarray[] has an override for Read (see workbench/c/AddDatatypes.c) which operates on an in-memory data structure instead of a file.
As I also needed the in-memory seeking capability for loading GZIP compressed ELF files into RAM (don't ask), I modified InternalLoadSeg and friends to use a 4th funcarray member (overriding Seek) to provide this capability.
PPC maintainers: please double-check my changes to your files, and also see if they can be merged back into rom/dos/internalloadseg_elf.c.
Shutdown
Is Exec/ShutdownA(SD_ACTION_POWEROFF) defined for any Amiga machine? If so, what do I need to do to make that machine power off? There is no soft-power hardware in any Amiga model.
For other machines, what is the quickest way to make the screen go all black? (This is probably a copper list thing, right?). AFAIK each graphics driver installs a reset handler hook that blanks the screen. Reset handlers are probably not called upon ShutdownA(SD_ACTION_POWEROFF), but maybe they should be.
Opening windows seems to take a LOT of memory. Is AROS allocating a full bitmap for each window?! Or am I opening them wrong? Is there some magic OpenWindow() tag that says 'no backing store'? Only smart-refresh windows should use extra window bitmaps (at least on AOS).
It turns out that Deallocate() of NULL was the problem child. I've already committed a fix to make Deallocate() of NULL a no-op. I'll revert the UAEGfx change tonight.
Exec
What privilege mode are task->tc_Launch(), task->tc_Switch(), and Exec/Exception() supposed to run in?
I would assume in User mode, but given that core_Dispatch can be called from Switch(), wouldn't that mean that tc_Launch() could be executed in Supervisor mode?
The other important missing part seems to be expansion.library autoconfig board handling (and a tricky extra: exec/supervisor stack relocation to fast RAM if fast RAM is detected). It will be simpler than the Exec/Dispatch() and Exec/Supervisor() implementations!
However, you may want to look at the exec/child* family of calls, as it looks like they are casting in a funny way, i.e.:
child = FindChild((ULONG)tid);
I don't know if this is safe for x86_64, I don't know your tid implementation.
A typical 68k asm coded PutChProc function does:
move.b d0,(A3)+
rts
A3 = PutChData (typically a string buffer), and the function relies on getting back the modified A3 (pointing to the next byte/char to be poked) the next time PutChProc is called.
Example: "Hello", A3 = 0x100000
call PutChProc("H", 0x100000)
call PutChProc("e", 0x100001)
call PutChProc("l", 0x100002)
call PutChProc("l", 0x100003)
call PutChProc("o", 0x100004)
So A3 is basically an input+output parameter, not just an input parameter.
You can also try getting exec into real autoconfig fast RAM (after the ConfigChain() call) if you still have too much free time. Are you talking about moving the "Boot Task" stack, or moving all of Exec out of ROM and into RAM? No, I meant ExecBase only. Official 2.0+ ROMs move ExecBase to real autoconfig fast RAM (instead of 0xC00000 slow RAM) if it exists. This is done because chip RAM and "slow RAM" (which actually has exactly the same speed as chip RAM) are relatively slow on real Amigas. That's pretty easy to do, then. I'll add it as an arch function to either rom/expansion or rom/strap.
About dynamic relocation:
I am attempting to load the relocatable 1M ROM image to the end of available fast RAM. Technically it already works, but there is a problem with autoconfig RAM boards that disappear during reset. (The UAE Z3 board does not, at least in WinUAE; A3000/A4000 motherboard RAM is also "safe".)
1: Let the original KS (which is in ROM) do the autoconfig stuff and add a ColdCapture/KickTag hack that copies the ConfigDevs to the AROS expansion list. - The AROS autoconfig implementation is unused; this won't help us find any bugs. - KS expansion behavior may have undocumented features that can break the copy phase.
2: Put a ColdCapture/KickTag in chip RAM that points to the AROS expansion.library (also in chip RAM). It runs the autoconfig first phase (enabling the RAM board where the "ROM" is located), stores the data in chip RAM, and jumps to the AROS ROM, which detects this situation and only collects the autoconfig data without rerunning autoconfig again. - Is it possible to have a separate relocatable file that only contains expansion.library? (Jason?) - Can't use any exec routines; some patching is needed. (Just use absolute chip RAM addresses, no one cares, they are temporary anyway.) (Yes, I know that chip RAM also temporarily gets replaced by ROM when a reset is executed, but this can be worked around, even on a 68000, without crashing. One game even used this as part of its copy protection.)
3: Just copy the current autoconfig data and jump to the ROM image in RAM without a reset. Works only once (any reset kills it), and any boot ROM boards are not visible to the AROS ROM. Too stupid for my tastes :)
Option 2 is not easy, but it would be compatible with all Amiga models (as long as there is at least 1M of fast RAM). Testing is too difficult for "normal" users as long as it needs specific hardware or an EPROM burner. (Unfortunately you have to autoconfig ALL boards; you can't choose a specific RAM board, unless that RAM board is the first board in the autoconfig chain, but I don't think you can assume that.)
How does WHDLoad do it? Can we generate a ROM image/relocation map WHDLoad can use? It uses .RTB files, which are also some kind of relocation files (afaik they were originally used by some ROM image loader). But personally I'd prefer everything in a single file; it is too easy to mix different versions (perhaps even the loader should be included; a Titanics Cruncher-like "pseudo-overlay" file is easy to create). Anyway, I don't really care much until I have a working expansion.library autoconfig hack (+ Gayle IDE driver port).
Removed the softint check (r36842) in m68k-amiga Disable() and moved all processing to Cause(), because m68k Disable() and Enable() should be as short and as fast as possible. Softints are "automatic" when using Paula interrupts; there is no need to check them in each Enable() call. (m68k-amiga interrupt processing should probably be completely in assembly; it needs to be really optimized if this thing is going to be useful on an A500. But it is much too early now.)
The crash occurs when sprintf() is called. Isn't sprintf() in arosc.library? Of course, you could always replace that with a call to RawDoFmt() and remove the arosc.library dependency. I have half a mind to do that for all of workbench/c anyway (remove the arosc.library dependency)
The AOS 3.1 iPrefs utility patches into RawDoFmt() for localization, and doesn't understand the AROS 'magic' constants:
- RAWFMTFUNC_STRING
- RAWFMTFUNC_SERIAL
- RAWFMTFUNC_COUNT
Would anyone mind if I made those 'magic constants' point to real m68k functions on AROS m68k, for better 3.1 support? This changes the magic constants for m68k to point to functions, but continues to support the 'NULL == RAWFMTFUNC_STRING' assumption of AOS 4.x and MorphOS.
Load up locale.library, have it patch AROS' RawDoFmt(), and then patch it too, with a wrapper that properly translates the special codes into real functions. Before exiting, SetPatchAROS must remember to unpatch RawDoFmt() before unloading locale.library. I like that more than a pop-up. It'll be a little hacky in Exec/SetFunction(), but doable.
So WB locale.library SetFunction()'s RawDoFmt(), and then AROS programs that need non-AOS extensions (RAWFMTFUNC_STRING and others) stop working, right? Extending RawDoFmt() was a bad idea. RawDoFmt() should only do what the original Autodocs say, and all AROS programs should use VNewRawDoFmt(). Solution: replace or add a wrapper macro that wraps all AROS RawDoFmt() calls with VNewRawDoFmt(). Well, while that does fix the WB-on-AROS issue, it makes AROS userspace on an AOS ROM pretty much impossible (out of room in the AOS exec.library vector space), unless SetPatchAROS relocates and extends exec.library. In that case, I might as well make an external 'exec.library' that replaces the AOS one.
The Facts:
- AROS RawDoFmt() accepts 'special' PutChFunc vectors 0, 1, and 2
- AOS RawDoFmt() assumes that PutChFunc always points to a valid function
- AOS locale.library uses SetFunction() to update Exec/RawDoFmt() to one with AOS PutChFunc conventions
APTR realRawDoFmt;

AROS_UFP4(fixupRawDoFmt, blah, blah)
{
    If PutChProc is a magic vector, make it a real function.
    Call realRawDoFmt;
}

... Exec/SetFunction (AROS) ...

if (library_to_patch == SysBase && function_to_patch == LVO_RawDoFmt) {
    realRawDoFmt = vector_of_patch;
    vector_of_patch = fixupRawDoFmt;
}
Set library_to_patch -> function_to_patch = vector_of_patch
That's actually even better than what I had in mind, since it doesn't depend on locale.library; anything that attempts to patch RawDoFmt() will get the wrapper around it. That is, if you don't consider patching SetFunction() itself hackish. What happens now when AROS' locale.library patches RawDoFmt()? It goes through the magic-to-real-function translator fixup too, which adds 16 more m68k instructions to every call. We will move all the m68k-specific stuff in rom/exec/setfunction.c to arch/m68k-all/exec/setfunction.c; that way it'll actually end up as a 'cleanup' for the other architectures. Should these AOS compatibility hacks be clearly marked or put inside some ifdefs? (They should be easily found, or someone will sooner or later forget about them completely and it will get quite confusing.) Putting extensive comments into arch/m68k-all/exec/setfunction.c.
Only one problem I can think of:
- Someone who tries to replace RawDoFmt will remove the chain instead and get unconverted values in his replacement.
- Someone who happens to chain RawDoFmt to catch exactly the same values will have to actually receive them
But from the call, we can't tell the chaining apart from the replacement, so how do we know if we have to add a converter patch to that patch as well?
It is a very minor issue, so it's a matter of whether we want guaranteed full compatibility, or just a good approximation.
Plus, I don't think there's any unpatching. I'm not sure whether that is going to cause notable problems.
But if VNewRawDoFmt() uses those constants, doesn't it have to check against them? For comparison it wouldn't matter that they are really static function pointers; being symbolic, comparison is the only operation allowed for recognising them.
Since AOS doesn't have the extended task structures, this code sets acpd = NULL->iet_acpd. Fix arosc not to use private fields in the task structure. You can use AVL trees for the association (OS3.5+), or duplicate these functions in arosc statically. Remember also that arosc.library relies on some other AROS extensions like NewAddTask() and NewStackSwap().
ACPI. Kernel/exec init is tricky; it perhaps can't be done all at once. Exec is already initialized twice (and even three times, if we count the kernel.resource pickup). Kernel.resource should be initialized at priority 127, because in future even AllocMem() won't work without it (it will work on top of the kernel's page allocator).
SAD debug
For debugging before PrepareExecBase(), you have your own bug() macro in kernel_debug.h. For very early debugging in kernel.resource you can use the kernel's own bug() definition, which statically calls KrnBug(). It doesn't use exec in any way. Exec's facilities are up and running after calling exec's init code, which fills in KernelBase. Note that no code other than kernel.resource's startup code can be run before PrepareExecBase(). Remember also KrnPutChar() and the internal krnPutC().
How do I enable SAD early? I currently have a (poorly licensed) m68k gdb stub I'm using to provide debugging for my ROM; it has to go when I commit. Implement KrnPutChar() and KrnMayGetChar() in your kernel.resource and it will work. Note that Alert() will not call it, because the current alert routine is very basic and does not process any input. This is done because there's no universal input hardware on PC. In fact this needs to be worked on. Perhaps an alert needs to take over the screen, print information on it, then take over input and ask for some command from it. The RFC I wrote about debug channels is one small step towards implementing this mechanism.
For list review. This patch enables the '--with-paranoia' ./configure option, and gives an example usage in rom/exec.
Semantics:
./configure                        => No paranoia
./configure --with-paranoia        => PARANOIA_CFLAGS=-Wall -W -Werror
./configure --with-paranoia=-Wmega => PARANOIA_CFLAGS=-Wmega
This allows (a) no changes to the build process, (b) devs to enable paranoia *for themselves* and (c) devs to enable paranoia *only* on targets they think are clean. This way, once all the -Wall issues on a library are cleared, it will stay that way.
People use different compiler versions, and those versions report different warnings - for example, right now the 4.4 series reports *tons* of strict-aliasing problems when compiled without debugging. We would probably end up in a situation where a certain module builds for 9 out of 10 people, but the unlucky one person is not capable of or not inclined to do the fixes.
Whilst I agree with some of the warnings in -Wall, some of them can be wrong (e.g. "variable x may not be initialised"), or try to enforce a particular coding style (e.g. "consider using parentheses around assignment used as truth value").
We're aiming to avoid the use of -fno-strict-aliasing for performance reasons.
I think USER_CFLAGS is overused in the mmakefiles; IMO it should only be used in special occasions when a certain symbol needs to be defined etc. Most programs/libs should be compiled with the default CFLAGS as generated by configure.
BRA := "\("
KET := "\)"
TST := "test$(BRA)test$(KET) test"
USER_CFLAGS := -DDEFINE=\"$(TST)\"
or
USER_CFLAGS := -DDEFINE=\"test\(test\)\ test\"
Is the backslash after the last bracket really necessary though? I thought it was only needed to specify that the following character was a special case? Yes, it's needed to escape the space and make the whole thing one command-line argument.
Additionally you would not need to duplicate debugging functions. And one more thing, about GDB stubs: maybe you should consider integrating them into the existing SAD somehow? The ability to debug any machine remotely with gdb would be very nice. Unfortunately, gdb stubs are very machine-specific; I believe they would need to be rewritten for every port. I plan to remove the GDB stubs once I get to the point where SAD works.
Well, SAD really needs a face-lift then. It's a very old thing.
This message is addressed mainly to Jason and Toni. If you look at arch/all-<cpu>/include/aros/<cpu>/cpucontext.h, I've written CPU context definitions for all architectures except m68k. A pointer to such a structure will be used in two places:
- It is passed as third argument to kernel exception handlers (added using KrnAddExceptionHandler()).
- It is passed as second argument to exec trap handler (tc_TrapCode).
The primary purpose of this is to unify and extend the crash handling code, and to provide possibilities for third-party developers to write debugging tools which can catch exceptions and analyze task state. The PowerPC context is binary-compatible with AmigaOS 4; I expect the m68k context to be binary-compatible with m68k AmigaOS. I know that on m68k
tc_TrapCode gets the whole context frame on the stack; this should be the only difference from other ports. I.e. on m68k we should take the context pointer as follows:
void MyTrapHandler(ULONG trapCode)
{
    struct ExceptionContext *regs = (struct ExceptionContext *)(&trapCode + 1);

    /* ... process exception here ... */
}
Also you'll need to write m68k-specific cpu_init.c, KrnCreateContext() and PrepareContext(). Please look at other architectures for examples.
Short explanation:
- kb_ContextSize needs to be set to the total size of your context.
- kb_ContextFlags can be used for any purpose you want. i386 and ARM use it to specify FPU type. Perhaps you won't need it at all because on m68k you have SysBase->AttnFlags. These two variables are set by cpu_init.c, which performs startup-time CPU probe.
KrnCreateContext() allocates the context frame and sets some initial data (if needed). The common use is to create the initial FPU frame. The common part of your CPU context should be sizeof(struct AROSCPUContext). This is needed for hosted ports, because on hosted you need to store some host-specific private data as part of the CPU context. If you look at the hosted CPU definitions you'll see struct ExceptionContext at the beginning of struct AROSCPUContext. On native you are expected just to:
#define AROSCPUContext ExceptionContext
Optional data (like FPU context) follow struct AROSCPUContext in the same block.
PrepareContext() is not changed, i just expanded all macros. Since the context is unified, you don't need to define the same macros for every port any more.
The only legacy macros still needed in kernel_cpu.h are GET_PC and SET_PC. They are used by exec's crash handler. They will disappear after some time. PRINT_CPU_CONTEXT in fact prints useless crap, so you may safely remove it. Debug() will work without it.
One should remember that BPTRs don't really exist on (all?) other ports.
> struct ExceptionContext
This is the public form of the AROS-side context. This is what AROS exception handlers expect to get. It is identical on all architectures using the same CPU.
> regs_t
This is raw stack frame produced by CPU. On hosted AROS it's an alias to host OS context structure. On native it can be identical to ExceptionContext.
> struct AROSCPUContext
ExceptionContext + private part. Makes sense on hosted (where you save host-specific stuff). On native it is expected to be identical to ExceptionContext. struct AROSCPUContext contains struct ExceptionContext in the beginning.
> ucontext
UNIX name of context. regs_t is an alias of it on UNIX-hosted.
> I'm trying to get m68k to be similar to all the rest of the architectures, but there's been a lot of churn in the trap/exeception area, and too little documentation (or, at least, I don't know where the documentation is).
This is a newly designed thing. I provided some comments in the source code; I hope this is enough. I'm sorry, I currently have too little time and can't even read the mailing list actively. The main idea of what was done is unification of the CPU context format per CPU. So struct ExceptionContext is the same on the same CPU, no matter whether it's a hosted or native system. Yes, I studied AmigaOS exec trap handling; I know about the quirk. I would suggest using an asm stub for it. I commented on this in the code.
Please put the indicator FIXME or TODO in your comment for things that need attention later. It makes it easier to find them again later and not forget about them. E.g.
/* fetch Task pointer before function call because A6 can change inside initialPC * (FIXME: temporary hack) */
Add MoveExecBase() that the m68k-amiga port can use to move exec from chip/slow RAM to autoconfig real fast RAM. Out of curiosity: does this give any improvements on WinUAE, or is it targeted at real hardware? Real hardware: ExecBase or any other commonly accessed system structure in chip RAM (or "slow" RAM) can cause a noticeable slowdown compared to real (accelerator board) fast RAM, especially on accelerated OCS/ECS Amigas.
16-bit OCS/ECS chip RAM vs accelerator 32-bit fast ram speed difference can be huge, also chip ram is not cacheable.
KS 2.0 was first official ROM that introduced this exec transfer to fast ram feature.
(This will get really tricky if we want working reset proof programs)
Scheduler
You need to use the existing kernel.resource. It already has a complete scheduler; you just need to write the CPU-specific code and you're done. Look at the core_* code in rom/kernel/kernel_schedule.c.
Note that in the future there can be a better scheduler (remember the KrnSetScheduler() function). One more note: the startup code (start.c) in the boot directory should IMHO be in the kernel directory, because it is actually a part of kernel.resource.
Look at the Windows-hosted and UNIX-hosted ports. They are the most recent, and they are engineered using the latest model. The x86-64 and PPC ports are just older; they don't use common code, but they served as a base for my implementation. I just didn't rewrite them because I don't have these machines and can't test them. In fact the boot directory contains an external bootstrap program, which is supposed to load the kickstart image into the machine's RAM and execute it. On Amiga the kickstart is in ROM, and it doesn't need any bootloader (well, it might have one if you leave an option not to reflash the ROM but to reckick the Amiga programmatically; in that case the kickstart swapper will be your bootstrap).
I wrote it when I finished kernel.resource rewrite. It is still incomplete in places and lacks porting HOWTO.
Make sure task switches are done only when returning from supervisor to user mode, but not when returning from supervisor to supervisor (interrupt inside interrupt). x86 native had a similar disappearing-task problem looooong ago, caused by a buggy "do we return to usermode or not" check in the exitintr handling. The code in arch/m68k-amiga/kernel/amiga_irq.c only calls core_ExitInterrupt() when returning to user mode. Of course, I could be wrong. I would appreciate a second set of eyes on my arch exec and kernel code, now that I more closely conform to the standard conventions. Syscalls are handled in amiga_irq.c (via the F-line trap) and only for user mode; all other interrupts either trap to the debugger or (for Paula IRQs) go through the Paula handler in amiga_irq.c.
On a slow processor it is more visible, and task switching must be as fast as possible. It had nothing to do with the scheduler; the problem was too-low handler process priorities. Too-low-priority DOS packet handlers plus someone using all the CPU time = console and filesystem I/O crawls. You can easily confirm it on AOS by changing all handler processes' priorities to zero or lower :)
SysBase
In expansion.library and scsi.device, there appear to be specially crafted 16-bit illegal instructions that the 'original' Exec would look for and set D0 appropriately after trapping them. Is there any documentation anywhere for these 'illegal instruction' traps? You must be using Amiga Forever 3.x ROM(s); they have at least one special illegal instruction to make them incompatible with real Amigas (UAE has a ROM-check hack that fixes this). AFAIK it was part of the license that the ROM images must not work on real Amigas. Make sure you have a real original Kickstart ROM image.
The solution is to make sure that when you have that 'SysBase' is the global, not a local copy.
SysBase = PrepareExecBase(...)
One small question: why do you think PrepareExecBase() should not set the global SysBase? I remember I also once came up with such an idea, just because I thought it didn't look good. After changing this I realized how wrong I was. Many things in unexpected places may rely on the global SysBase. I remember I had early debug output somewhere, and this broke it. Perhaps that output is not even there any more, but this proved that the idea was bad. The global SysBase should be set up as early as possible - this means before PrepareAROSS. I was misled by how PrepareExecBase() was returning SysBase, and all of its callers were using it as 'SysBase = PrepareExecBase()'. It just looked like a typo that PrepareExecBase() was missing a local 'struct ExecBase *SysBase'. Maybe PrepareExecBase() should return void, so that its callers have to *explicitly* pick up the global SysBase?
Also, is goUser() supposed to drop down to user privileges always, or restore the privileges that were there before goSuper()? It switches to user mode. There's also a goBack() definition which jumps to the previous mode remembered by goSuper().
One last thing - what privilege mode are task->tc_Launch() and task->tc_Switch() supposed to run in? The Launch and Switch callbacks are called directly from within supervisor mode, and I believe that's okay; I think the original Amiga does the same. Anyway, you are not going to do something long-running in these callbacks.
And what mode is Exec/Exception() supposed to run in? Exception() (unlike in AmigaOS) is called when all arch-specific preparations are already done, and you are in user mode in the task's context. This code just checks signals and calls the appropriate routines; it does not contain any arch-specific code.
In order to process an exception correctly you need to save your task's context (which cpu_Dispatch() is going to jump to) somewhere, and then adjust the context so that your task jumps to the exception handler. The handler should call exec's Exception(), then pick up the original task's context (the one you saved in cpu_Dispatch()) and jump to it.
The result looks like your task just calls its exception handler. You may look at the Windows-hosted implementation as an example of a working one. The UNIX-hosted implementation does not work (at least on PPC); on native ports exceptions also don't work. Just note that on m68k-native you know 100% what's on your stack, so you don't need that trick with passing the context to the exception handler. You may save it right on the task's stack instead (this is what the UNIX-hosted version does, and this is why it doesn't work).
Interrupts
With my last change (where I remembered to re-enable the hardware interrupts before going to the CPU idle state in cpu_Dispatch()), I appear to be able to get to the KickStart 1.3 Intuition idle loop.
However, it appears I am missing something, because (other than a pretty white screen) I get nothing more.
I *think* I need to call either KS's "strap" or "romboot.library", or AROS's "dosboot.library", but I'm not quite sure *where* to call those from. FWIW, AROS's strap is called as a result of it being in the resident list (at priority -50). See rom/boot/strap.c.
The strap module does the disk block read in a real ROM before DOS initialization; it does not appear to be very well documented.
Semaphores
I'm having some structure layout issues, since I am getting corruption of the MH free lists when I try to init the KickStart 3.0 libraries from AROS exec. (Oddly enough, they seem to init quite a bit. The memory corruption occurs in expansion.library after a call to Exec/InitSemaphore, and another occurs near the Graphics/Screen OpenScreen when called by Intuition - probably both the same structure.) Are the AmigaOS 3.1 .i includes a good reference for the byte-level layout of AmigaOS structures (i.e. SignalSemaphore, etc.)?
Does your compiler use WORD (2-byte) alignment for LONGs etc., as expected by AOS? If not, you may need to have the headers like in AOS4/MOS, where they surround stuff with:
#pragma pack(2)
[...]
#pragma pack()
NewAddTask() wants to align the stack to 16 bytes. I've turned that off in AROS_FLAVOUR_BINCOMPAT for now. This alignment is needed for e.g. the PPC ports; of course other ports might not need it.
I've noticed on KS 3.1 that some callers of Exec/InitStructure set bit 16 of the size field to 1. Don't know why. For now, I have to mask that out (which limits InitStructure to only being able to handle structures up to 64k in size).
That's likely a case of a function where the original function only looks at the lower 16 bits even if the library prototype for the param says LONG or ULONG. So the upper 16 bits may contain trash and some other code may rely on that (== actually have trash in upper 16 bits when calling the function).
One can see this also in other places, like graphics.library. There in some functions we use
x = (WORD)x
(FIX_GFXCOORD macro) to kill trash in upper 16 bits.
E.g. if the prototype is LONG, the intended value width is WORD, and the passed trash is < 0, then why should the typecast be sufficient? Shouldn't it be
x = (WORD)(x & 0x0000FFFF)
to be sure that you only get the lower 16 bits? I tried the plain typecast on x86 (gcc 4) and 68k (gcc 2.95), where it works.
#include <stdio.h>

void func(int param)
{
    int fixed = (short)param;
    printf("%d (%x) %d (%x)\n", param, param, fixed, fixed);
}

int main(void)
{
    func(0xF234fffe);
    return 0;
}
-231407618 (f234fffe) -2 (fffffffe)
You are right - when converting from (unsigned or signed) long to (unsigned or signed) short, the C rule seems to be to preserve the low-order word. Personally I prefer the "&" notation, as it shows exactly what is done, instead of keeping in mind what the compiler is known to do by some implicitly defined convention.
Memory
While fixing up the i386-pc exec initialization I again came across SysBase->MaxLocMem. The AmigaOS 3.x NDK describes it as "Max ChipMem amount". If so, why can't it be set by examining the MemList and summing up all MEMF_CHIP memory sizes? Why is there some magic dance with addresses? What exactly is its value? And the same question about MaxExtMem. I'd like to try to implement it correctly once for all platforms.
MaxLocMem is amount of chip ram (equals last address + 1 of chip ram on AOS)
MaxExtMem is last address + 1 (not size!) of "slow"/"ranger" RAM (0x00C00000) only. No slow RAM = NULL. It never includes any other local RAM regions.
I don't think these were designed to support multiple memory regions.
Devices
Documentation (and prototype files) say the return type is BYTE, but many programs (including the WB1.3 System/SetMap) assume a LONG return type. Both the KS 1.3 and 3.1 OpenDevice() fetch io_Error and then extend it (EXT.W D0 + EXT.L D0) to a LONG before returning it. Which one is wrong, the documentation or the implementation? (Or neither? :))
(The m68k-amiga port already has a hack that extends the OpenDevice() return code, but it gets overwritten by the dos LDDemon hook.)
in rom/dos/endnotify.c:
/* get the device pointer and dir lock. The lock is only needed for
 * packet.handler, and has been stored by it during FSA_ADD_NOTIFY */
iofs.IOFS.io_Device = (struct Device *)notify->nr_Handler;
iofs.IOFS.io_Unit = (APTR)notify->nr_Reserved[0];
The 'iofs.IOFS.io_Unit' is a pointer, and notify->nr_Reserved[0] is ULONG.
This code location and FSA_ADD_NOTIFY need to handle 64-bit pointers.
No workaround at present.
In rom/dos/deviceproc.c:
/* all good. get the lock and device */
SetIoErr(dvp->dvp_Lock);
res = dvp->dvp_Port;
The problem is that dvp->dvp_Lock is a BPTR, which can't be (safely) cast to a LONG under x86_64. As this function is marked as obsolete, I'm thinking that it should just throw ERROR_DEVICE_NOT_MOUNTED on x86_64. I'm not sure how workable this would be - we probably made lots of assumptions throughout the whole codebase that make this no easy feat - but one idea would be to treat a BPTR as an opaque handle: on x86 it would just be a pointer, for speed purposes, whilst on other architectures that don't allow such an "optimization" it could be a key into a dictionary of some sort (implemented with a hash table, a binary tree, or something like that).
Would there be any harm in making pr_Result2 an SIPTR instead?
This is our "old" (unused) USB stack in trunk/contrib/necessary/USB: classes/HID, classes/MassStorage, stack/stubs. The current stack is in rom/poseidon. The old one is used in ppc-efika, though that port isn't working at the moment. On the other hand, I hope to make some progress on the sam port.
Does anyone happen to know what the IECLASS_TIMER tick rate of input.device on AmigaOS was? (Google and the Autodocs have been very unhelpful - they only say that IECLASS_TIMER is a 'timer event', but don't say at what rate the events occur). It's set to once every 100 milliseconds in AROS. Is that correct? Or should it be the VBlank rate? Or slower?
Trackdisk.device
MFM decoding and some simple hardware poking (Paula and CIA) needed.
For CIA, the skeleton should remain in rom/cia AFAIK - you should just add the Amiga-specific files into arch/m68k-amiga/cia and use the %build_archspecific macro to override the initial files with architecture-specific ones.
disk.resource GetUnit() never enables the disk DMA finished interrupt. It only "worked" because serial transmit and disk interrupts use the same interrupt level. It is trackdisk.device that enables the interrupt.
An error appears if I add the arch/m68k-amiga/cia directory and put my Amiga-specific files there (seticr.c and ableicr.c for example).
Target is amiga-m68k.
arch/m68k-amiga/cia/mmakefile.src is simple:
include $(TOP)/config/make.cfg
FILES := seticr ableicr
USER_CFLAGS := -I$(SRCDIR)/rom/cia

%build_archspecific \
    mainmmake=kernel-cia maindir=rom/cia arch=amiga-m68k \
    files=$(FILES) modulename=cia
trackdisk.device should be located in... m68k-amiga/devs/
and HIDDs in... m68k-amiga/hidd/
and resources on the other hand go directly to m68k-amiga - I believe this maps the layout of the AROS/rom directory.
"Retro"-specific question: how are we supposed to configure ROM for "compatible" mode/model specific configuration, for example unexpanded A500? ROM must not use too much RAM for advanced stuff (for example trackdisk.device must not allocate DMA buffer for HD drives, 15k vs 30k is huge difference in chip ram usage. Currently I do this dynamically, reallocate HD sized buffer only if HD disk is inserted, note that HD drives report being DD unless HD disk is inserted).
Not too important today, but a free ROM that can boot most A500 games and demos is my goal :) (full compatibility is of course impossible, some very old games even jump directly into ROM..)
I also noticed (previously I didn't know it was that bad) that the m68k code produced is really terrible, inefficient and looooong.. (hopefully only because of the options used?)
Code that reads drive IDs (from my disk.resource implementation):
void readunitid_internal (struct DiscResource *DiskBase, LONG unitNum)
{
    volatile struct CIA *ciaa = (struct CIA*)0xbfe001;
    volatile struct CIA *ciab = (struct CIA*)0xbfd000;
    UBYTE unitmask = 8 << unitNum;
    ULONG id = 0;
    int i;

    ciab->ciaprb &= ~0x80;      // MTR
    ciab->ciaprb &= ~unitmask;  // SELx
    ciab->ciaprb |= unitmask;   // SELx
    ciab->ciaprb |= 0x80;       // MTR
    ciab->ciaprb &= ~unitmask;  // SELx
    ciab->ciaprb |= unitmask;   // SELx
    for (i = 0; i < 32; i++) {
        ciab->ciaprb &= ~unitmask;  // SELx
        id <<= 1;
        if (!(ciaa->ciapra & 0x20)) // RDY
            id |= 1;
        ciab->ciaprb |= unitmask;   // SELx
    }
    if (unitNum == 0 && HAVE_NO_DF0_DISK_ID && id == 0)
        id = 0xffffffff;
    DiskBase->dr_UnitID[unitNum] = id;
}
The result is this (start and end removed):
00FE8AB6 206f 0010       MOVEA.L (A7, $0010) == $0000ee78,A0
00FE8ABA 7208            MOVE.L #$00000008,D1
00FE8ABC 2008            MOVE.L A0,D0
00FE8ABE e1a9            LSL.L D0,D1
00FE8AC0 1039 00bf d100  MOVE.B $00bfd100,D0
00FE8AC6 0200 007f       AND.B #$7f,D0
00FE8ACA 13c0 00bf d100  MOVE.B D0,$00bfd100
00FE8AD0 1039 00bf d100  MOVE.B $00bfd100,D0
00FE8AD6 1401            MOVE.B D1,D2
00FE8AD8 4602            NOT.B D2
00FE8ADA c002            AND.B D2,D0
00FE8ADC 13c0 00bf d100  MOVE.B D0,$00bfd100
00FE8AE2 1039 00bf d100  MOVE.B $00bfd100,D0
00FE8AE8 8001            OR.B D1,D0
00FE8AEA 13c0 00bf d100  MOVE.B D0,$00bfd100
00FE8AF0 1039 00bf d100  MOVE.B $00bfd100,D0
00FE8AF6 0000 ff80       OR.B #$80,D0
00FE8AFA 13c0 00bf d100  MOVE.B D0,$00bfd100
00FE8B00 1039 00bf d100  MOVE.B $00bfd100,D0
00FE8B06 c002            AND.B D2,D0
00FE8B08 13c0 00bf d100  MOVE.B D0,$00bfd100
00FE8B0E 1039 00bf d100  MOVE.B $00bfd100,D0
00FE8B14 8001            OR.B D1,D0
00FE8B16 13c0 00bf d100  MOVE.B D0,$00bfd100
00FE8B1C 327c 0020       MOVEA.W #$0020,A1
00FE8B20 7000            MOVE.L #$00000000,D0
00FE8B22 1639 00bf d100  MOVE.B $00bfd100,D3
00FE8B28 c602            AND.B D2,D3
00FE8B2A 13c3 00bf d100  MOVE.B D3,$00bfd100
00FE8B30 d080            ADD.L D0,D0
00FE8B32 1639 00bf e001  MOVE.B $00bfe001,D3
00FE8B38 0803 0005       BTST.L #$0005,D3
00FE8B3C 6604            BNE.B #$00000004 == $00FE8B42
00FE8B3E 7601            MOVE.L #$00000001,D3
00FE8B40 8083            OR.L D3,D0
00FE8B42 1639 00bf d100  MOVE.B $00bfd100,D3
00FE8B48 8601            OR.B D1,D3
00FE8B4A 13c3 00bf d100  MOVE.B D3,$00bfd100
00FE8B50 5389            SUBA.L #$00000001,A1
00FE8B52 b2fc 0000       CMPA.W #$0000,A1
00FE8B56 66ca            BNE.B #$ffffffca == $00FE8B22
No address-relative CIA addressing; move to register, do the operation, write it back.. Can't get any worse. Why does it use address registers as counters (subq.l #1,a1; cmpa.w #0,a1)? You have used the volatile keyword, so every operation on ciaprb is executed!
Consider:
ciab->ciaprb &= ~0x80;     // MTR
ciab->ciaprb &= ~unitmask; // SELx
to be replaced with
ciab->ciaprb &= ~(0x80 | unitmask); // MTR, SELx
and the generated code will be shorter... ;)
Change "int i" to "UBYTE i", since you're counting from 0 to 31 only. Otherwise you force gcc to allocate a 32 bit register for you :
I meant: why didn't it create something simple and short like:
and.b #$7f,$bfd100
and.b d0,$bfd100
(or even better, put bfd100 in some address register and do and.b #$7f,(a0))
Many data registers are totally unused. Does volatile force totally-completely-as-unoptimized-as-possible code? :) Sure! It forbids any excessive optimization, since the state of the variable can always change in an unpredictable way. A variable declared volatile will be accessed exactly as many times as your code suggests :P
Handlers
The con-handler steers the Shell's I/O to the console.device which draws output in a window and reads input from the keyboard. The con-handler used to open the window and pass it to the console.device, but now the console.device does it. The con-handler still handles name-completion, command history, etc., but has been changed considerably.
The console.device opens the display, reads the keyboard, handles the display history, multiple consoles in the one window (tabs), the menu, etc. It has been restructured and largely rewritten.
I managed to get DOS packets working yesterday (mainly functions that are needed at startup, like Open/Lock, Examine, GetDeviceProc, AssignLock). Lock and Open are using FileLock and FileHandle structures.
UAE FS boots until first executable is run, also CON needs to be converted soon.
But reason I posted this: NIL handler.
It is not possible to create a NIL handler because CreateProcess() needs NIL:, which needs CreateProcess(), and so on... The original NIL "handler" is nothing more than Open() checking for "NIL:" and returning a FileHandle with fh_Type = NULL (packet port).
The question is: does this cause issues with other ports? If yes, how do we solve it? (is some kind of special CreateProcess() needed?) There is a NULL handler filesystem on Aminet but I haven't looked at it in years. http://aminet.net/package/util/batch/NULL-Handler could be a possibility since it comes with source. The difference between NULL: and NIL: is that NIL: is a dummy filesystem while the NULL: filesystem is not.
Can't we just get rid of the NIL handler? Would that be a compatibility issue? Only if there are existing AROS-specific programs that expect (possibly accidentally) a "full" NIL handler (or NIL listed in the DosList as DLT_DEVICE). Actually this is a non-issue; a real NIL: handler can easily be started from DOSBoot manually (overriding the pseudo NIL) if needed.
The DOS packet conversion is advancing nicely; UAE FS boots now (and is faster for some reason), until Open(CON:) is called. Next problem: the console handler. It is quite difficult to test disk-based commands without seeing anything on screen.. It does not appear to be as simple a conversion as the NIL handler (which I converted before noticing it can't be used..) Do we have some older, real dos-packet-based version hidden somewhere? Nope, the very first checkin (in 1998) of console.handler used the FSA_* API.
diff --git a/arch/m68k-amiga/devs/filesys/console_handler/con_handler.c b/arch/m
index 2f0bc85..38c3bb9 100644
--- a/arch/m68k-amiga/devs/filesys/console_handler/con_handler.c
+++ b/arch/m68k-amiga/devs/filesys/console_handler/con_handler.c
@@ -297,10 +297,10 @@ static void startread(struct filehandle *fh)
 }
 
 #if (AROS_FLAVOUR & AROS_FLAVOUR_BINCOMPAT)
-
-    /* SegList points here, must be long aligned */
-    __attribute__((aligned(4)))
-
+    /* We use the GCC trick of .balign, which
+     * will pad us with NOPs
+     */
+asm ("  .text\n.balign 4\n");
 #endif
 LONG CONMain(void)
Why does this problem not occur with Amiga compilers? Just luck, or do they automatically align functions to 4 bytes? If the latter, maybe the same should be done in our 68k cross-compiler.
That is horribly bad code that only works accidentally.. There is nothing in the documentation that says input handlers have extra scratch registers. Do we need to save D2 and D3 only, or do other programs (illegally) modify other non-scratch registers too? Equally horrible was the Titanics Cruncher decruncher that calls dos functions using A5 as a base register while A6 pointed to something other than DOSBase; it only worked (accidentally, again) because dos packets can be sent without DOSBase... I think all registers need saving, but maybe it's possible to tell GCC that after the call of the event handler all registers are clobbered, so the following code uses no cached registers.
But when you look at software interrupts, the documentation says "don't trash A6", and for interrupts it says:
"(D0/D1/A0/A1/A5/A6) may be used as scratch registers by an interrupt handler"
And since nothing is stated about which registers an input handler may use, I think code on AOS is written assuming an input handler may use all of them. Don't assume something like that is allowed; show us the documentation which confirms your assumption. And since input handlers are chained into input.device as struct Interrupt, I assume the same rules as for software interrupts apply.
IMO the rule is: scratch registers are D0-D1/A0-A1; do not touch other registers unless otherwise specified in documentation. Do not just think, modify your test program to confirm it :) (see which registers can be changed without crashing the AOS input handler; I am quite sure there is at least one address register that can't be modified without crashing it).
Or maybe take a look at why the screen switcher and awin do not work on AROS. Maybe I am wrong, but to find out, a version of AROS that saves all registers would be useful. All these programs crash AROS totally.
http://aminet.net/package/util/wb/ScreenSwitch http://aminet.net/package/util/wb/awin
awin was written by kas1e, maybe he knows what can go wrong. Maybe commodity handlers have the same problem: they do not survive if a register is changed.
CON
The original reason for CON: is compiler/autoinit/stdiowin.c, which always sets the input and output streams to "CON://///AUTO/CLOSE". This forces a console window to open if the program does any read or write via Input()/Output().
This is correct; the behavior is copied from libnix. I suggest first testing on AmigaOS whether reading from this file really opens a console window. Perhaps only output opens it, and input does not. The window is not opened immediately, but upon first access.
While writing this I understood the origin. CreateNewProc() takes care of this, but when started from Workbench, the process' input/output are both NIL:. The startup code takes over both of them, and we end up with Input() not containing any pre-injected data.
I would suggest to test the following sequence on AmigaOS:
1. Open CON: with these parameters.
2. Try to read something.
3. Try to write something.
4. Try to read again.
If my guess is correct, step (2) will not cause the window to open, and will just return EOF. If so, our console handler needs to be fixed. This didn't happen originally (=wrong behavior) until (I guess) some dos-packet-related fix was moved to mainline. It's not packet-related; it relates to how AmigaOS handles a process' arguments. It injects them into Input().
KS3.1:
handle = Open("CON://///AUTO/CLOSE", MODE_OLDFILE)  /* console window is not yet open */
Read(handle, buf, 1)   /* window opens, waits for input */
or
Write(handle, buf, 1)  /* window opens */
Resources
There is a tiny chicken-and-egg problem. AOS does this when automounting with a 3rd party autoboot ROM (the UAE hardfile driver is a 3rd party autoboot ROM): FileSystem.resource is added.
Something adds FFS dostype nodes to FileSystem.resource. This happens before dos initializes (maybe when FileSystem.resource is initialized, or when FFS is initialized; according to the Guru Book it initializes before dos).
The autoboot ROM does its job: checks for partitions, loads filesystem(s) from RDB if installed and compares versions against the filesystems in FileSystem.resource; if there is no RDB FFS, checks FileSystem.resource, adds DosNodes, etc... Then dos initializes, and so on...
Issue: the AROS non-dospacket AFS.handler requires dos during init. Which means it can't be initialized before dos, so FileSystem.resource nodes can't be added by the AFS handler early enough. (I'd be happy to break it because it is wrong, but I guess I am in the minority :D)
But the AFS handler version and revision info is needed to populate the FileSystem.resource AFS entries properly... (the seglist isn't needed because a NULL seglist means "use rn_FileHandlerSegment", which can be set up after dos).
Also the resident list entries need to be added by something other than AFS.handler (I have no idea what), since again dos is needed to do it.
I guess the real problem is that AROS FS detection didn't use FileSystem.resource at all. m68k-amiga needs it because 3rd party boot ROMs expect it to be there and expect it to contain AFS dostypes on KS2.0+.
This remaining problem prevents booting from 3rd party boot rom driver RDB harddrives with FFS partitions without RDB LSEG FFS installed (real Amiga with for example any SCSI adapter or normal UAE hardfile driver) or normal partition hardfiles (UAE only).
AROS ata.driver works because it knows how to handle it, so does UAE directory harddrive because it is a filesystem, not a device driver.
Yes but note that AOS does not even have seglist field in FFS nodes (seglist bit in "patchflags" not set and seglist pointer is outside of allocated memory, they saved 4 bytes of RAM per node!), handler startup code uses rn_FileHandlerSegment in RootNode if seglist is null.
That would seem to imply that:
- FileSystem.resource initializes
- afs.handler inititializes by registering itself with FileSystem.resource
- dos.library initializes, looks up the 'afs.handler' entry in FileSystem.resource, and sets rn_FileHandlerSegment to the entry's fse_SegList
If AROS's AFS Handler didn't need DOS during init, we'd be fine. Yeah, as long as it works with higher resident priority than dos. If so, that may be the best way to go. All the AFS handler would do (during init) would be to register itself (version and handler LSEG) with FileSystem.resource? Yes.
Can we get rid of AROS_STACK_GROWS_DOWNWARDS? It seems to needlessly complicate a few things, and it does not seem to be consistently used throughout AROS. Do we actually support a 'stack grows up' architecture? Can anyone even think of one that isn't from before the 1980s?
It was for PA-RISC, I believe (I might be wrong), and that was still moderately common in the mid-90s when this code was written. (I recall it was damn near the fastest CPU at the time.)
That said, I think for practical reasons getting rid of it makes sense. I would expect AROS to run on a CPU with register windows before it ever runs on one whose stack grows the other way.