Aros/Platforms/68k support/Developer/HIDD

Introduction

Hardware Support

Phase 1: support programs that have a custom bootblock, possibly load a few more tracks using trackdisk.device, and then use a hardware-banging loader for the rest.

Phase 2: support programs that boot to CLI, using startup-sequence.

cia.resource (rom/cia) seems to be some kind of special case (the comment in mmakefile.src says it includes both ciaa.resource and ciab.resource). I can't get it to compile, not even the original skeleton version. If I try to compile it, I get a "<path>/build/bin/amiga-m68k/gen/rom/cia/include/cia_deflibdefs.h: No such file or directory" error; I guess something expects a "normal" library? The problem is most probably that rom/cia uses %build_module_simple to hack both ciaa/ciab into one module, while %build_archspecific most probably assumes %build_module was used on the generic files (or, to put it more clearly, that the genmodule application was used).

The solutions: if you want to have one physical module cia.resource which will contain both ciaa and ciab, copy rom/cia into arch/m68k-amiga/cia, change the target name to something like kernel-cia-amiga-m68k and just use that (don't use %build_archspecific at all). Or maybe just fix %build_archspecific instead? It's definitely a flaw; perhaps %build_module_simple just lacks the ability to work with %build_archspecific (I remember fixing the same thing for %build_prog). On weekdays I'll take a look if no one solves this first, it should not be that difficult. But how do I add it to the kernel-link-amiga-m68k final ROM image? Try make kernel-cia-amiga-m68k-kobj: there is an automagic something-kobj target created by the %build_module_simple macro.

btw, addicrvector.c is missing AbleICR(resource, 0x80 | (1 << iCRBit)); after successfully adding a new interrupt.
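For context, a minimal caller-side sketch (assumptions: simplified error handling, and a hypothetical kbd_int interrupt structure set up elsewhere) showing how a driver installs the CIA-A serial-port vector and then enables the interrupt itself, which is what the missing AbleICR() call above would normally do:

#include <exec/types.h>
#include <exec/interrupts.h>
#include <hardware/cia.h>
#include <proto/exec.h>
#include <proto/cia.h>

static BOOL install_kbd_interrupt(struct Interrupt *kbd_int)
{
    struct Library *CIAABase = OpenResource("ciaa.resource");

    if (!CIAABase || AddICRVector(CIAABase, CIAICRB_SP, kbd_int) != NULL)
        return FALSE;                 /* resource missing, or the SP bit already has an owner */

    /* Vector installed for the keyboard serial-port interrupt; enable it
       explicitly (CIAICRF_SETCLR is the 0x80 "set" bit of the AbleICR mask). */
    AbleICR(CIAABase, CIAICRF_SETCLR | CIAICRF_SP);
    return TRUE;
}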

Now that the cia resource exists, I tried to create a keyboard hidd. I took the PC keyboard hidd driver, removed all PC-specific code and added Amiga keyboard handling (CIA interrupt + handshake). Add uselibs="rom" to your %build_module_simple invocation (check for example arch/i386-pc/drivers/keyboard/mmakefile.src); other libs that you will probably need are oop and amiga.

Thanks again, "rom" fixed it. "oop" was already included (I actually used the i386-pc keyboard driver as a base, which didn't have "rom"). Now it appears in the resident list during init, but kbd.hidd does not seem to be initialized. I must be missing something simple, again.

Your keyboard hidd class needs to explicitly register itself with the main keyboard class during module start (at least that's how it looks in pc-i386, see i386-pc/drivers/keyboard/startup.c). However, to do this the main keyboard class must already be up and running, and from your log it seems that your kbd.hidd initializes before keyboard.hidd. BTW, in pc-i386 the keyboard class has quite low priority (10), so it initializes even after intuition.library; this means the crash you are getting in input.device might not be triggered by the missing keyboard class. Exactly. Just decrease its priority by 1/10.

The keyboard hidd loads now (it was as simple as a too-high priority). It is possible to work without input HIDDs at all, you just won't get events and that's all; EFIKA works this way (it has no PS/2 hardware). I also implemented a dummy amigamouse.hidd (it does nothing yet); input.device does not call Alert() anymore.

"card.resource" from KickStart 3.0 wants to call Exec #136 (right after TaggedOpenLibrary, #135) but there is no AROS implementation. Or any doc anywhere that mentions it. In the 3.9 SDK it is listed as 'ExecReserved12'. The name of the function is ReadGayle(). It returns the ID of the Gayle chip if it is on board, or zero if there is no such chip.It first checks whether the normal custom chips are also mirrored at DE1000, if so, no gayle is present. Then, it writes a zero into the gayle ID register (the address above), and then performs eight consecutive reads from the same address. Bit 7 of each read contributes one bit to the overall ID. Placing bit 7 of the first read into bit 7, bit 7 of the second read into bit 6 etc... gives the full ID of the gayle chip. If the result is 0xff, there is no chip at this address and the function returns zero. Otherwise, it returns the read data. Clearly, card.resource requires the gayle ID to know whether the interface to the pcmcia port is available - and this is controlled by gayle. Just make empty function returning 0, and override it in amiga port.

The mouse cursor has hardware cursor support: via the Get method of your driver class you need to return this (one of them is enough):

        case aoHidd_Gfx_SupportsHWCursor:
            *msg->storage = (IPTR)TRUE;
            return;
        case aoHidd_Gfx_HWSpriteTypes:
            *msg->storage = vHidd_SpriteType_DirectColor;
            return;

Then you need to implement the following methods:

SetCursorShape
SetCursorPos
SetCursorVisible

If you need an example, please check nouveau driver: AROS/workbench/hidd/hidd.nouveau

Did you check that bitplane and copper dma gets enabled and copper list looks sane? (O-command in UAE debugger)

I now have quite interesting behavior: a software failure and reset if I run it normally in WinUAE (with fastest possible mode). It runs fine if I enable the debugger "trace" mode (any breakpoint active, and it starts collecting trace data that can be viewed with the H and HH commands).

The debugger in fastest possible mode probably delays interrupt detection by one CPU instruction. Cycle-exact mode works fine too. Can you disable the automatic reboot after an unexpected exception (start.c/Exec_FatalException)? It makes debugging annoying (I replace it with an infinite loop).

Could the gallium maintainer look over this? I don't know if we can safely use Disable()/Enable() in gallium; this will most likely kill or seriously hamper performance. I assume gallium causes problems for m68k. You can do two things:

a) disable build of libgalliumauxiliary, gallium.library, mesa.library, gallium.hidd and softpipe.hidd for m68k arch (using your own workbench-libs and workbench-hidds targets for example)

b) ifdef your patch so that it is only applied on the m68k architecture (so it would be PIPE_ATOMIC_OS_AROS_M68K), while other archs continue using the default approach (GCC intrinsics)

You can also try getting exec into real autoconfig fast RAM (after the ConfigChain() call) if you still have too much free time :) Are you talking about moving the "Boot Task" stack, or moving all of Exec out of ROM and into RAM?

If the former, it *should* happen automatically during the CreateProcess() of the first CLI program. I just need to fix up the memory allocation pointers so that the 'old' Boot Task stack is properly deallocated on RemTask(NULL).

Gfx

How do I set correct native mode IDs in the mode tag list? (hires, lace, PAL, NTSC, etc.) See the DDRV_MonitorID tag of AddDisplayDriverA(). Then you'll perhaps need to subclass the ModeID processing of graphics.hidd because you'll get another structure of ModeIDs. I suggest that you have one sync per monitor mode (I mean PAL is one sync, NTSC is another one, Multiscan is the third one, etc.). There will be only one pixelformat; other bits in the ModeID will be sync modifiers.

zzzz xx yy

zzzz is actually the driver number. They are assigned in turn, starting from 0x0010; this way numbers up to 0x000F are reserved. I did this especially for the Amiga driver.

xx and yy are the sync number and pixelformat number. I don't remember right now which is which, but it's not important in fact.

So zzzz is handled by graphics.library, and xx yy are handled by graphics.hidd.
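As an illustration only (the shift amounts follow the "zzzz xx yy" layout described above and are not copied from AROS headers; since the text leaves open which of xx/yy is the sync and which the pixelformat, the names here are assumptions):

/* zzzz xx yy -> monitor/driver number, sync number, pixelformat number */
#define MODEID_MONITOR(id) (((id) >> 16) & 0xFFFF)   /* zzzz: maintained by graphics.library */
#define MODEID_SYNC(id)    (((id) >>  8) & 0xFF)     /* xx: handled inside graphics.hidd     */
#define MODEID_PIXFMT(id)  ( (id)        & 0xFF)     /* yy: handled inside graphics.hidd     */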

The Amiga chipset driver needs to occupy the reserved region (0x0000 - 0x000A if I remember correctly). It is possible to explicitly request a driver number using the DDRV_MonitorID tag. Now you need to turn zzzz into zzzW, where W is also a mode-specific thing. If you look at AddDisplayDriverA() you'll see the hardcoded AROS_MONITOR_ID_MASK value. One of the variants is to implement a DDRV_IDMask tag which will allow overriding the mask value.

Okay, this is done. Now let's deal with xx and yy.

You'll likely have to override ModeID routines in your driver. To tell the truth i didn't look at this part at all, so i can't tell you if it would be efficient to incorporate this functionality into base class using additional attributes.

Now about alternative possible approach...

If you examine graphics.library code, you'll see a memory driver instance. Currently it's an instance of "hidd.graphics.graphics", it represents nondisplayable bitmap stored in RAM. All temporary planar bitmap objects are owned by this driver.

Perhaps it will be very efficient to instantiate chipset driver here. Since chipset driver is still a subclass of graphics.hidd, it will still support chunky bitmaps in RAM. But it will add missing functionality to planar bitmap.

The second point of this variant: since graphics.library knows about this particular driver, it can add its modes in a specific way and you won't need to introduce new tags to AddDisplayDriverA(). In fact I haven't implemented this tag yet because it's too easy to screw up the whole GFX system by improper use of it, and it is not needed by anyone except the chipset driver.

I believe each of them has its pros and cons. It's up to you to decide what to actually do with it; perhaps you'll invent your own way. Anyway, I believe you'll have to extend the GFX subsystem somehow. I noticed graphics internally checks here and there that the AROS ModeID parts are valid (the xx and yy parts must be less than the total number of syncs and pixelformats). This will get messy...

{ aHidd_PixFmt_ColorModel   , vHidd_ColorModel_Palette },
/* is the following the maximum, or do I need to have multiple pixformat/depth pairs? */
{ aHidd_PixFmt_Depth , 4 },
/* What do the following tags mean in planar mode? (They need to be included or the
   pixformat gets rejected.) They don't make much sense if the bitmap is planar. */
{ aHidd_PixFmt_BytesPerPixel, 1 },
{ aHidd_PixFmt_BitsPerPixel , 1 },
{ aHidd_PixFmt_StdPixFmt, vHidd_StdPixFmt_Native },
{ aHidd_PixFmt_BitMapType, vHidd_BitMapType_Planar },

They mean what they mean. Of course BytesPerPixel carries a dummy value. StdPixFmt in fact will always be vHidd_StdPixFmt_Plane.

For some reason the boot menu crashes somewhere inside intuition.library OpenScreen(). It seems to happen deep inside intuition_misc.c, in a DoMethodA() call (jsr to a non-existing address):

     D(bug("[intuition] RenderScreenBar: ScrDecorObj @ %p, DecorUserBuffer @ %p\n",
           ((struct IntScreen *)(scr))->ScrDecorObj,
           ((struct IntScreen *)(scr))->DecorUserBuffer));
     DoMethodA(((struct IntScreen *)(scr))->ScrDecorObj, (Msg)&msg);
     }

D(bug("[intuition] RenderScreenBar: Update gadgets .. \n"));

gfx hidd update committed, now it looks much much better.

  • 24-bit AGA colors supported (if AGA detected)
  • display size hardware values are not hardcoded anymore
  • use interlace mode (default screen size is 640x480+, more correct aspect ratio)

but

  • no shres/hires/lores and lace/non-lace automatic selection.
  • software blitting only (=slow)

A chunky-based driver (chunky shadow bitmap in RAM with chunky-to-planar conversion to the real bitmap in CHIP RAM) would be faster, except on the slowest real Amigas like the A500: chip RAM is slow, the blitter is slow.

IIRC some game's c2p routines on 040 or higher could do the conversion at copy speed (chipram write bandwidth).

Since nothing in AROS or graphics.library relies on direct bitmap access, you could even have a truecolor mode (the driver uses a real ARGB32 or similar chunky buffer as the shadow bitmap) which is dithered in real time to 256 or 16 (or whatever) colors while doing the c2p. For everything outside the driver it is a real truecolor mode. Though using planar custom chipset graphics as a dummy chunky C2P framebuffer is not Amiga-like enough for me and would be too slow on any real Amiga.

How do I get this monitor installed? monitorclass.c says you shouldn't touch it. I didn't find anything interesting in the existing PC hidds either. Nothing odd. Basically monitorclass allows handling multi-display input. Additionally it deals with the mouse sprite, because the graphics.library functions do not fit very well into a multi-monitor environment. Other stuff is just a reimplementation of the MorphOS API (to be compatible with something). So if there's no monitorclass object, Intuition does not know about your monitor (while graphics still does) and can't handle input on it. The number of planes was increased to 5 (if AGA) because the default sprite colors (17-19) are not updated if the display has fewer planes. How do I handle this correctly? (Does the cursor bitmap have a palette?) OpenScreen() should take care of it. No one ever tested it with Amiga hardware, so check it. IIRC I commented the mouse sprite part well.

Some private dependencies were removed, and the monitorclass object lookup in intuition was changed. Since they know the monitor ID and mask, it's possible to find the descriptor by ModeID.

  • First question: why do they have such a wide mask (0xF0000000)? I supposed the chipset driver would have 0xFFF00000.
  • And why do you use custom IDs for the UAE driver? In fact, I guess this is the reason: perhaps both drivers happen to have the same monitor ID (the ModeID part where the mask has 1's). So the FindMonitor() function in intuition finds a wrong monitor object, and intuition's input handler fails to look up the correct screen.

When the ID assignment scheme was designed, the idea was that different drivers get different IDs. Automatic assignment is tested very well, but it was never supposed to deal with such wide masks. This is why RTG IDs start at 0x00100000. In chipset modes the five rightmost digits vary and the 6th-8th digits are always 0; this is what the reservation is based on. It was never supposed to have 0xF0000000 masks.

I added this to openscreen.c and now colors 17 to 19 get allocated by intuition, but the SetColors bitmap method never gets any palette entries above 15.

#if (AROS_FLAVOUR & AROS_FLAVOUR_BINCOMPAT)
        spritebase = 16;
#else
        spritebase = (ns.Depth < 5) ? (1 << ns.Depth) - 8 : 16;
#endif

Everything you said has been done. That was the easy part (it was very well documented, and for example vesa.hidd does the same, as expected). The problem is something internal to graphics or intuition.

intuition/misc.c: this calls HIDD SetCursorPos() normally.
static void SetPointerPos(struct MonitorData *data, struct IntuitionBase *IntuitionBase)
{
    ULONG x = data->mouseX;
    ULONG y = data->mouseY;

    DB2(bug("[monitorclass] SetPointerPos(%d, %d), pointer 0x%p\n", x, y, data->pointer));
    if (data->pointer)
    {
        /* Take HotSpot into account */
        x += data->pointer->xoffset;
        y += data->pointer->yoffset;

        /* Update sprite position, just for backwards compatibility */
        data->pointer->sprite->es_SimpleSprite.x = x;
        data->pointer->sprite->es_SimpleSprite.y = y;
    }

    DB2(bug("[monitorclass] Physical coordinates: (%d, %d)\n", x, y));
    HIDD_Gfx_SetCursorPos(data->handle->gfxhidd, x, y);
}

void MonitorClass__MM_SetPointerPos(Class *cl, Object *obj, struct msSetPointerPos *msg)
{
    struct MonitorData *data = INST_DATA(cl, obj);

    data->mouseX = msg->x;
    data->mouseY = msg->y;
    SetPointerPos(data, IntuitionBase);
}

Above is called from intuition/misc.c:

void MySetPointerPos(struct IntuitionBase *IntuitionBase)
{
    Object *mon = GetPrivIBase(IntuitionBase)->ActiveMonitor;

    if (mon)
        DoMethod(mon, MM_SetPointerPos, IntuitionBase->MouseX, IntuitionBase->MouseY);
}

IntuitionBase->ActiveMonitor is always NULL. Same problem with all SetCursorXXX() functions, NULL ActiveMonitor -> do not call any mouse routines.

If I add bug("x=%d y=%d"); in this function (outside of if (mon)), I get debug messages as expected when I move the mouse.

Ok, maybe this will help you: when AddDisplayDriver is called by driver, the graphics.library registers a new driver and also notifies intuition.library that new driver was registered. This way intuition.library can create a new monitor object. To get this notification however, intuition.library must first register with graphics.library.

Maybe your initialization sequence looks like this:

graphics.library
amiga.hidd
intuition.library

I think in that case intuition might not be notified and might not know that there is a driver loaded.

Every graphics hidd function is implemented in a "generic" way in the superclass using PutPixel/GetPixel from the concrete implementation, so you need to implement all drawing functions in the hidd to stop the superclass from falling back to PutPixel. (I would suggest starting the "improvements" with CopyBox and then continuing with PutImage and PutImageAlpha.) CopyBox is not in the bitmap class.

There are two ways that the cursor can work:

  • emulation on fakehidd
  • native mode

As far as I remember, for the emulation to work your driver needs to be a framebuffer one (aHidd_Gfx_NoFrameBuffer = FALSE) and you need to implement the Show method to call the super method. This emulation simply draws the cursor on your screen for you (restore the old backed-up 64x64 tile, back up the new 64x64 tile, draw the cursor).

For native mode, you need to implement SetCursorVisible, SetCursorPos and SetCursorShape and do the "drawing" yourself in SetCursorPos. You need to mark your driver as NoFrameBuffer = TRUE, as well as tell the system that you support a hardware cursor with aHidd_Gfx_SupportsHWCursor = TRUE. Also, in the Show method you must return the received bitmap and not call the super method. Never touch anything above the gfx driver (the hidd class) to get the cursor working correctly.
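A minimal sketch of the native-mode Show() behaviour just described (the pHidd_Gfx_Show message layout and the class/method naming follow the usual AROS conventions and are assumptions, not copied from a real driver):

/* Native hardware-cursor mode: return the received bitmap, never call the super method. */
OOP_Object *MyGfx__Hidd_Gfx__Show(OOP_Class *cl, OOP_Object *o, struct pHidd_Gfx_Show *msg)
{
    /* ...reprogram the display hardware to show msg->bitMap here... */
    return msg->bitMap;
}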

Picasso96 supports following high color modes:

RGBFB_R5G6B5PC,        /* HiColor16 (5 bit R, 6 bit G, 5 bit B), format: gggbbbbbrrrrrggg */
RGBFB_R5G5B5PC,        /* HiColor15 (5 bit each),                format: gggbbbbb0rrrrrgg */
RGBFB_R5G6B5,          /* HiColor16 (5 bit R, 6 bit G, 5 bit B), format: rrrrrggggggbbbbb */
RGBFB_R5G5B5,          /* HiColor15 (5 bit each),                format: 0rrrrrgggggbbbbb */
RGBFB_B5G6R5PC,        /* HiColor16 (5 bit R, 6 bit G, 5 bit B), format: gggrrrrrbbbbbggg */
RGBFB_B5G5R5PC,        /* HiColor15 (5 bit each),                format: gggrrrrr0bbbbbgg */

The UAE default is RGBFB_R5G6B5PC (no need for conversions, it matches the normal little-endian 16-bit mode) and I think most UAE ports don't even support the other high color modes.

Is it possible to define this mode for a big endian CPU (like 68k)? I don't see how a single color component shift and mask can work. I know this isn't required, 8-bit and 32-bit is more than enough, but I want to have a complete implementation :)

I also have another question: BitMap_GetImage and PutImage (+LUT variants), what is the format of the "pixels" array? Always the same as the bitmap, or always 32-bit or 8-bit? I didn't find good enough examples or documentation; these are the last remaining unimplemented methods. The format of the incoming/requested pixel buffer is available in the message: msg->pixFmt.

There are two pseudo formats you can get: Native and Native32. As far as I understood the gfx system, Native means "same as the bitmap object on which the method is called" and Native32 means "same as the bitmap object on which the method is called, but packed in 4-byte pixels".

If we have 8-bit (256 color) bitmap and Native32 is requested, how to fill the pixels array? (1 byte of data + 3 zeros? 3 zeros + 1 byte? some color conversion?). There are already methods in base class which you can use:

GetMem32Image8
ConvertPixels
PutMem32Image8

Check /workbench/hidds/hidd.nouveau/nouveau_accel.c

HiddNouveauWriteFromRAM
HiddNouveauReadIntoRAM

The cascade of switch/case you will see in these functions is common to every driver's GetImage/PutImage, I think. Let's take for example the GetImage method. It's called on a bitmap object, requesting that a specified box (msg->x, msg->y, msg->width, msg->height) of this bitmap object be transferred into a memory buffer (msg->pixels) which has a given stride (msg->modulo) and a given pixel format (msg->pixFmt). To do this, you first check whether the given (requested) pixel format is Native, Native32 or something else. In the first two cases you then check the bytes-per-pixel value and based on this you decide which of the existing CopyMemBoxXXX or GetMemXXXImageYYY methods to use. If the requested format is different than Native or Native32, you need to use the ConvertPixels method, which will do the conversion for you. When implementing your driver you don't have to worry how the memory is organized in the various formats during conversion, because this is already implemented in the CopyMemBox/GetMemImage/ConvertPixels methods.
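A structural sketch of that cascade (the message fields are the ones named above; the actual CopyMemBoxXXX/GetMemXXXImageYYY and ConvertPixels invocations are left as comments, since their exact stub signatures should be taken from the nouveau code referenced above):

VOID MyBM__Hidd_BitMap__GetImage(OOP_Class *cl, OOP_Object *o, struct pHidd_BitMap_GetImage *msg)
{
    switch (msg->pixFmt)
    {
    case vHidd_StdPixFmt_Native:
    case vHidd_StdPixFmt_Native32:
        /* Same layout as this bitmap: check the bytes-per-pixel of the bitmap's own
           pixelformat and call the matching CopyMemBoxXXX / GetMemXXXImageYYY helper
           to copy the (msg->x, msg->y, msg->width, msg->height) box into msg->pixels,
           using msg->modulo as the destination stride. */
        break;

    default:
        /* A different format was requested: hand the work to the ConvertPixels
           method, which knows how both pixel formats are organized in memory. */
        break;
    }
}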

Please note that generic conversion methods used in ConvertPixels are somewhat slow. There are more specific methods which are faster. You can switch to those methods by running tests/patchrgbconv. You can also do like I do in nouveau and patch those methods in the driver itself so that users don't have to worry about this.

Since we are talking about patches I understand that you don't want to write drivers for 3rd party Amiga hardware but do you think there would be any solution to use Picasso96 drivers on AROS m68k? I'm not asking you to do it, just want to know if installing Picasso96 on top would work or if the code you wrote allows using .chip/.card files so RTG can be used on real Amigas. I understand that original drivers of Deneb, scsi, and others should work OOTB in AROS m68k. Uaegfx hidd uses .card API internally to communicate with UAE side. It should be quite easy to modify it for "real" Picasso96 drivers and cards. (I could do this, I do have an A3000 but no expansion cards. Never really cared about display cards..) Most drivers should work fine.

Screen bitmaps are locked (the bm->locked variable) when visible (to prevent them from being swapped) but later never ever get unlocked, even if some other screen bitmap has become the visible one. So all screen bitmaps stay in VRAM forever. IIRC:

speed:

If there are gfx functions which the driver does not handle itself, the fallback functions will often be slow or very slow. Even more so with RAM/VRAM-swappable bitmaps, because of the required bitmap locking. The fallbacks are not optimized and they can never really be super fast, because the AROS gfx system is designed in such a way that it does not rely on having direct access to the gfx driver bitmap's (or other bitmaps') pixel data. It could try to obtain direct access in the fallback functions, but at the moment it mostly doesn't, and so just uses indirect access to pixel data through putpixel/getpixel or putimage/getimage.

One thing which the uaegfx driver does not handle itself at the moment is blits where the source bitmap pixfmt is different from the destination pixfmt and at least one of the bitmaps is in RAM instead of VRAM. For the COPY drawmode/minterm at least, it should be possible to handle/implement this using HIDD_BM_ConvertPixels (if both bitmaps are uaegfx bitmaps) or HIDD_BM_PutImage/GetImage (if one of the bitmaps is a foreign one). Speaking about drawmodes, at least INVERT and CLEAR (a bitmap allocated with BMF_CLEAR may trigger this) should be handled, too.

pixel fmt conversion routines: IIRC AROS 68k only has some of the improved ones (the stuff which used to be SYS:tests/patchrgbconv) built in (space constraints in ROM).

graphics.library/Text(): the code which puts the single font chars into the temp bitplane is very unoptimized. This template bitplane is what then gets rendered on screen using BltTemplate(). If someone wants to improve this, he can try to compile/optimize it (rom/graphics/text.c) on AmigaOS (maybe even AOS4 or MorphOS) with a little program which patches it into the system with SetFunction(). It doesn't rely on AROS internals, so no problem (disable the antialiasing-related stuff on AOS3). Then compare how much slower it is than the original AOS function. Then optimize until it doesn't suck anymore... then contribute back to AROS.

Planar bitmaps: internally most AROS bitmaps are bitmap objects (BOOPSI-like), but not all are. Handmade ones (like those set up with InitBitMap()) are not, and IIRC even planar bitmaps allocated with AllocBitMap() are not bitmap objects. So during gfx functions these are wrapped into bitmap objects on the fly, because the gfx system only works with these bitmap objects, not with "struct BitMap *" as known from graphics.library. One thing which I have noticed is stupid/slow in this wrapping is that the planarbm class in the SetBitMap() method always calls RegisterPixFmt to register the pixfmt of the wrapped planar bitmap. So during each and every gfx function involving a planar bitmap, a pixfmt is re-registered. This should be changed to do so only if a matching pixfmt wasn't registered already.

icons: loading of stuff like this (also fonts) is slow(er) in AROS, because AROS can be 32 bit, 64 bit, big endian, little endian and there is same code used for all possible variations. In AOS (or AOS4 or MOS) they have endianess fixed to "big endian" and the system is fixed to 32 bit. So they can basically read in this stuff as one chunk like it was one struct in memory. AROS m68k-amiga could have optimized loading, too (#if AROS_BIG_ENDIAN etc. do it like they do in AOS68k land), but doesn't at the moment.

Maybe I missed something important but it seems gfx hidds need to specify every supported depth value. For example if you want to fully support AGA, hidd should have 8 different pixelformats (and OCS/ECS 4 or 5 depending on horizontal resolution). Shouldn't it automatically generate all lower depth pixelformats?

This really slows down graphics on real Amigas, especially on non-AGA machines because now it always forces 4 planes hires which eats all chip ram bandwidth and takes too much memory.

Another big compatibility problem is wrong ModeIDs. This needs a working solution. I also would like to have a solution to the ModeID issue; I don't really see the point of ModeIDs dynamically generated by some outside entity. The main point is: the same driver can be instantiated more than once. For example, you can have two similar graphics cards, each of them with more than one output. This way, in the case of two Radeon cards you would have four instances of the same driver. What mode IDs would you assign? And how? In CyberGraphX you create mode IDs manually using a preferences program. I decided that this is not comfortable and implemented dynamic generation of monitor IDs (the most significant word of the mode ID). The rest (pixelformat and sync numbers) is in fact left from the old implementation; I invented nothing new there. Perhaps it still has some drawbacks, but I believe it's acceptable. Anyway, the programmer doesn't have to assign IDs manually.

Added more modes to RTG driver and suddenly all other modes got new IDs... Also apparently at least some m68k RTG programs decide that too "small" modeid numbers = native planar modes.. What is "too small" ?

Now about what you want to know... When I implemented the new ID scheme, I actually left space for Amiga IDs (read: for you). For clarity: a display mode ID is divided into two halves; the most significant half is the "monitor ID" and the least significant one is the raw mode ID itself (sync + pixelformat). The HIDD knows only about the latter half; the monitor ID is maintained by graphics.library.

If you look at the code carefully, you'll see that ID numbering starts from 0x0010. No RTG drivers are added with IDs less than this one. Your driver needs to occupy this reserved space. Additionally it needs to specify a 0xFFF00000 mask for its modes, since Amiga mode IDs can be represented as 0x000XXXXX.

Currently AddDisplayDriverA() lacks the possibility to specify a mask. However, you can specify a starting ID (using the DDRV_MonitorID tag). There are the following possible solutions:

  • (a) Implement a DDRV_IDMask tag.
  • (b) Implement a single boolean tag like DDRV_AmigaModes, which will automatically assign Amiga modes to the driver.
  • (c) Make graphics.library on m68k explicitly instantiate the chipset driver instead of the dummy memory driver as its default driver.

I don't like (a) because improper use of this tag can easily screw up graphics.library. I would consider only options (b) or (c). Perhaps (c) is even better because it would automatically make all planar bitmaps belong to the chipset driver. It's actually up to you to decide what to do and implement it. But please don't break the whole ID scheme; I thought a lot about it. I asked others before implementing it, but no one came up with an alternative.

The ModeID is generated by "someone else", and any modification to the supported modes renumbers the ModeIDs (which is especially a problem under emulation, because the user can select which bit depths are enabled, or whether you want BGR or RGB, etc.).

What concerns RTG and non-RTG modes (I think it's the ModeID he's talking about):

everything below monitor ID $00100000 is an Amiga mode
all other IDs stand for RTG
if you find anything over $00100000, use the CGFX functions to identify the gfx card
P96 supports the CGFX functions
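Expressed as a trivial check (illustrative macro, not an existing AROS definition):

/* Anything whose monitor ID part is below 0x0010 (i.e. the whole mode ID is
   below 0x00100000) is a native chipset mode; everything else is RTG. */
#define IS_NATIVE_MODEID(id)  (((id) & 0xFFF00000UL) == 0)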

Myself, I thought CGX supplies static ModeIDs, but on P96 the first four digits that define the resolution depend on the order in which the modes have been declared, so they are semi-random; the last four, which describe depth/bit order, are hardwired to the mode they represent. Here are the P96 ModeID last-four-digit values that represent particular bit orders on each physical resolution, as far as I've been able to find out on my Voodoo3 system.

xxxx1000 = 8bit
xxxx1100 = 16bit pc
xxxx1102 = 16 bit
xxxx1201 = 24bit
xxxx1300 = 32bit argb
xxxx1303 = 32bit bgra

0x8000100e which is DIPF_IS_HAM | DIPF_IS_PF2PRI | DIPF_IS_DUALPF | DIPF_IS_EXTRAHALFBRITE (+ bit 31 set which apparently is a game bug..)

Unfortunately this means the AOS original ModeIDs need to be emulated somehow, even when using an RTG display (all native chipset special bits need to be zeroed in this situation).

How to add "standard" monitor ModeIDs that aros graphics system can promote to standard hidd modes? (Just like real m68k-amiga RTG hardware does when mode promotion is enabled). See updated AddDisplayDriverA() documentation. I added DDRV_ID_Mask tag for this. "Note that for this feature to work correctly, you also need to override mode ID processing in your driver class. Default methods provided by hidd.graphics.graphics base class suppose that the whole lower word of mode ID specifies the display mode."
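A hedged sketch of the corresponding tag list (DDRV_MonitorID and DDRV_ID_Mask are the tags discussed above; the header location and the concrete monitor ID value are assumptions/placeholders, not what the real chipset driver uses):

#include <utility/tagitem.h>
#include <graphics/driver.h>    /* assumed location of the DDRV_#? tag definitions */

struct TagItem chipset_drv_tags[] =
{
    { DDRV_MonitorID, 0x00000000 },  /* placeholder: claim a slot in the reserved 0x0000-0x000F range */
    { DDRV_ID_Mask,   0xFFF00000 },  /* native Amiga mode IDs have the form 0x000XXXXX                */
    { TAG_DONE,       0          }
};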

What exactly does overriding mode ID processing mean? Overload methods like GetMode, GetSync, NextMode, etc.

For example PowerPacker does this:

  • calls QueryOverscan(0x29000,&rect, OSCAN_TEXT); which fails.
  • PP ignores return code, rect contains random memory contents.
  • copies garbage rect contents to NewScreen structure screen position and size.
  • adds some OpenScreenTags() tags, including modeid tag = 0x29000
  • of course screen fails to open (non-existing mode id, totally bogus screen width and height)
0x29000 = PAL_MONITOR_ID | HIRES_KEY.

At least PAL_MONITOR_ID, NTSC_MONITOR_ID, HIRES_KEY and LORES_KEY combinations should be always supported and return matching resolution. There is some promotion code in OpenScreen() (left over from MorphOS), check it and fix if needed (see openscreen.c, line 597). I guess this is easy and simple, mark one driver as "preferred" (can drivers have priorities?). They can. However currently they don't.

If an "other" driver ModeID is asked for, just find a similar enough mode from the preferred driver's mode list? Currently BestModeIDA() looks through all drivers; perhaps we should really have some user-specified "default" driver. We should then develop some concept for user preferences regarding monitors. This could include specifying a preferred monitor, advanced mode promotion, describing the placement of monitors, etc.

Apparently you need to define pixel format for all plane counts instead of just one that describes max supported plane count. Which means for example there needs to be 8 different modeIDs for single AGA resolution. (320x256x1, 320x256x2,... 320x256x8 and same for all other resolutions)

It is quite difficult to fit 8 ModeIDs (or 4/5 on OCS/ECS) into the single officially documented ModeID bit. Maybe it's possible to change the AROS code so that the automatic ID generation only creates IDs above 0x10000000, and everything below can be created through manual IDs? If only 320x256x8 is defined, all lower plane mode requests are always "promoted" to 8 planes.

One more issue needs to be fixed before native modes can work: aoHidd_BitMap_Depth is needed (which is for some reason commented out and not handled everywhere, like in AllocBitMap()). AllocBitMap() sends width, height and pixelformat to the HIDD, but not the depth, which is really needed to support m68k-amiga chipset modes. They need one pixelformat/ModeID per resolution, not one pixelformat per (resolution * available depths). If you look at line 392 in allocbitmap.c you will see that the depth is actually copied from the pixel format (or from the bitmap object - this part I don't understand). According to my understanding, it's the driver's role to correctly handle the situation:

AllocBitmap(depth=1 bit, friend bitmap depth=2)

It should allocate a bitmap with a depth of 2, ignoring the passed depth completely. This code is in the root GraphicsClass (line 1177). I put it in allocbitmap because the ModeID is also copied from the friend bitmap there (not in the driver), since technically a chunky ModeID includes the depth too. I also compared your bitmap New method with my nouveau New method and I noticed that you use:

OOP_GetAttr(o, aHidd_BitMap_Depth, &depth);

while I use

 OOP_GetAttr(o, aHidd_BitMap_PixFmt, (APTR)&pf);
 OOP_GetAttr(pf, aHidd_PixFmt_Depth, &depth);

This means you are taking the depth that was passed (see the root BitMap class New method) while I'm taking the depth from the pixel format of the friend bitmap. Is this because of the difference in handling planar and chunky modes? Yes. A planar mode's depth is separate from the pixelformat (ModeID). A planar mode's pixfmt depth equals the pixelformat's maximum supported depth (different on OCS/ECS and AGA).

Any non-driver code that assumes pixfmt_depth = bitmap's current depth can't work correctly with planar modes. I complained about this AOS incompatible design issue few times in this ML until I re-added commented out aHidd_BitMap_Depth and more, did you miss it? :)

Am I doing something wrong, or is this a missing gfx hidd feature? btw, afaik at least the bitplane mask field in the rastport should be passed to the hidd RectFill() and friends if we are going to handle all low-bitplane tricks; in the worst case a minterm array is needed (which would replace drawmode and mask in complex situations) if the DPaint screen mode requester is supposed to work correctly.

HIDD_BM_ConvertPixels() - it doesn't seem to fully convert from chunky to planar. ie vHidd_StdPixFmt_ARGB32 -> vHidd_StdPixFmt_Plane does the LUT transformation, but not repacking the pixels into planar mode. Is this the job of HIDD_BM_ConvertPixels(), and if so, are there any 'well known' Chunky->Planar routines in AROS I could reuse?

aHidd_PixFmt_BitsPerPixel: Should be the # of bits per pixel, obviously, but appears only to be set, never used. aHidd_PixFmt_BytesPerPixel: What should this be set to for Planar formats? 1? 0? (- Depth)? aHidd_PixFmt_Depth: This should be the number of planes for Planar. Shouldn't this be 0 for Chunky?

In fact this can be an incomplete implementation. Historically vHidd_StdPixFmt_Plane referred only to single-plane monochrome bitmap, it was used for rendering fonts. Later it was extended, and about a year ago i dropped original font drawing routines.

Proposing...

Chunky Formats:

 aHidd_PixFmt_Depth == 1
 aHidd_PixFmt_BytesPerPixel == Number of bytes to store a pixel, if the pixel is byte aligned; 0 for 'packed pixel' (ie 2-color bitmap non-planar)
 aHidd_PixFmt_BitsPerPixel == Number of valid bits per pixel chunk. If BytesPerPixel == 0, this indicates the packing of the packed pixels.

Examples:

  • Hercules monochrome: Depth = 1, BytesPerPixel = 0, BitsPerPixel = 1
  • CGA 4 color mode: Depth = 1, BytesPerPixel = 0, BitsPerPixel = 2
  • VGA 256 color mode: Depth = 1, BytesPerPixel = 1, BitsPerPixel = 8
  • RGB 15 bit mode: Depth = 1, BytesPerPixel = 2, BitsPerPixel = 15

Planar Formats follow the same rules, but note that there can be more than one bit per pixel per plane.

 aHidd_PixFmt_Depth = 1..8 (Number of planes)
 aHidd_PixFmt_BytesPerPixel == Number of bytes to store a pixel, if the pixel is byte aligned; 0 for 'packed pixel' (ie 2-color bitmap non-planar)
 aHidd_PixFmt_BitsPerPixel == Number of valid bits per pixel chunk. If BytesPerPixel == 0, this indicates the packing of the packed pixels.

Examples:

  • Amiga 4 color mode: Depth = 4, BytesPerPixel = 1, BitsPerPixel = 1
  • Amiga 256 color mode: Depth = 8, BytesPerPixel = 1, BitsPerPixel = 1
  • A made-up 24+8 bit RGBA format, where there are 8 bits of a single color or alpha per plane: Depth = 4, BytesPerPixel = 1, BitsPerPixel = 8
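Following the proposal above, a pixelformat tag list for an 8-plane AGA mode might look like this (sketch only; the colour shift/mask tags and sync setup are omitted):

struct TagItem aga8_pftags[] =
{
    { aHidd_PixFmt_ColorModel   , vHidd_ColorModel_Palette  },
    { aHidd_PixFmt_Depth        , 8 },   /* number of planes                   */
    { aHidd_PixFmt_BytesPerPixel, 1 },   /* pixel chunk is byte aligned        */
    { aHidd_PixFmt_BitsPerPixel , 1 },   /* one valid bit per pixel, per plane */
    { aHidd_PixFmt_StdPixFmt    , vHidd_StdPixFmt_Native    },
    { aHidd_PixFmt_BitMapType   , vHidd_BitMapType_Planar   },
    { TAG_DONE                  , 0 }
};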

ConvertPixels for planar, and I'm at a bit of a quandary as to what certain fields mean. Maybe the list is incomplete, but isn't there anything like "BytesPerLine" or "Width including pad"? There is a gettable aHidd_BitMap_BytesPerRow. There is however no support for padding. Once I needed this for a planar bitmap and I introduced aHidd_BitMap_Align. Alignment does not affect the BytesPerRow value, which is always returned as Width * BytesPerPixel (taken from the bitmap's pixelformat). This is likely wrong and alignment should be accounted for. Also, BytesPerRow will be calculated wrongly for planar bitmaps; the attribute needs to be overloaded in the planarbm class. The way I see it, ConvertPixels was never meant for things like planar to chunky conversion, but only for LUT to LUT, LUT to TRUECOLOR, or TRUECOLOR to TRUECOLOR, where LUT basically implies a chunky pixel layout, with ConvertPixels being there to support functions like WritePixelArray(), WriteLUTPixelArray(), ReadPixelArray() and WriteChunkyPixels().

So, if I understand you correctly, GetImage/PutImage for Planar should handle the chunky/planar conversion, and use the current ConvertPixels as a helper, instead of calling the root class's GetImage/PutImage and its usage of ConvertPixels. Currently, PlanarBM is calling the root class for non-native PixFmt types on PutImage. You imply that PlanarBM should be handling the conversion itself for *all* incoming types, correct?

All the code can stay as it is now, and it should not matter what the bitmap looks like internally (or is there something which doesn't work?), because graphics.library and the hidd stuff don't rely on it. When code needs to look at bitmap pixels, it is all done using pixel buffers == pixel arrays == chunky arrays, and functions like PutImage/GetImage, even for planar bitmaps with pixfmt == Native or Native32, will write/read chunky pixel arrays/buffers. (see rom/hidds/graphics/planarbm.c)

So if one writes a gfx driver with a planar bitmap class then PutImage/GetImage and pixfmt==Native|Native32 is not supposed to use/return planar data, but chunky data.

Also in theory there is no single planar format. Things like interleaved or not. MSBFirst/LSBFirst (see X11). Atari like planar formats. Implicit alignment is an option, but if the API is legacy-free, indeed it is more convenient to have BytesPerRow and Pad separately. When copying from/to a bitmap, normally the pad is just wasted space. Plus, having it declared separately avoids mistakes in calculation.

This would also help to distinguish planar vs. chunky/true-color formats. aHidd_PixFmt_BitsPerPixel: should be the number of bits per pixel, obviously, but appears only to be set, never used. Really? It can be used at least for comparison when registering new pixelformats. Don't remove anything unless you are sure that you have studied all the code. I remember I had problems with changes there; it took significant time to settle it all down.

aHidd_PixFmt_BytesPerPixel: What should this be set to for Planar formats? 1? 0? (- Depth)? I would say this is undefined for planar bitmaps. Let's set it to 0. Just we need to be sure that it breaks nothing.

aHidd_PixFmt_Depth: This should be the number of planes for Planar. Shouldn't this be 0 for Chunky? No, It's used in many places to determine number of colors. I saw it the other way - Depth in struct BitMap was the number of Plane[] array elements that were valid. In Chunky, only the first one was valid, therefore Depth = 1.

How can I call the RTG driver's (= any non-custom-chipset driver in this case) Hidd_Gfx__Show(FALSE) when the custom chipset driver's Hidd_Gfx__Show(TRUE) is called (and vice versa)? There's no such relation. Drivers are completely independent of each other. You can use both displays simultaneously, one on the chipset and another on a GFX board. The problem is that something must tell the RTG driver to become idle and enable chipset mode passthrough when a custom chipset mode is selected. This is not emulation-specific.

This is needed to switch between custom/RTG modes under emulation (and without switching cables on real Amigas). Well, you can make AROS unload the chipset driver when any other driver is loaded: set DDRV_BootMode to TRUE when adding it, and you're done. The chipset driver must never be unloaded; it must always be available in case the user runs a program that specifically requires a chipset mode (for example old games or WHDLoad or whatever).

Or maybe we need something more powerful than Show() so that possible non-display ram bitmaps are also freed to keep memory usage lower? Show() displays only one bitmap. Other bitmaps go offscreen. There's ShowViewPorts(), it takes a list of bitmaps. It's up to driver to track their state. However you can't free any bitmaps, AROS may use them. Only the creator can free bitmaps. Chipset display is hidden "behind RTG" (not visible) because hardware (emulated or not) thinks RTG is the currently active mode.

Currently any attempt to open a custom chipset mode opens it "behind" the RTG mode. Do you mean some emulation issue? Does the emulator attempt to display both modes in the same window? Well, emulation sucks. In order to overcome this, you can teach the driver to shut down the emulated hardware when nothing is displayed (Show(NULL) is called). This would tell UAE to close the emulated RTG display. The Windows-hosted driver does the same thing in order to avoid empty windows - it just closes the display window when there's nothing to show in it. On real hardware this is not an issue, it is an advantage instead: you can work with two monitors this way and display different screens on them (thanks to different mode IDs).

The problem is the lack of Show(NULL). I'll implement it in WinUAE if your real physical monitor disappears when you call Show(NULL). In fact, during LoadView() every driver will get Show() if something changes on it. If there was something on the display and it goes away, the driver will get Show(NULL). It still does not have anything to do with my problem. RTG and native are being considered different physical monitors (at least we will agree on that!) = no one is going to call RTG.Show(NULL) when something opens on the native screen (Native.Show(BitMap)). I only need to know how to call another driver's (the name is known, at least) Show() method, directly or indirectly, from the other driver's Show() method, so that both are never active simultaneously. Cleanly. Let's call it blanking the display instead of disabling it or turning it off, if we need some kind of stupid analogy :)

btw, the custom chipset driver ModeIDs are now correct (no more generated fake IDs). And what about those real physical RTG cards that have a custom chipset passthrough cable and a relay or electronic switch? This is what UAE emulates, not two physical displays. In other words: we have a single monitor shared by two displays; if the topmost screen is an RTG screen -> RTG enabled, if the topmost screen is a chipset screen -> RTG disabled, chipset screen passthrough gets enabled. This is not (fully) supported. It is a hack, but that's how it is supposed to work in this situation. I don't care if it is a strange or stupid way of design today, but Picasso96 was designed to support passthrough automatically via software, and some cards (and UAE) support and expect it.

There can be a problem with "cleanly". The hacks I can think of are:

a) put both driver codes into one hidd and do a lot of "if (native) else"

b) add some dummy module that will be known by both drivers and will arbitrate:

graphics -> native.hidd (Show) -> module
module checks if rtg.hidd was registered before
if yes: module -> rtg.hidd (Show(NULL))
then: native.hidd -> module (register as being shown)
native.hidd -> continue with Show

However I'm not sure if it is safe to call Show(NULL) from this module.

PS. For screen dragging you will have to implement ShowViewPorts anyhow

Too ugly :)

Can't you just enumerate all monitors and send some kind of "power save"/"blank" signal to all monitors except the one that matches the current ModeID (which then internally gets converted to Show(NULL))? Or is there some kind of LoadView(MonitorID, NULL) call? Not that I know of - your use case goes kind of against the system design. The assumption so far was that every driver instance outputs its "signal" to a separate physical display device and the drivers don't know about each other's existence, while in your case two separate drivers need to output to one physical display device, but can never output at the same time. Unless you want to redesign how the gfx driver system works, you will have to cope with a b)-like "approach".

Couldn't it be implemented in intuition.library/graphics.library so that they keep track of displays on the same monitor and on different monitors, and send the Show(NULL) when one driver on the same monitor gets priority over another? This is probably the "correct" solution, but I am not going to request a (possibly complex) new core feature that only (?) m68k-amiga (with RTG) needs.

Automatic monitor switching on Amiga hardware is a normal feature and should be properly supported by AROS, not by hacks.

We have two classes of display devices: Drivers and Monitors. Monitors describe the allowed display mode timings (but not color depth). Drivers are connected to Monitors, and use the Monitor's timing lists to determine which of their possible modes are valid. Multiple Drivers can be connected to a single Monitor instance, to represent passthru systems.

Let's imagine a A4000 with a Picasso96 style card, and a separate (ATI Mach64) card on a PCI busboard, and two monitors, a 1084 and a standard VGA:

Driver 1 (P96) \
                --->  Monitor 1 (Commodore 1084)
Driver 2 (AGA) /

Driver 3 (ATI) ---->  Monitor 2 (VGA)

In this case, we'll want the 'blanking' behavior Toni was talking about for the P96 and AGA. Now, let's suppose someone (me, for instance) writes a driver for a Voodoo 1 3D card, which has a VGA passthrough, and puts it in the A4000:

Driver 1 (P96) \
                --->  Monitor 1 (Commodore 1084)
Driver 2 (AGA) /

Driver 3 (ATI)    \
                   --->  Monitor 2 (VGA)
Driver 4 (Voodoo) /

In this case also, we want the blanking behavior. The Voodoo case is also valid for i386, by the way.

No one is going to call RTG.Show(NULL) when something opens on native screen (Native.Show(BitMap))

This is wrong. All drivers get something during LoadView(). When a driver had a screen on it, and this screen is closed, the driver gets Show(NULL). At least it should be so. If this doesn't happen, it's a bug. See driver_LoadView() routine.

Or are you talking about the situation where you have two screens, one on the board and one on the chipset, and try to depth-arrange them? Hm... The system really doesn't support such a case. It looks like the driver needs to know somehow that its screen is behind the chipset's one. Exactly. (Or vice versa; it is important to select the "monitor" that has the current topmost screen.)

It is possible to implement it without hacks. However, to do this, two things must be fulfilled:

1. The driver needs to know ViewPort being shown.

2. Chipset driver needs to fill in copperlist pointers for its ViewPorts.

I guess, (2) can be done by adding MakeVPort method to graphics HIDD. (1)... Don't know yet. Perhaps ShowViewPort() is bad. If these two conditions are met, RTG driver can check GfxBase->View and check if its ViewPort is behind or in front of chipset one. Only chipset ViewPorts will have nonzero copperlist pointers.

r38178 "Interrupt immediately if the callback returned error": this test causes a blank screen on m68k-amiga when using UAEGFX, because the chipset mdd comes first and fn() returns zero, which then causes the following driver (UAEGFX) to always be skipped. Removing the test allows UAEGFX to work again. (Wrong test? success == 0, not error; at least driver_LoadViewPorts() returns zero on success.) Of course, 0 stands for no error; it's a bug. Remove the exclamation sign.

I'd like to select nice 640x480x8 or similar mode for bootmenu and initial WB screen when RTG is available (instead of boring 640x256x2, hires and 4 planes on OCS/ECS is horribly slow).

A probably too ugly way is to mask the monitor ID part of the ModeID and check if it is nonzero. Unfortunately the DIPF_IS flags seem to be useless.

You can look at how IsCyberModeId() is implemented in cgx: OOP_Object *pf = (OOP_Object *)info.reserved[1]; Yes, I don't think this is the way to go either. However, IMHO this is not a proper way to go; it does not prevent BestModeID() from selecting chipset modes. Something needs to be invented. Do the "other" Amiga-like OSes have some kind of DIPF_CHUNKY or similar flag? Annoyingly there is DIPF_AA and DIPF_ECS but no DIPF_OCS or similar. I'd like to have a quick and clean way to detect the ModeID type in ROM code (to select the "best" boot menu and initial shell modes, early alerts?, before the screen is open).

    /*
     * Set the property flags.
     * Note that we enforce some flags because we emulate these features by software.
     * DIPF_IS_FOREIGN is actually set only by Picasso96 and only for modes that are
     * not graphics.library compatible. Many m68k RTG games rely on this flag not being
     * set.
     */
    di->PropertyFlags = DIPF_IS_WB | DIPF_IS_SPRITES | DIPF_IS_DBUFFER | HIDDProps.DisplayInfoFlags;

I think this should be moved to HIDD_Gfx_ModeProperties () so that it can be overridden by custom chipset driver. These flags are intentionally forced for features that are simulated by graphics.library. SPRITES is set because graphics.library can emulate mouse sprite. WB is set... Why not ? DBUFFER is set because i emulate this. I know, the emulation sucks, and in fact this should be handled in the driver.

DIPF_IS_WB should not be set in special custom chipset modes. (HAM, etc..). Also it seems DIPF_IS_DBUFFER is not set in native Amiga RTG modes. I haven't 100% confirmed this yet but if it is true, it is guaranteed some (stupid) software will assume DIPF_IS_DBUFFER set = always chipset mode..

  • Remove DBUFFER enforcement.
  • Add one more field to ModeProperties, like ResetFlags. It will contain bits that need to be RESET in default flag set. If you SET DIPF_IS_WB there, it will be UNSET in the resulting flags set. This will provide good backwards compatibility with what is already done.

What do you think, what if we sort drivers not by ID number, but by some priority? If we assign for example -128 to chipset driver and 0 to others, from two modes with equal parameters BestModeID() will select one with higher priority. However, if for example we request 320x256, it won't be promoted to RTG if RTG drivers only offer 320x200 and 640x480. PAL 320x256 will still be a best match. Native Amiga mode promotion is very simple, BestModeID() only returns RTG modes. It never returns chipset modes (at least without using some configuration program). This way OpenScreen() without modeid always gets RTG mode, only way to get mode you want (including chipset modes) is to specify wanted ModeID. 320x256 will return something larger (maybe 320x400 in worst case). This is still the expected result.

So plain priorities won't fix this problem completely. One possible option is to add some "don't autoselect me" flag to AddDisplayDriverA(). Another option is to let the user select a preferred monitor in prefs, and BestModeIDA() should first try this monitor and try the others only if nothing was found.

BestModeID() option probably is the best, at least for shared monitor case.

Mode promotion needs to be configurable anyway, so a preferred monitor is probably the most native-Amiga-compatible way. Not sure if it fits the AROS design very well. m68k-amiga currently has an overridden BestModeID() (until there is a better solution) that simply looks for RTG modes first and only checks chipset modes if the RTG mode scan didn't return any results. It seems to work fine.

I think we have slightly different use cases which should be handled by driver code:

  • Single monitor, single driver. Always works as expected.
  • Single monitor, multiple drivers. Here one should be configured as "a primary driver", BestModeID() only returns modes if it is primary driver mode.
  • Multiple monitors, one driver per monitor. Priorities needed in this case? Should BestModeID() return mode from any or only from some kind of "preferred monitors"?
  • Multiple monitors, one or more driver(s) per monitor. (Does this exists? Maybe m68k-amiga and 2 RTG cards with only one having passthrough?) Not sure what to do here or if this is worth the trouble..

But this would need separation of gfx driver and monitor. Probably this gets too complex..

A quick hack to the RTG driver allows RTG/native switching; probably too ugly, but at least it seems to work and allows easier testing. I noticed there is a strange side-effect when switching between RTG/chipset screens. Enforce the mentioned flags only for chunky drivers; for planar ones they will need to be provided by the driver. The philosophy behind it: any RTG screen is WB-compatible, but not every planar one.

Boot to Wanderer (RTG mode), then run any program that specifically wants a PAL or NTSC custom chipset screen. The mode switches as expected (the RTG chipset passthrough activates) but the chipset screen's menu bar and mouse cursor are missing, and input (mouse and keyboard) still goes to Wanderer's window. Automatic screen activation is missing; the input will be switched when some window on the screen gets activated (the ActiveScreen stuff in ActivateWindow()). But why does this problem only happen when switching between "monitors"? Screen switching works fine when using only RTG or only chipset screens.

The active window doesn't change; it changes only when you click in another window on the frontmost screen. The mouse sprite is visible only on the active monitor. The active monitor is the monitor on which the active screen is displayed. The active monitor is only set when a screen is set topmost (the shortcut and depth gadget only send a screen to the bottom) and OpenScreen() only sets the active monitor when there is no monitor set yet. Intuition tracks the current active monitor. When a screen is sent to the bottom, the new active screen is not the head of the list; it is the first screen which matches the current monitor. The active monitor changes in the following cases:

1. Some screen is explicitly set frontmost by some software.
2. There was no active monitor at all (all screens blank).
3. A hosted driver calls a special activation callback (see the monitorclass source).
4. The mouse pointer crosses screen borders and spatial linkage for the current monitor is set up in monitorclass.

(4) is a TODO, it's not implemented yet, but should be.

Do I simply add temporary #ifdefs to always set the active monitor when building the m68k build? (The lack of ActiveMonitor() calls makes development and testing quite annoying, so any kind of temporary hack is better than nothing.) If you want to, you can play with activation callbacks; they will work perfectly on native drivers. The RTG driver needs to call the callback when it switches off the passthrough. The only bottleneck: the chipset driver needs to call it when the RTG driver releases the passthrough, so they need to talk somehow. Alternative: add monitorclass attributes to describe drivers which share the same display (a kind of z-order linkage, analogous to the current spatial linkage), and make Intuition track this.

Same happens if I switch screens by pressing Amiga+N/M or clicking Wanderer's screen depth gadget.

The whole getdisplayinfodata.c should be extended to support custom chipset special features (the most important being overscan, and sprite resolution not being the same as the display resolution). Many things are still missing. This is why we added a size field to the ModeProperties structure; it can be extended.

A HUGE amount of FindName("hidd.graphics.gc") calls happen during any graphics operation (text output, scrolling, dialogs, etc.). This is due to the way the OOP stubs work. In non-ROMmable libraries (AROS_CREATE_ROM undefined) they cache the result of that lookup in the .bss; in ROMmable libs there is no .bss, so the lookup is repeated every time. We really need to find a way to get this to work for ROMmable libraries without all that overhead.

Implemented an API for MakeVPort() and MrgCop().

  • MrgCop() calls PrepareViewPorts after building ViewPortData chains. The method also gets View pointer. You can do any preparations there.
  • ShowViewPort's job is just to commit calculated changes.

Since you get View, you can detect position of your ViewPortData in it. You can also manage AmigaOS copperlist pointers in these methods. Could fakegfxhidd (software mouse cursor) support ShowViewPorts() too? No need for composition/multiple ViewPorts, only need to get View information.

Because fakegfxhidd only supports Show(), there is no way (at least without really ugly hacks) to handle RTG/chipset passthrough, and most UAE versions don't have Picasso96 (UAEGFX) hardware mouse cursor emulation (afaik it is not ported from WinUAE, and WinUAE only supports it in Direct3D mode, which isn't meant to be compatible with too old PCs).

Maybe I can add software cursor emulation to uaegfx driver but I am not sure it is worth the trouble. (or is it the right way to do it?)

Today I made planarbm faster and better. Now it just carries a struct BitMap inside; no more copying of the planes array etc., and you can retrieve a pointer to it at any time for direct modification. Now I'm working on a software composition layer. I want to generalize it: it will be a separate module sitting in DEVS:Monitors which hotplugs into graphics.library. You can update the Amiga display driver to support it. Instead of calling GetBitMap (which copies the structure), just get the aHidd_PlanarBM_BitMap attribute. This is similar to aHidd_ChunkyBM_Buffer. It's gettable and settable, so you can wrap your own bitmaps into objects. I also invented a framework for mirrored display drivers (like VESA and VGA). The base class will support everything by itself; you'll just need to supply a single framebuffer bitmap to it. It will do the mirroring by itself, eliminating code duplication.

This happens when amiga driver is asked to create new bitmap, I don't see how it can have anything to do with GetBitMap/SetBitMap and friends. (at least not this problem). AmigaVideoBM__Root__New() is called which then immediately calls OOP_DoSuperMethod -> BM_New -> PBM_New -> failure.

Amiga driver isn't very clean and it still is too slow for anything useful. For example PutImage() needs assembly C2P and GetImage() P2C routines, currently they are slow^10.

Adding RTG driver support but chipset is too slow for software composition. Normal full hardware screen dragging will be done someday (currently only front most screen is visible when dragging).

Why do many graphics routines use LONG coordinates? Is it possible to replace them with WORDs or platform specific 16/32-bit typedef? This causes unnecessary slowdown on 68000-compatible build because it can force use of much slower 32bit multiplications (32x32=32) and divisions (32/32=32) which are not directly supported by 68000/010. (68000/010 only have 16x16=32 multiplication and 32/16=16:16 division instruction), also all LONG operations are always slower than WORD on those CPUs (68000/010 ALU is 16-bit).

All graphics routines :) (including hidds/graphics/BM_Class.c for example). The graphics.library function parameter documentation does not make much sense: clib/protos list most functions as taking LONGs (even Move and Draw, which use the RastPort's WORD fields to store the last position!), while the autodocs list most functions as taking WORDs.

Internally AOS graphics.library seems to always use WORDs (ignoring the high 16 bits, which is easy with register parameters). This may be way too late, but I wonder why AROS didn't ever define "virtual registers" which would have allowed the caller to write <anything smaller than register width> into <whatever register with predefined width on that architecture>, and the library function side to extract <whatever expected width> for local use:

  • virtual register is 64 bit wide
  • calling code writes an unsigned 32 bit value
  • called code takes the lower 16 bit
  • called code puts it into local unsigned 16 bit copy and uses it

On 68k it could be mostly no-ops, and on other architectures something else.
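A purely hypothetical illustration of the idea (nothing like this exists in AROS today; names are made up):

typedef IPTR vreg;                        /* one register-wide slot on any architecture */

static WORD narrowing_callee(vreg x, vreg y)
{
    WORD local_x = (WORD)x;               /* called code takes only the lower 16 bits */
    WORD local_y = (WORD)y;
    return local_x + local_y;             /* ...work on the narrow local copies... */
}

static WORD caller(void)
{
    /* calling code writes unsigned 32-bit values into the register-wide slots */
    return narrowing_callee((vreg)(ULONG)320, (vreg)(ULONG)256);
}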

graphics.library API declarations are OS3.1-compatible. If the question concerns them, i would cast them to WORDs internally. As to graphics.hidd... Well, personally i'm not against changing to WORDs (however, IIRC, coordinates are WORDs there).

The biggest amiga-m68k game compatibility problem is the "wrong" (RTG-like) struct BitMap format for planar screens: many system-friendly (as in not taking over the system) games open a screen normally but then write directly to the screen's bitmap. How can this be fixed? I noticed there is already (at least partial) support, for example macros like IS_HIDD_BM and OBTAIN_HIDD_BM.

Isn't there a PlanarBM class in graphics.hidd that is basically an in-memory bitmap implementation that allows access to its rows of data? Possibly, but the application-visible struct BitMap is still the BMF_SPECIALFMT one. My test case was the game Dune II, which shows only a red mouse cursor and a blank screen (it also calls LoadRGB4() with a NULL viewport; after that comes a correct LoadRGB4() call). Just check the screen's struct BitMap Flags field and plane pointers? The BMF_SPECIALFMT "plane pointers" look quite random and don't point to chip RAM (if fast RAM is available), compared to real plane pointers.
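A sketch of the check hinted at in the question above (the helper name is made up for illustration; BMF_SPECIALFMT and TypeOfMem() are the standard graphics/exec definitions):

#include <proto/exec.h>
#include <exec/memory.h>
#include <graphics/gfx.h>

static BOOL LooksLikeRealPlanarBitMap(struct BitMap *bm)
{
    if (bm->Flags & BMF_SPECIALFMT)
        return FALSE;                                  /* RTG-style (chunky) bitmap, no direct plane access */

    /* a real planar screen bitmap has its first plane in chip RAM */
    return (TypeOfMem(bm->Planes[0]) & MEMF_CHIP) ? TRUE : FALSE;
}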