Extending the AmigaGFX HIDD class so that I would only have to add the screen modes that are not there already brings up a few questions. Is it practical, or even possible, to inherit from an existing HIDD that is already a child class of the Graphics.hidd class? It's possible, though perhaps not practical; no-one has ever tried it. AROS is a Research OS, so feel free to try. :)

Would it be better or easier to write a new driver based on the AmigaGFX driver code? It would be better just to add support for the new modes to the existing Amiga driver; I see the NatAmi chipset as a superset of AGA.

Will more than one HIDD load at once and still share resources? Load, yes. Share resources, yes, if they are designed to do so; however, it can be problematic.

The reason I wanted to subclass the AmigaGFX driver was to avoid problems and make the driver more maintainable. If it's problematic to extend one driver to implement another, I'll just have to modify the AmigaGFX HIDD. Subclassing of bitmap HIDDs is already done in some cases: the VESA bitmap subclasses a generic chunky bitmap, and AFAIK the AmigaGFX bitmap subclasses the planar bitmap class.

What does IsCyberModeID() look for? Does it mark the absence of a Copper or the presence of chunky pixels? The chunky pixel format. How is it used? AROS itself doesn't use it.

As I think about this, adding chunky support to the Indivision ECS or GraffitiGraphics/Indivision AGA 2.0 would require a similar approach and will need answers to those questions as well. I've already skimmed the documentation on WikiBooks, and it looks like it might be possible, but I'm still not sure.

There are no little-endian modes on the NatAmi, so I know I can ditch those modes. The R8G8B8 mode is stored as 3 byte-planes rather than as pure chunky 3-byte pixels, so I'm not quite sure what to do with that other than adding a HYBRID mode to the modes enum. Most importantly, I need to know how to tell AROS what modes are supported by the hardware. (I see a SUPPORTMASK value in natamirtg.h, but I don't know whether it just communicates with the rest of the HIDD or is used to communicate more globally with CGFX.library.)

I guess my main questions are these two: Is there a better template for me to follow? UAEGFX may not be the best choice, because it has ugly mode-ID remapping, its mode list comes from UAE, and all mode-setup and blit functions call UAE native code. It is probably easier to first examine a simple driver like vga or vesa, which have "normal" mode-generation code and no "hidden" code.

Mode-ID remapping can always be added later if you need an AOS-like mode-ID range for compatibility with original AOS programs.

Most importantly, I need to know how to tell AROS what modes are supported by the hardware.

In the New method of your gfx class, you need to build a taglist of "pixfmts" and "syncs" and then pass it to the parent class. See AROS/workbench/hidds/hidd.nouveau/nouveauclass.c, line 355:

        struct TagItem modetags[] = {
            { aHidd_Gfx_PixFmtTags, (IPTR)pftags_24bpp },
            { aHidd_Gfx_PixFmtTags, (IPTR)pftags_16bpp },
            { TAG_MORE, (IPTR)syncs },  /* FIXME: sync tags will leak */
            { TAG_DONE, 0UL }
        };

        struct TagItem mytags[] = {
            { aHidd_Gfx_ModeTags, (IPTR)modetags },
            { TAG_MORE, (IPTR)msg->attrList }
        };

        struct pRoot_New mymsg;

        mymsg.mID = msg->mID;
        mymsg.attrList = mytags;

        msg = &mymsg;

        o = (OOP_Object *)OOP_DoSuperMethod(cl, o, (OOP_Msg)msg);

in workbench/hidds/:

manu_bitmap.c - VOID METHOD
name_class.c - OOP_Object and VOID

What happens when blitting when one bitmap is a memory bitmap and the other is the display driver's bitmap?

Currently, drivers call DoSuperMethod in such cases. I can imagine it could be optimized a little more (similar to the PutImage methods in the ATI driver), where the DMA engine is used if the target bitmap is maintained by the driver.

Just to note: in this case a memory bitmap might have some restrictions on its layout (like alignment requirements). For example, in my Windows GDI driver, BlitColorExpansion() works perfectly with real planar bitmaps of the planarbm class; however, in that case the bitmap has to be created using the NewBitMap() method of the display HIDD. This means it actually has to be a friend of a display bitmap (or a friend of a friend of a display bitmap...).

This makes us revisit the meaning of bitmap friendship. Currently, a bitmap inherits all characteristics (internal layout, pixelformat, and class) of its friend bitmap. If it gets another class (because the pixelformat was specified explicitly), the friendship is actually broken. In the new situation this would not mean that the friendship is totally broken, because NewBitMap() may still apply some adjustments to the bitmap. They would just be some more distant friends...

However, if the bitmap was created without a friend specification and without an explicit display mode specification, it will be a pure RAM-based bitmap. Gfx drivers will be unlikely to handle it in an accelerated way (because the alignment is wrong). So the question is: how safe is it to implement such a mechanism? Wouldn't we end up with slow graphics that are impossible to accelerate by design, because half of the bitmaps will be created without any friend specification?

What are the chances that a gfx driver will be able to accelerate blits from an ARBITRARILY formatted bitmap (without any forced alignment, etc.)?

In that case the DoSuperMethod call is slow but safe, since it will use the GetPixel/PutPixel methods of the two drivers in order to transfer the bitmap safely. Of course, this case could also be optimized a little more.

So, cards actually can transfer data between each other by DMA?

Could fakegfxhidd (the software mouse cursor) support ShowViewPorts() too? There's no need for composition/multiple ViewPorts; it only needs to get the View information.

Check PrepareViewPorts; it is always called. BTW, it gets a struct View * directly. You can remember the information you gather in the first bitmap in the viewports chain; this is what will be passed to Show. Implementing ShowViewPorts in fakegfx seems problematic because it coexists badly with the framebuffer. BTW, you can also implement a ShowViewPorts which does some preparations and returns FALSE.

What is the difference between a framebuffer driver and a mirrored driver? In a framebuffer driver there is only one on-screen bitmap which is constantly displayed, and Show is actually emulated by copying contents and replacing the bitmap object. Mirroring, in fact, is the avoidance of slow VRAM reads: a mirrored driver does not use VRAM for the bitmap object, it uses a mirror buffer instead, updating VRAM when needed. Since a mirroring driver has a full copy of the displayed bitmap, it doesn't need Show emulation; instead it just attaches different mirrors to the VRAM. So the VESA and VGA drivers don't need to be marked as framebuffer drivers, since they use mirroring? Right, they don't. Mirroring drivers handle VRAM on their own and don't need Show emulation; opening a framebuffer is just a waste of RAM for them. BTW, my drivers return NULL in response to such a request. These drivers indicate this by returning TRUE as the value of the aHidd_Gfx_NoFrameBuffer attribute.

I'm trying to make the compositing class sharable between drivers too. There's one advantage in this approach: it's faster with mirrored drivers, because you can compose bitmaps right into VRAM. So in the long term we might end up with a base compositing class and a series of subclasses to suit different driver types. BTW, perhaps the same can be done with software sprite emulation; it would be faster and less flickery to put its image directly into VRAM instead of the backup-and-copy approach of fakegfx. The VESA driver will now superimpose the cursor on a cursor-sized scratch bitmap with the right bit of background copied to it beforehand. I experimented with alpha-blitting the cursor directly into VRAM, but it was too much hassle to support all colour depths that way, because PutAlphaImage couldn't be used on the VRAM.

Also, as the GMA driver does, a driver can detect in ShowViewPorts whether only the topmost screen is visible and deactivate mirroring, so that blits are not duplicated. Unfortunately, no. Mirroring is not just a way to do composition; its primary aim is to avoid slow VRAM reads. If you get around it and just place the bitmap in VRAM (the framebuffer approach), it will work very slowly. This is why mirroring is done in the VGA and VESA drivers.



CreateVLayerHandleTagList() and DeleteVLayerHandle() are two methods added to the graphics driver class, along with some attributes developed for an overlay class. Currently there's no overlay base class, since I don't see what to put there.

It really is necessary for Intuition to have knowledge of physical displays. In the future, when I implement the whole class, it will be able to maintain information about their relative placement in physical space. This is needed in order to handle multi-display input correctly.

The rest is a reimplementation of the MorphOS API. I decided not to reinvent the wheel once more and to use the already existing API instead; I want to maintain compatibility between extensions.

So far we had clean layers: Intuition works on top of Graphics, and Graphics works on top of the Graphics HIDD. Now we introduce wiring between Intuition and the Graphics HIDD so that the Monitor class can read HIDD information. Intuition already uses direct HIDD access in pointerclass, where it attaches a palette to a sprite bitmap. Yes, and what is really bad about that? Even a common application may query HIDD information if it wants to.

Monitorclass is a part of BOOPSI, and BOOPSI is a part of Intuition itself. The code could be moved to graphics.library; however, it would still need Intuition in order to call MakeClass().

Some people here are against adding more and more private functions. Our graphics.library already has 7 private functions for CGX operation, plus the AROS-specific AddDisplayDriverA(); that's already enough, I believe. In fact I do exactly what was suggested: move AROS-specific functionality to some new AROS-specific module. In this case the module already exists: it's graphics.hidd. Components running on top of it may talk to it directly when needed.

To tell the truth, I dislike only one fact: that I declared AddDisplayDriverA() public at all. I should have implemented monitorclass from the beginning, and Intuition would call a private AddDisplayDriverA() upon monitorclass object creation. Well, what's done is done; I won't rewrite the whole thing. AddDisplayDriverA() will now create monitorclass objects upon successful driver installation. But isn't AddDisplayDriverA() a graphics.library function? This means that a module that provides functionality (graphics.library) will now depend on a module that consumes that functionality (intuition.library). It's like having a browser depend on the network stack while the network stack also depends on the browser. After this change, will it even be possible to have graphics.library running without Intuition? Again, I think this is an argument for separating the monitor class into another module. MorphOS may have put it in Intuition, but we don't have to: we might preserve the interface but implement it in a more suitable form.

Yes, on one point. However, I would not call this a strict dependency. AddDisplayDriverA() may do the following at its end:

if (!IntuitionBase)
     IntuitionBase = OpenLibrary("intuition.library", 50);
if (!IntuitionBase)
     return ret;

mon = NewObjectA(NULL, "monitorclass", tags);

This way graphics.library consumes no functionality from intuition.library; it just provides some information to it (tells it about new displays).

Similar code is already present in the sync class of graphics.hidd, where it manually opens graphics.library, creates a MonitorSpec structure, and adds it to the MonitorList.

graphics.hidd does not have the RastPort structure, which means it doesn't work with Layers, and that means no windows and no screens. In theory it's possible to use HIDDs for drawing ONLY on some kind of simple embedded system which doesn't run multiple applications and doesn't need windowing. It would be similar to manually constructing a ViewPort and View on AmigaOS (which would also work on AROS).

Linux Driver


Linux Driver Stack.

For information on writing a driver, see the Unix X-server graphics drivers and study their source.

Linux Radeon build. Blog.




Completion Status