Aros/Developer/AHIDrivers


Retargetable Audio Devices


To support sound cards other than the native Amiga(TM) Paula chip, a system called AHI (Audio Hardware Interface) was developed. AHI uses ahi.device plus separate driver files to support different sound cards, which are selected in the AHI preferences program (in the Prefs drawer). It can be programmed in much the same way as the old Amiga audio.device. More information is included with the AHI developer files, which you can download from the AHI homepage or from Aminet.

See also the Wikipedia page on AHI and the AHI developer help on SourceForge.

Devices

Units 0 - 3 can be shared by as many programs as you define channels for them. The Music Unit blocks whatever hardware it is set to exclusively, so that no other program can play sound through that hardware at the same time. That is why <ahi-device>.audio was invented: it is a virtual hardware driver that sends its sound data to whichever unit you set it up for. This way, although the Music Unit pull-down option blocks <ahi-device>.audio exclusively, other programs can still send sound to units 0 - 3. Normally all programs use unit 0; only very few programs use the Music Unit.

AHI can be used in two ways: as a device driver (high-level) or as a library (low-level). Although this confused me at first, the split is actually simple: the device model just lets you send sound streams, while the library model deals with preloaded samples, as used by trackers.

Library Approach


The library (low-level) approach uses AHI functions like AHI_SetSound(), AHI_SetVol() and so on. In practice this method has one big problem: it provides no mixing functions, so your program will lock ahi.device and, while it is running, no other AHI program will work. The only advantage of low-level coding, as mentioned in the documentation, is "low overhead and much more advanced control over the playing sounds".

Here you get exclusive access to the audio hardware, and can do almost whatever you want, including monitoring.

The obvious disadvantage is that the audio hardware is blocked for all other programs. Most drivers do not handle this situation gracefully: if another program tries to access the hardware, it mostly trashes everything and you need to restart your program or re-allocate the audio hardware.

From AHI 6 on there is a non-blocking AHI mode called "device mode", but it does not allow recording. It is playback only and has poor timing: good enough for simply playing something back, but too bad for real-time response.

To use two samples you need to use the library model: open ahi.device and extract the library base from the AHI device structure in the IORequest.
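A minimal sketch of that setup, using the classic (non-interface) API; the names mp, req and open_ahi_library() are only for illustration, and error handling is trimmed:

#include <exec/types.h>
#include <devices/ahi.h>
#include <proto/exec.h>

struct Library *AHIBase = NULL;      /* base for the low-level AHI_*() calls */

static struct MsgPort    *mp;
static struct AHIRequest *req;

BOOL open_ahi_library(void)
{
    if ((mp = CreateMsgPort()))
    {
        if ((req = (struct AHIRequest *)CreateIORequest(mp, sizeof(struct AHIRequest))))
        {
            req->ahir_Version = 4;   /* minimum AHI version we need */
            if (OpenDevice(AHINAME, AHI_NO_UNIT, (struct IORequest *)req, 0) == 0)
            {
                /* the device base doubles as the library base */
                AHIBase = (struct Library *)req->ahir_Std.io_Device;
                return TRUE;
            }
        }
    }
    return FALSE;
}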

Device Approach


In this approach you just use ahi.device as a standard Amiga device and send raw data to AHI with CMD_WRITE. With high-level AHI coding you can mix sounds, there are no 'locks' on ahi.device any more, and so on. For MP3 playing or for mod players, for example, you just unpack the data with the CPU and send the unpacked raw data with CMD_WRITE.

The device interface is much easier to program and is suitable for system noise or MP3 players. It has a fixed latency of about 20 ms, which suffices in most (non-musical) situations. It does not block the hardware, so it is the first choice when you just want to play some audio quickly.

It also supports the CMD_READ command, but:

  • as soon as you read, it blocks AHI exclusively;
  • it can sometimes give the odd click while recording via the device interface.

All you do is use the CMD_WRITE command with the sample information filled into the AHI IORequest structure; for more simultaneous samples you will need to copy the IORequest and use the copy for the other samples, and so on. Essentially the structure is sent as a message to the AHI daemon, which is standard for Exec devices, and that is why it needs a copy: otherwise you would be trying to link a message into a list it is already on, at the same address, and crash!
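A small sketch of such a copy, using the AHImp and AHIio names from the setup code below; AHIio2 here is the second request that also appears in the closing-down example:

struct AHIRequest *AHIio2;

if ((AHIio2 = (struct AHIRequest *)CreateIORequest(AHImp, sizeof(struct AHIRequest))))
{
    /* copy after OpenDevice() so io_Device and io_Unit are carried over too */
    *AHIio2 = *AHIio;
    AHIio2->ahir_Std.io_Message.mn_ReplyPort = AHImp;  /* both can share the reply port */
}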

I would start with the device API, not least because it's very simple. When you have loaded/generated sample data, opened the AHI device and allocated IORequest(s), you can use the Exec library functions (DoIO, SendIO, BeginIO...) to play the samples. However, there may be a limited amount of AHI channels, so IIRC in that case lower priority sounds will be queued and played later. You could create your own mixer routine which basically "streams" data using double-buffered IO requests (there is an example in the AHI SDK about double buffering).

Do you really need to record via ahi.device? If it is just the monitor feature, you can use AHI's internal monitor functionality (which has the lowest possible latency and may use the hardware's ability to monitor), or you can read, manipulate and copy your data to the output buffer using the library interface. The latency will usually be around 20 ms, depending on the driver; the application has no control over this.

You could also use datatypes.library to play samples; it is hard to say whether its timing is very accurate, but it is at least very simple to use.

CreateMsgPort
CreateIORequest
OpenDevice (ahi.device)

loop
    {
    depack some music data
    fill AHIdatas
    SendIO((struct IORequest *) AHIdatas);
    }

Then, when I need a sound effect, I just do the following (it will be played on a second channel, which is the only way to make AHI play both at the same time):

CreateMsgPort
CreateIORequest
OpenDevice (ahi.device)

fill AHIdatas

DoIO/sendio

Find the default audio ID, such as for unit 0 or the default unit. Then call AHI_AllocAudioA() and pass it the ID (or AHI_DEFAULT_ID) and an AHIA_Channels tag with the minimum number of channels you need, then check whether the allocation succeeded. If so, you know the mode has enough channels and you can call AHI_FreeAudio() again. If not, it does not have enough channels, provided you passed all the required tags.
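A minimal sketch of that probe, assuming the low-level AHI calls are available (library base obtained as in the library section); enough_channels() is just an illustrative name:

#include <devices/ahi.h>
#include <proto/ahi.h>

BOOL enough_channels(ULONG needed)
{
    struct AHIAudioCtrl *ctrl;

    ctrl = AHI_AllocAudio(AHIA_AudioID,  AHI_DEFAULT_ID,
                          AHIA_Channels, needed,
                          AHIA_Sounds,   1,
                          TAG_DONE);
    if (ctrl)
    {
        AHI_FreeAudio(ctrl);   /* only probing, so release it again */
        return TRUE;
    }
    return FALSE;              /* the mode cannot provide that many channels */
}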

The AHI device interface plays streams, not samples. AHI mixes as many streams together as there are channels set in the prefs; if you try to play more streams than there are channels available, the additional streams are muted.

If you need to synchronise two samples (perhaps for stereo), you can issue a CMD_STOP, do your CMD_WRITEs, then issue a CMD_START to start playback. What you have to watch is that this affects all AHI applications, not just your own.
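A rough sketch of that stop/queue/start pattern; stop_req, left and right are assumed to be initialised AHIRequests on the same open unit (stop_req only carries CMD_STOP/CMD_START, left and right are fully filled-in CMD_WRITE requests):

void play_in_sync(struct AHIRequest *stop_req,
                  struct AHIRequest *left,
                  struct AHIRequest *right)
{
    stop_req->ahir_Std.io_Command = CMD_STOP;
    DoIO((struct IORequest *)stop_req);        /* pause playback for every AHI user */

    SendIO((struct IORequest *)left);          /* queue both writes while stopped   */
    SendIO((struct IORequest *)right);

    stop_req->ahir_Std.io_Command = CMD_START;
    DoIO((struct IORequest *)stop_req);        /* resume, starting them together    */
}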

This brings me to another point: are your sounds mono or stereo? As you will have read, the proper way to do stereo is to tell AHI to centre the pan and give it a stereo sample. I do not know whether it returns an error when it cannot do that; possibly it accepts the write but with a muted channel, as it looks like you found out.

Another thing about multiple CMD_WRITEs from different AHI requests: AHI treats each request separately and mixes the sound together on the same track. Provided the hardware supports it, the high-level API only offers panning and does not let you specify a direct track, AFAIK.

http://utilitybase.com/forum/index.php?action=vthread&forum=201&topic=1565&page=-1

If you want to play multiple samples with only one channel through the device API, you have to create one stream from the samples yourself.
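As a sketch of what "create one stream yourself" means, here is a naive software mix of two mono 16-bit buffers into one, clipped to the 16-bit range; the mixed buffer is then sent with a single CMD_WRITE:

#include <exec/types.h>

void mix16(const WORD *a, const WORD *b, WORD *out, ULONG samples)
{
    ULONG i;
    for (i = 0; i < samples; i++)
    {
        LONG s = (LONG)a[i] + (LONG)b[i];       /* sum in 32 bits       */
        if (s >  32767) s =  32767;             /* clip to 16-bit range */
        if (s < -32768) s = -32768;
        out[i] = (WORD)s;
    }
}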

The device API is used via OpenDevice() together with the commands CMD_READ, CMD_WRITE, CMD_START and CMD_STOP.


Setting up

if ((AHImp = CreateMsgPort())) {
  if ((AHIio = (struct AHIRequest *)CreateIORequest(AHImp, sizeof(struct AHIRequest)))) {
     AHIio->ahir_Version = 6;
     AHIDevice = OpenDevice(AHINAME, 0, (struct IORequest *)AHIio, 0);
  }
}

This will create a new message port, then create an IORequest structure, and finally open the AHI device to write to. Note that ahir_Version is filled in before OpenDevice() so that AHI can check the installed version is new enough.

Playing a Sound

// Play buffer
AHIio->ahir_Std.io_Message.mn_Node.ln_Pri = pri;
AHIio->ahir_Std.io_Command = CMD_WRITE;
AHIio->ahir_Std.io_Data = p1;
AHIio->ahir_Std.io_Length = length;
AHIio->ahir_Std.io_Offset = 0;
AHIio->ahir_Frequency = FREQUENCY;
AHIio->ahir_Type = TYPE;
AHIio->ahir_Volume = 0x10000; // Full volume
AHIio->ahir_Position = 0x8000; // Centered
AHIio->ahir_Link = link;
SendIO((struct IORequest *) AHIio);
// fill

  AHIios[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
  AHIios[0]->ahir_Std.io_Command  = CMD_WRITE;
  AHIios[0]->ahir_Std.io_Data     = raw_data;
  AHIios[0]->ahir_Std.io_Length   = size_of_buffer;
  AHIios[0]->ahir_Std.io_Offset   = 0;
  AHIios[0]->ahir_Frequency       = 48000;     // freq
  AHIios[0]->ahir_Type            = AHIST_S16S;// 16b
  AHIios[0]->ahir_Volume          = 0x10000;   // vol.
  AHIios[0]->ahir_Position        = 0x8000;   
  AHIios[0]->ahir_Link            = NULL;

// send

SendIO((struct IORequest *) AHIios[0]);

The AHIRequest structure is similar to the audio.device request structures. p1 points to the actual raw sound data, length is the size of the data buffer in bytes, Frequency is the playback rate, e.g. 8000 Hz, Type is the sample format, e.g. AHIST_M8S, followed by the Volume and the stereo Position. SendIO() starts playing the sound, and you may use WaitIO() to wait until the buffer has been played before starting on the next block of data.
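A minimal usage sketch of that, continuing from the request set up above:

SendIO((struct IORequest *) AHIio);            /* start the buffer playing        */

/* ... decode or prepare the next block of data here ... */

WaitIO((struct IORequest *) AHIio);            /* block until AHI is done with it */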

Freeing Audio

  • Call AHI_ControlAudio() with AHIC_Play set to FALSE, to make sure nothing is being played.
  • Unload the sounds with AHI_UnloadSound(), so that every loaded sound is released.
  • Then call AHI_FreeAudio() (a short sketch follows below).
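The same teardown as a rough low-level sketch, assuming ctrl is the AHIAudioCtrl returned by AHI_AllocAudio() and NUM_SOUNDS is the number of sounds that were loaded:

int i;

AHI_ControlAudio(ctrl, AHIC_Play, FALSE, TAG_DONE);   /* stop playback first        */

for (i = 0; i < NUM_SOUNDS; i++)
    AHI_UnloadSound(i, ctrl);                         /* release every loaded sound */

AHI_FreeAudio(ctrl);                                  /* then free the audio        */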

Closing down


Once you have finished with the AHI device, you need to close it down, e.g.:

  • Do a CloseDevice().
  • Then DeleteIORequest().
  • And last DeleteMsgPort().
if (!AHIDevice)
   CloseDevice((struct IORequest *)AHIio);
DeleteIORequest((struct IORequest *)AHIio);
DeleteIORequest((struct IORequest *)AHIio2);
DeleteMsgPort(AHImp);

Updating Sound often


Take a look at the simpleplay example.

If you want to 'update' your sound on a regular basis, there is already functionality available for that.

In AHI_AllocAudio() you can provide a player function using the AHIA_PlayerFunc tag.

AHIA_PlayerFunc If you are going to play a musical score, you should use this "interrupt" source instead of VBLANK or CIA timers in order to get the best result with all audio drivers. If you cannot use this, you must not use any "non-realtime" modes (see AHI_GetAudioAttrsA() in the autodocs, the AHIDB_Realtime tag).

AHIA_PlayerFreq If non-zero, this enables timing and specifies how many times per second PlayerFunc will be called. It must be specified if AHIA_PlayerFunc is! It is suggested that you keep the frequency below 100-200 Hz. Since the frequency is a Fixed (fixed-point) number, AHIA_PlayerFreq should be less than 13107200 (that is 200 Hz, i.e. 200<<16).

That way it is possible, for example, to write some kind of replayer that decides which sounds need to be stopped, slided, have their volume turned up or down, and so on.
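A rough sketch of registering such a player function at 50 Hz. The hook calling convention is platform specific (the examples further down use EmulLibEntry gates on MorphOS and AROS_UFH3 macros on AROS); a plain function pointer is used here only to illustrate the tags:

#include <devices/ahi.h>
#include <utility/hooks.h>
#include <proto/ahi.h>

static ULONG my_player(struct Hook *hook, struct AHIAudioCtrl *actrl, APTR ignored)
{
    /* advance the song position, call AHI_SetSound()/AHI_SetVol()/AHI_SetFreq() here */
    return 0;
}

struct Hook          player_hook;
struct AHIAudioCtrl *ctrl;

void setup_player(void)
{
    player_hook.h_Entry = (HOOKFUNC)my_player;
    player_hook.h_Data  = NULL;                  /* could point to your replayer state */

    ctrl = AHI_AllocAudio(AHIA_AudioID,       AHI_DEFAULT_ID,
                          AHIA_Channels,      4,
                          AHIA_Sounds,        16,
                          AHIA_PlayerFunc,    (IPTR)&player_hook,
                          AHIA_PlayerFreq,    50 << 16,   /* Fixed: 50 Hz */
                          AHIA_MinPlayerFreq, 50 << 16,
                          AHIA_MaxPlayerFreq, 50 << 16,
                          TAG_DONE);
}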

Just make the main loop wait for the player to be 'done'.

You could do that by messaging, but for example also by using a signal. In order to stop the player you could use a boolean (that you set by pressing a button or whatever you want) that the player checks and then signals the main loop to quit.

Please take a look at the PlaySineEverywhere.c example in the AHI developer archive.

Misc


There are a number of things that are called "latency." The thing that concerns me most is the time between when audio (like a microphone) hits the input and when it comes out the monitor output. You can measure this by putting something with a short rise time (a cross stick sound is good) into one channel, and connect the output of that channel to the input of another channel. Record a few seconds on both channels. Stop the recording, zoom in on the waveform of the two channels, and measure the time difference between them. That's the input/output latency.

Latency when playing samples is trickier because it depends on the program that's supporting the VST instrument. If you have a MIDI keyboard with sounds you could choose a similar sound on the keyboard and from the VST library, connect the analog output of the sample playback channel to one input, connect the synth output to another input, play your sound, record it to two tracks, and look at the time difference between the tracks. That's not totally accurate but it will get you a ballpark measurement.

If you do the following literally (in your code):

filebuffer = Open("e.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("nfilebuffer NULL");
else length1 = Read(filebuffer,p1,BUFFERSIZE);

filebuffer = Open("a.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("nfilebuffer NULL");
else length2 = Read(filebuffer,p2,BUFFERSIZE);

filebuffer = Open("d.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("nfilebuffer NULL");
else length3 = Read(filebuffer,p3,BUFFERSIZE);

filebuffer = Open("g.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("nfilebuffer NULL");
else length4 = Read(filebuffer,p4,BUFFERSIZE);

filebuffer = Open("b.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("nfilebuffer NULL");
else length5 = Read(filebuffer,p5,BUFFERSIZE);

filebuffer = Open("ec.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("nfilebuffer NULL");
else length6 = Read(filebuffer,p6,BUFFERSIZE);

Then your variable "filebuffer" (which is a special pointer to the handle of the file) gets overwritten before the handle is closed.

So something like this was expected instead:

filebuffer = Open("b.raw",MODE_OLDFILE);
if (filebuffer==NULL)
{
  printf("\nfilebuffer NULL");
}
else
{
  length5 = Read(filebuffer,p5,BUFFERSIZE);
  if (Close(filebuffer))
  {
    printf("\nfile b.raw closed successfully");
  }
  else
  {
    printf("\nfile b.raw did not close properly, but we cannot use the filehandle any more because it is no longer valid");
  }
}

You have to unload/free every channel and sound you allocated, whether it was used or not.

For example, something like this will loop over all allocated channels:

for (chan_no = 0; chan_no < num_of_channels; chan_no++)
{
    if (channel[chan_no] != NULL)
        free(channel[chan_no]);
}

To be certain, you can also set every sound bank pointer to NULL before exit.

Examples


Another example follows; double buffering is required, though.

struct MsgPort    *AHIPort = NULL;
struct AHIRequest *AHIReq = NULL;
BYTE               AHIDevice = -1;
UBYTE              unit = AHI_DEFAULT_UNIT;

static int write_ahi_output (char * output_data, int output_size);
static void close_ahi_output ( void );

static int
open_ahi_output ( void ) {
    if (AHIPort = CreateMsgPort())
    {
        if (AHIReq = (struct AHIRequest *) CreateIORequest(AHIPort, sizeof(struct AHIRequest)))
        {
            AHIReq->ahir_Version = 4;
            if (!(AHIDevice = OpenDevice(AHINAME, unit, (struct IORequest *) AHIReq, NULL)))
            {
                send_output = write_ahi_output;
                close_output = close_ahi_output;
                return 0;
            }
            DeleteIORequest((struct IORequest *) AHIReq);
            AHIReq = NULL;

        }
        DeleteMsgPort(AHIPort);
        AHIPort = NULL;
    }

    return -1;
}

static int
write_ahi_output (char * output_data, int output_size) {
    if (!CheckIO((struct IORequest *) AHIReq))
    {
        WaitIO((struct IORequest *) AHIReq);
        //AbortIO((struct IORequest *) AHIReq);
    }

    AHIReq->ahir_Std.io_Command = CMD_WRITE;
    AHIReq->ahir_Std.io_Flags = 0;
    AHIReq->ahir_Std.io_Data = output_data;
    AHIReq->ahir_Std.io_Length = output_size;
    AHIReq->ahir_Std.io_Offset = 0;
    AHIReq->ahir_Frequency = rate;
    AHIReq->ahir_Type = AHIST_S16S;
    AHIReq->ahir_Volume = 0x10000;
    AHIReq->ahir_Position = 0x8000;
    AHIReq->ahir_Link = NULL;
    SendIO((struct IORequest *) AHIReq);
     
    return 0;
}

static void
close_ahi_output ( void ) {
if (AHIReq && !CheckIO((struct IORequest *) AHIReq)) {
    AbortIO((struct IORequest *) AHIReq);
    WaitIO((struct IORequest *) AHIReq);
}

if (AHIReq) {
    CloseDevice((struct IORequest *) AHIReq);
    AHIDevice = -1;
    DeleteIORequest((struct IORequest *) AHIReq);
    AHIReq = NULL;
}

if (AHIPort) {
    DeleteMsgPort(AHIPort);
    AHIPort = NULL;
}

}

High-level AHI for sound playback - the idea is to create several I/O requests and, when you want to play a sound, pick one that is free, start a CMD_WRITE on it with BeginIO(), and mark the I/O request as in use (e.g. a ch->busy flag). The SoundIO() function then checks for replies from ahi.device saying that some I/O request has finished and simply marks those as free again. If no I/O request is free, the PlaySnd() function interrupts the one that has been playing the longest with AbortIO()/WaitIO() and reuses it.



char *snd_buffer[5];
int sound_file_size[5];

int number;

struct Process *sound_player;
int sound_player_done = 0;

void load_sound(char *name, int number)
{

   FILE *fp_filename;

   if((fp_filename = fopen(name,"rb")) == NULL)
     { printf("can't open sound file\n");exit(0);} ;

   fseek (fp_filename,0,SEEK_END);
   sound_file_size[number] = ftell(fp_filename);
   fseek (fp_filename,0,SEEK_SET);

   snd_buffer[number]=(char *)malloc(sound_file_size[number]);

   fread(snd_buffer[number],sound_file_size[number],1,fp_filename);

   //printf("%d\n",sound_file_size[number]);

   fclose(fp_filename);

 //  free(snd_buffer[number]);

}

void play_sound_routine(void)

{

struct MsgPort    *AHImp_sound     = NULL;
struct AHIRequest *AHIios_sound[2] = {NULL,NULL};
struct AHIRequest *AHIio_sound     = NULL;
BYTE               AHIDevice_sound = -1;
//ULONG sig_sound;

//-----open/setup ahi

    if((AHImp_sound=CreateMsgPort()) != NULL) {
    if((AHIio_sound=(struct AHIRequest *)CreateIORequest(AHImp_sound,sizeof(struct AHIRequest))) != NULL) {
      AHIio_sound->ahir_Version = 4;
      AHIDevice_sound=OpenDevice(AHINAME,0,(struct IORequest *)AHIio_sound,0);
    }
  }

  if(AHIDevice_sound) {
    Printf("Unable to open %s/0 version 4\n",AHINAME);
    goto sound_panic;
  }

  AHIios_sound[0]=AHIio_sound;
  SetIoErr(0);

    AHIios_sound[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
    AHIios_sound[0]->ahir_Std.io_Command  = CMD_WRITE;
    AHIios_sound[0]->ahir_Std.io_Data     = snd_buffer[number];//sndbuf;
    AHIios_sound[0]->ahir_Std.io_Length   = sound_file_size[number];//fib_snd.fib_Size;
    AHIios_sound[0]->ahir_Std.io_Offset   = 0;
    AHIios_sound[0]->ahir_Frequency       = 8000;//44100;
    AHIios_sound[0]->ahir_Type            = AHIST_M8S;//AHIST_M16S;
    AHIios_sound[0]->ahir_Volume          = 0x10000;          // Full volume
    AHIios_sound[0]->ahir_Position        = 0x8000;           // Centered
    AHIios_sound[0]->ahir_Link            = NULL;

    DoIO((struct IORequest *) AHIios_sound[0]);

sound_panic:

  //printf("are we on sound_exit?\n");
  if(!AHIDevice_sound)
    CloseDevice((struct IORequest *)AHIio_sound);
  DeleteIORequest((struct IORequest *)AHIio_sound);
  DeleteMsgPort(AHImp_sound);
  sound_player_done = 1;

}

void stop_sound(void)
{

     Signal(&sound_player->pr_Task, SIGBREAKF_CTRL_C );
     while(sound_player_done !=1){};
     sound_player_done=0;
}

void play_sound(int num)

{
      number=num;

         #ifdef __MORPHOS__

         sound_player = CreateNewProcTags(
							NP_Entry, &play_sound_routine,
							NP_Priority, 1,
							NP_Name, "Ahi raw-sound-player Process",
                          //  NP_Input, Input(),
                          //  NP_CloseInput, FALSE,
                          //  NP_Output, Output(),
                          //  NP_CloseOutput, FALSE,

                            NP_CodeType, CODETYPE_PPC,

							TAG_DONE);

         #else

         sound_player = CreateNewProcTags(
							NP_Entry, &play_sound_routine,
							NP_Priority, 1,
							NP_Name, "Ahi raw-sound-player Process",
                          //  NP_Input, Input(),
                          //  NP_CloseInput, FALSE,
                          //  NP_Output, Output(),
                          //  NP_CloseOutput, FALSE,

							TAG_DONE);
         #endif

         Delay(10); // small delay to let the sounds finish

}

Low level for music playback

These steps will allow you to use the low-level AHI functions:

  • Create a message port and an AHIRequest with the appropriate exec.library functions.
  • Open the device with OpenDevice(), giving AHI_NO_UNIT as the unit.
  • Get an interface to the library with GetInterface(), passing the io_Device field of the IORequest as its first parameter (the interface step applies to AmigaOS 4; on AROS and classic systems the io_Device pointer is simply used as the library base, as shown earlier).

struct AHIIFace *IAHI;
struct Library *AHIBase;
struct AHIRequest *ahi_request;
struct MsgPort *mp;

if (mp = IExec->CreateMsgPort())
{   if (ahi_request = (struct AHIRequest *)IExec->CreateIORequest(mp, sizeof(struct AHIRequest)))
   {
      ahi_request->ahir_Version = 4;
      if (IExec->OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)ahi_request, 0) == 0)
      {
         AHIBase = (struct Library *)ahi_request->ahir_Std.io_Device;
         if (IAHI = (struct AHIIFace *)IExec->GetInterface(AHIBase, "main", 1, NULL))
         {
            // Interface got, we can now use AHI functions
            // ...
            // Once we are done we have to drop interface and free resources
            IExec->DropInterface((struct Interface *)IAHI);
         }
         IExec->CloseDevice((struct IORequest *)ahi_request);
      }
      IExec->DeleteIORequest((struct IORequest *)ahi_request);
   }
   IExec->DeleteMsgPort(mp);
}
Once you get the AHI interface, its functions can be used. To start playing sounds you need to allocate the audio hardware (optionally you can ask the user for an audio mode and frequency). Then you need to load the samples to use with AHI. You do this with AHI_AllocAudio(), AHI_ControlAudio() and AHI_LoadSound().

struct AHIAudioCtrl *ahi_ctrl;

if (ahi_ctrl = IAHI->AHI_AllocAudio(
   AHIA_AudioID, AHI_DEFAULT_ID,
   AHIA_MixFreq, AHI_DEFAULT_FREQ,
   AHIA_Channels, NUMBER_OF_CHANNELS, // the desired number of channels
   AHIA_Sounds, NUMBER_OF_SOUNDS, // maximum number of sounds used
TAG_DONE))
{
   IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, TRUE, TAG_DONE);
   int i;
   for (i = 0; i < NUMBER_OF_SOUNDS; i++)
   {
      // These variables need to be initialized
      uint32 type;
      APTR samplearray;
      uint32 length;
      struct AHISampleInfo sample;

      sample.ahisi_Type = type; 
      // where type is the type of sample, for example AHIST_M8S for 8-bit mono sound
      sample.ahisi_Address = samplearray; 
      // where samplearray must point to sample data
      sample.ahisi_Length = length / IAHI->AHI_SampleFrameSize(type);
      if (IAHI->AHI_LoadSound(i, AHIST_SAMPLE, &sample, ahi_ctrl) != 0)
      {
         // error while loading sound, cleanup
      }
   }
   // everything OK, play the sounds
   // ...
   // then unload sounds and free the audio
   for (i = 0; i < NUMBER_OF_SOUNDS; i++)
      IAHI->AHI_UnloadSound(i, ahi_ctrl);
   IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, FALSE, TAG_DONE);
   IAHI->AHI_FreeAudio(ahi_ctrl);
}

Use the functions AHI_SetVol() to set the volume, AHI_SetFreq() to set the frequency and AHI_SetSound() to actually play the sounds.
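For example, a sketch that triggers sound 0 on channel 0 (using the IAHI interface and ahi_ctrl from the code above, and assuming AHIC_Play has already been set to TRUE):

IAHI->AHI_SetFreq(0, 8000, ahi_ctrl, AHISF_IMM);           /* channel 0 at 8000 Hz                   */
IAHI->AHI_SetVol(0, 0x10000, 0x8000, ahi_ctrl, AHISF_IMM); /* full volume, centred                   */
IAHI->AHI_SetSound(0, 0, 0, 0, ahi_ctrl, AHISF_IMM);       /* sound 0; offset/length 0 = whole sound */

The larger example that follows (ShellPlayer) uses the device interface instead: two CMD_WRITE requests are double-buffered and the module data is rendered with ptplay.library.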

#include <devices/ahi.h>
#include <dos/dostags.h>
#include <proto/dos.h>
#include <proto/exec.h>
#include <proto/ptplay.h>

struct UserArgs
{
	STRPTR file;
	LONG   *freq;
};

CONST TEXT Version[] = "$VER: ShellPlayer 1.0 (4.4.06)";

STATIC struct Library *PtPlayBase;
STATIC struct Task *maintask;
STATIC APTR modptr;
STATIC LONG frequency;
STATIC VOLATILE int player_done = 0;

STATIC VOID AbortAHI(struct MsgPort *port, struct IORequest *r1, struct IORequest *r2)
{
	if (!CheckIO(r1))
	{
		AbortIO(r1);
		WaitIO(r1);
	}

	if (!CheckIO(r2))
	{
		AbortIO(r2);
		WaitIO(r2);
	}

	GetMsg(port);
	GetMsg(port);
}

STATIC VOID StartAHI(struct AHIRequest *r1, struct AHIRequest *r2, WORD *buf1, WORD *buf2)
{
	PtRender(modptr, (BYTE *)(buf1), (BYTE *)(buf1+1), 4, frequency, 1, 16, 2);
	PtRender(modptr, (BYTE *)(buf2), (BYTE *)(buf2+1), 4, frequency, 1, 16, 2);

	r1->ahir_Std.io_Command = CMD_WRITE;
	r1->ahir_Std.io_Offset  = 0;
	r1->ahir_Std.io_Data    = buf1;
	r1->ahir_Std.io_Length  = frequency*2*2;
	r2->ahir_Std.io_Command = CMD_WRITE;
	r2->ahir_Std.io_Offset  = 0;
	r2->ahir_Std.io_Data    = buf2;
	r2->ahir_Std.io_Length  = frequency*2*2;

	r1->ahir_Link = NULL;
	r2->ahir_Link = r1;

	SendIO((struct IORequest *)r1);
	SendIO((struct IORequest *)r2);
}

STATIC VOID PlayerRoutine(void)
{
	struct AHIRequest req1, req2;
	struct MsgPort *port;
	WORD *buf1, *buf2;

	buf1 = AllocVec(frequency*2*2, MEMF_ANY);
	buf2 = AllocVec(frequency*2*2, MEMF_ANY);

	if (buf1 && buf2)
	{
		port = CreateMsgPort();

		if (port)
		{
			req1.ahir_Std.io_Message.mn_Node.ln_Pri = 0;
			req1.ahir_Std.io_Message.mn_ReplyPort = port;
			req1.ahir_Std.io_Message.mn_Length = sizeof(req1);
			req1.ahir_Version = 2;

			if (OpenDevice("ahi.device", 0, (struct IORequest *)&req1, 0) == 0)
			{
				req1.ahir_Type           = AHIST_S16S;
				req1.ahir_Frequency      = frequency;
				req1.ahir_Volume         = 0x10000;
				req1.ahir_Position       = 0x8000;

				CopyMem(&req1, &req2, sizeof(struct AHIRequest));

				StartAHI(&req1, &req2, buf1, buf2);

				for (;;)
				{
					struct AHIRequest *io;
					ULONG sigs;

					sigs = Wait(SIGBREAKF_CTRL_C | 1 << port->mp_SigBit);

					if (sigs & SIGBREAKF_CTRL_C)
						break;

					if ((io = (struct AHIRequest *)GetMsg(port)))
					{
						if (GetMsg(port))
						{
							// Both IO request finished, restart

							StartAHI(&req1, &req2, buf1, buf2);
						}
						else
						{
							APTR link;
							WORD *buf;

							if (io == &req1)
							{
								link = &req2;
								buf = buf1;
							}
							else
							{
								link = &req1;
								buf = buf2;
							}

							PtRender(modptr, (BYTE *)buf, (BYTE *)(buf+1), 4, frequency, 1, 16, 2);

							io->ahir_Std.io_Command = CMD_WRITE;
							io->ahir_Std.io_Offset  = 0;
							io->ahir_Std.io_Length  = frequency*2*2;
							io->ahir_Std.io_Data    = buf;
							io->ahir_Link = link;

							SendIO((struct IORequest *)io);
						}
					}
				}

				AbortAHI(port, (struct IORequest *)&req1, (struct IORequest *)&req2);
				CloseDevice((struct IORequest *)&req1);
			}

			DeleteMsgPort(port);
		}
	}

	FreeVec(buf1);
	FreeVec(buf2);

	Forbid();
	player_done = 1;
	Signal(maintask, SIGBREAKF_CTRL_C);
}

int main(void)
{
	struct RDArgs *args;
	struct UserArgs params;

	int rc = RETURN_FAIL;

	maintask = FindTask(NULL);

	args = ReadArgs("FILE/A,FREQ/K/N", (IPTR *)&params, NULL);

	if (args)
	{
		PtPlayBase = OpenLibrary("ptplay.library", 0);

		if (PtPlayBase)
		{
			BPTR fh;

			if (params.freq)
			{
				frequency = *params.freq;
			}

			if (frequency < 4000 || frequency > 96000)
				frequency = 48000;

			fh = Open(params.file, MODE_OLDFILE);

			if (fh)
			{
				struct FileInfoBlock fib;
				APTR buf;

				ExamineFH(fh, &fib);

				buf = AllocVec(fib.fib_Size, MEMF_ANY);

				if (buf)
				{
					Read(fh, buf, fib.fib_Size);
				}

				Close(fh);

				if (buf)
				{
					ULONG type;

					type = PtTest(params.file, buf, 1200);

					modptr = PtInit(buf, fib.fib_Size, frequency, type);

					if (modptr)
					{
						struct Process *player;

						player = CreateNewProcTags(
							NP_Entry, &PlayerRoutine,
							NP_Priority, 1,
							NP_Name, "Player Process",
							#ifdef __MORPHOS__
							NP_CodeType, CODETYPE_PPC,
							#endif
							TAG_DONE);

						if (player)
						{
							rc = RETURN_OK;
							Printf("Now playing \033[1m%s\033[22m at %ld Hz... Press CTRL-C to abort.\n", params.file, frequency);

							do
							{
								Wait(SIGBREAKF_CTRL_C);

								Forbid();
								if (!player_done)
								{
									Signal(&player->pr_Task, SIGBREAKF_CTRL_C);
								}
								Permit();
							}
							while (!player_done);
						}

						PtCleanup(modptr);
					}
					else
					{
						PutStr("Unknown file!\n");
					}
				}
				else
				{
					PutStr("Not enough memory!\n");
				}
			}
			else
			{
				PutStr("Could not open file!\n");
			}

			CloseLibrary(PtPlayBase);
		}

		FreeArgs(args);
	}

	if (rc == RETURN_FAIL)
		PrintFault(IoErr(), NULL);
	return rc;
}

Other Examples


Master volume utility

Anyone can make such a relatively simple utility: it is a matter of calling AHI_SetEffect() with a master volume structure. You can easily make a window with a slider and call that function.
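A hedged sketch of that call, assuming the AHIEffMasterVolume structure and AHIET_MASTERVOLUME definition from devices/ahi.h and an already-allocated audioctrl (0x10000 is 100%; to remove the effect again, set ahie_Effect to AHIET_MASTERVOLUME | AHIET_CANCEL):

#include <devices/ahi.h>
#include <proto/ahi.h>

void set_master_volume(struct AHIAudioCtrl *audioctrl, Fixed volume)
{
    struct AHIEffMasterVolume vol;

    vol.ahie_Effect   = AHIET_MASTERVOLUME;
    vol.ahiemv_Volume = volume;               /* e.g. 0x8000 for 50% */

    AHI_SetEffect(&vol, audioctrl);
}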

You write to the AHI device, and AHI will write to a sound card, the native hardware, or even to a file. These options are user-configurable. AHI also performs the software mixing so that more than one sound can be played simultaneously.

AHI provides four 'units' for audio. This makes it possible to have one program play on the native hardware and another play on a sound card, by attaching the appropriate AHI driver to a unit number. For the software developer, AHI provides two ways to play audio. One is the AUDIO: DOS device: AHI can create a volume called AUDIO: that works like an AmigaDOS volume; you can read and write data directly to it and it plays through the speakers. This is the easiest way to write PCM, but it is not the best.

First of all, if a user takes the AUDIO: entry out of the mountlist your program will not work, and you get bombarded with support questions like 'I've got AHI, why doesn't it work?'. The better option is to send IORequests to AHI. This lets you control volume and balance while the program runs (with AUDIO: you set these when you open the file and cannot change them without closing and re-opening AUDIO:), and you can use a neat trick called double buffering to improve efficiency. Double buffering lets you fill one audio buffer while another one is playing; this kind of asynchronous operation can prevent 'choppy' audio on slower systems.

We initialise AHI and then prepare and send an AHI request to ahi.device.

It is very important to calculate the number of bytes you want AHI to read from the buffer; you can cause a nasty crash indeed if it is incorrect! To do this, multiply the PCM count by the number of channels by the number of AHI buffers.

A quick note about volume and position: AHI uses a fairly arcane datatype called Fixed. A Fixed number consists of 32 bits: a sign bit, a 15-bit integer part, and a 16-bit fractional part. When I construct the AHI request, I multiply this number by 0x00010000 to convert it to the fixed value. If I use this code as part of a DOS background process, I can change the volume and balance on the fly so the next sample that's queued will be played louder or quieter. It's also possible to interrupt AHI so the change takes effect immediately, but I won't go into that.
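For example, converting ordinary values to Fixed:

#include <devices/ahi.h>                      /* for the Fixed typedef */

Fixed volume = (Fixed)(0.75 * 0x10000);       /* 75% volume == 0xC000  */
Fixed pan    = 0x8000;                        /* 0x0 = left, 0x8000 = centre, 0x10000 = right */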

Once the request is sent, we put the requisite bits in to check for CTRL-C and any AHI interrupt messages. Then it's time to swap buffers around.

/*
 * Copyright (C) 2005 Mark Olsen
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
 */

#include <exec/exec.h>
#include <devices/ahi.h>
#include <proto/exec.h>
#define USE_INLINE_STDARG
#include <proto/ahi.h>
#include <utility/hooks.h>

#include "../game/q_shared.h"
#include "../client/snd_local.h"

struct AHIdata *ad;

struct AHIChannelInfo
{
	struct AHIEffChannelInfo aeci;
	ULONG offset;
};

struct AHIdata
{
	struct MsgPort *msgport;
	struct AHIRequest *ahireq;
	int ahiopen;
	struct AHIAudioCtrl *audioctrl;
	void *samplebuffer;
	struct Hook EffectHook;
	struct AHIChannelInfo aci;
	unsigned int readpos;
};

#if !defined(__AROS__)
ULONG EffectFunc()
{
	struct Hook *hook = (struct Hook *)REG_A0;
	struct AHIEffChannelInfo *aeci = (struct AHIEffChannelInfo *)REG_A1;

	struct AHIdata *ad;

	ad = hook->h_Data;

	ad->readpos = aeci->ahieci_Offset[0];

	return 0;
}

static struct EmulLibEntry EffectFunc_Gate =
{
	TRAP_LIB, 0, (void (*)(void))EffectFunc
};
#else
AROS_UFH3(ULONG, EffectFunc,
          AROS_UFHA(struct Hook *, hook, A0),
          AROS_UFHA(struct AHIAudioCtrl *, aac, A2),
          AROS_UFHA(struct AHIEffChannelInfo *, aeci, A1)
         )
{
    AROS_USERFUNC_INIT
    
	struct AHIdata *ad;

	ad = hook->h_Data;

	ad->readpos = aeci->ahieci_Offset[0];

	return 0;

    AROS_USERFUNC_EXIT
}
#endif

qboolean SNDDMA_Init(void)
{
	ULONG channels;
	ULONG speed;
	ULONG bits;

	ULONG r;

	struct Library *AHIBase;

	struct AHISampleInfo sample;

	cvar_t *sndbits;
	cvar_t *sndspeed;
	cvar_t *sndchannels;

	char modename[64];

	if (ad)
		return 1;

	sndbits = Cvar_Get("sndbits", "16", CVAR_ARCHIVE);
	sndspeed = Cvar_Get("sndspeed", "0", CVAR_ARCHIVE);
	sndchannels = Cvar_Get("sndchannels", "2", CVAR_ARCHIVE);

	speed = sndspeed->integer;

	if (speed == 0)
		speed = 22050;

	ad = AllocVec(sizeof(*ad), MEMF_ANY);
	if (ad)
	{
		ad->msgport = CreateMsgPort();
		if (ad->msgport)
		{
			ad->ahireq = (struct AHIRequest *)CreateIORequest(ad->msgport, sizeof(struct AHIRequest));
			if (ad->ahireq)
			{
				ad->ahiopen = !OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)ad->ahireq, 0);
				if (ad->ahiopen)
				{
					AHIBase = (struct Library *)ad->ahireq->ahir_Std.io_Device;

					ad->audioctrl = AHI_AllocAudio(AHIA_AudioID, AHI_DEFAULT_ID,
					                               AHIA_MixFreq, speed,
					                               AHIA_Channels, 1,
					                               AHIA_Sounds, 1,
					                               TAG_END);

					if (ad->audioctrl)
					{
						AHI_GetAudioAttrs(AHI_INVALID_ID, ad->audioctrl,
						                  AHIDB_BufferLen, sizeof(modename),
						                  AHIDB_Name, (ULONG)modename,
						                  AHIDB_MaxChannels, (ULONG)&channels,
						                  AHIDB_Bits, (ULONG)&bits,
						                  TAG_END);

						AHI_ControlAudio(ad->audioctrl,
						                 AHIC_MixFreq_Query, (ULONG)&speed,
						                 TAG_END);

						if (bits == 8 || bits == 16)
						{
							if (channels > 2)
								channels = 2;

							dma.speed = speed;
							dma.samplebits = bits;
							dma.channels = channels;
#if !defined(__AROS__)
							dma.samples = 2048*(speed/11025);
#else
							dma.samples = 16384*(speed/11025);
#endif
							dma.submission_chunk = 1;

#if !defined(__AROS__)
							ad->samplebuffer = AllocVec(2048*(speed/11025)*(bits/8)*channels, MEMF_ANY);
#else
							ad->samplebuffer = AllocVec(16384*(speed/11025)*(bits/8)*channels, MEMF_ANY);
#endif
							if (ad->samplebuffer)
							{
								dma.buffer = ad->samplebuffer;

								if (channels == 1)
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_M8S;
									else
										sample.ahisi_Type = AHIST_M16S;
								}
								else
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_S8S;
									else
										sample.ahisi_Type = AHIST_S16S;
								}

								sample.ahisi_Address = ad->samplebuffer;
#if !defined(__AROS__)
								sample.ahisi_Length = (2048*(speed/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);
#else
								sample.ahisi_Length = (16384*(speed/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);								
#endif

								r = AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, ad->audioctrl);
								if (r == 0)
								{
									r = AHI_ControlAudio(ad->audioctrl,
									                     AHIC_Play, TRUE,
									                     TAG_END);

									if (r == 0)
									{
										AHI_Play(ad->audioctrl,
										         AHIP_BeginChannel, 0,
										         AHIP_Freq, speed,
										         AHIP_Vol, 0x10000,
										         AHIP_Pan, 0x8000,
										         AHIP_Sound, 0,
										         AHIP_EndChannel, NULL,
										         TAG_END);

										ad->aci.aeci.ahie_Effect = AHIET_CHANNELINFO;
										ad->aci.aeci.ahieci_Func = &ad->EffectHook;
										ad->aci.aeci.ahieci_Channels = 1;

#if !defined(__AROS__)
										ad->EffectHook.h_Entry = (void *)&EffectFunc_Gate;
#else
										ad->EffectHook.h_Entry = (IPTR (*)())&EffectFunc;
#endif
										ad->EffectHook.h_Data = ad;
										AHI_SetEffect(&ad->aci, ad->audioctrl);

										Com_Printf("Using AHI mode \"%s\" for audio output\n", modename);
										Com_Printf("Channels: %d bits: %d frequency: %d\n", channels, bits, speed);

										return 1;
									}
								}
							}
							FreeVec(ad->samplebuffer);
						}
						AHI_FreeAudio(ad->audioctrl);
					}
					else
						Com_Printf("Failed to allocate AHI audio\n");

					CloseDevice((struct IORequest *)ad->ahireq);
				}
				DeleteIORequest((struct IORequest *)ad->ahireq);
			}
			DeleteMsgPort(ad->msgport);
		}
		FreeVec(ad);
	}

	return 0;
}

int SNDDMA_GetDMAPos(void)
{
	return ad->readpos*dma.channels;
}

void SNDDMA_Shutdown(void)
{
	struct Library *AHIBase;

	if (ad == 0)
		return;

	AHIBase = (struct Library *)ad->ahireq->ahir_Std.io_Device;

	ad->aci.aeci.ahie_Effect = AHIET_CHANNELINFO|AHIET_CANCEL;
	AHI_SetEffect(&ad->aci.aeci, ad->audioctrl);
	AHI_ControlAudio(ad->audioctrl,
	                 AHIC_Play, FALSE,
	                 TAG_END);

	AHI_FreeAudio(ad->audioctrl);
	FreeVec(ad->samplebuffer);
	CloseDevice((struct IORequest *)ad->ahireq);
	DeleteIORequest((struct IORequest *)ad->ahireq);
	DeleteMsgPort(ad->msgport);
	FreeVec(ad);

	ad = 0;
}

void SNDDMA_Submit(void)
{
}

void SNDDMA_BeginPainting (void)
{
}
/*
Copyright (C) 2006-2007 Mark Olsen

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either Version 2
of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
*/

#include <exec/exec.h>
#include <devices/ahi.h>
#include <proto/exec.h>
#define USE_INLINE_STDARG
#include <proto/ahi.h>

#include "quakedef.h"
#include "sound.h"

struct AHIChannelInfo
{
	struct AHIEffChannelInfo aeci;
	ULONG offset;
};

struct ahi_private
{
	struct MsgPort *msgport;
	struct AHIRequest *ahireq;
	struct AHIAudioCtrl *audioctrl;
	void *samplebuffer;
	struct Hook EffectHook;
	struct AHIChannelInfo aci;
	unsigned int readpos;
};

ULONG EffectFunc()
{
	struct Hook *hook = (struct Hook *)REG_A0;
	struct AHIEffChannelInfo *aeci = (struct AHIEffChannelInfo *)REG_A1;

	struct ahi_private *p;

	p = hook->h_Data;

	p->readpos = aeci->ahieci_Offset[0];

	return 0;
}

static struct EmulLibEntry EffectFunc_Gate =
{
	TRAP_LIB, 0, (void (*)(void))EffectFunc
};

void ahi_shutdown(struct SoundCard *sc)
{
	struct ahi_private *p = sc->driverprivate;

	struct Library *AHIBase;

	AHIBase = (struct Library *)p->ahireq->ahir_Std.io_Device;

	p->aci.aeci.ahie_Effect = AHIET_CHANNELINFO|AHIET_CANCEL;
	AHI_SetEffect(&p->aci.aeci, p->audioctrl);
	AHI_ControlAudio(p->audioctrl,
	                 AHIC_Play, FALSE,
	                 TAG_END);

	AHI_FreeAudio(p->audioctrl);

	CloseDevice((struct IORequest *)p->ahireq);
	DeleteIORequest((struct IORequest *)p->ahireq);

	DeleteMsgPort(p->msgport);

	FreeVec(p->samplebuffer);

	FreeVec(p);
}

int ahi_getdmapos(struct SoundCard *sc)
{
	struct ahi_private *p = sc->driverprivate;

	sc->samplepos = p->readpos*sc->channels;

	return sc->samplepos;
}

void ahi_submit(struct SoundCard *sc, unsigned int count)
{
}

qboolean ahi_init(struct SoundCard *sc, int rate, int channels, int bits)
{
	struct ahi_private *p;
	ULONG r;

	char name[64];

	struct Library *AHIBase;

	struct AHISampleInfo sample;

	p = AllocVec(sizeof(*p), MEMF_ANY);
	if (p)
	{
		p->msgport = CreateMsgPort();
		if (p->msgport)
		{
			p->ahireq = (struct AHIRequest *)CreateIORequest(p->msgport, sizeof(struct AHIRequest));
			if (p->ahireq)
			{
				r = !OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)p->ahireq, 0);
				if (r)
				{
					AHIBase = (struct Library *)p->ahireq->ahir_Std.io_Device;

					p->audioctrl = AHI_AllocAudio(AHIA_AudioID, AHI_DEFAULT_ID,
					                               AHIA_MixFreq, rate,
					                               AHIA_Channels, 1,
					                               AHIA_Sounds, 1,
					                               TAG_END);

					if (p->audioctrl)
					{
						AHI_GetAudioAttrs(AHI_INVALID_ID, p->audioctrl,
						                  AHIDB_BufferLen, sizeof(name),
						                  AHIDB_Name, (ULONG)name,
						                  AHIDB_MaxChannels, (ULONG)&channels,
						                  AHIDB_Bits, (ULONG)&bits,
						                  TAG_END);

						AHI_ControlAudio(p->audioctrl,
						                 AHIC_MixFreq_Query, (ULONG)&rate,
						                 TAG_END);

						if (bits == 8 || bits == 16)
						{
							if (channels > 2)
								channels = 2;

							sc->speed = rate;
							sc->samplebits = bits;
							sc->channels = channels;
							sc->samples = 16384*(rate/11025);

							p->samplebuffer = AllocVec(16384*(rate/11025)*(bits/8)*channels, MEMF_CLEAR);
							if (p->samplebuffer)
							{
								sc->buffer = p->samplebuffer;

								if (channels == 1)
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_M8S;
									else
										sample.ahisi_Type = AHIST_M16S;
								}
								else
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_S8S;
									else
										sample.ahisi_Type = AHIST_S16S;
								}

								sample.ahisi_Address = p->samplebuffer;
								sample.ahisi_Length = (16384*(rate/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);

								r = AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, p->audioctrl);
								if (r == 0)
								{
									r = AHI_ControlAudio(p->audioctrl,
									                     AHIC_Play, TRUE,
									                     TAG_END);

									if (r == 0)
									{
										AHI_Play(p->audioctrl,
										         AHIP_BeginChannel, 0,
										         AHIP_Freq, rate,
										         AHIP_Vol, 0x10000,
										         AHIP_Pan, 0x8000,
										         AHIP_Sound, 0,
										         AHIP_EndChannel, NULL,
										         TAG_END);

										p->aci.aeci.ahie_Effect = AHIET_CHANNELINFO;
										p->aci.aeci.ahieci_Func = &p->EffectHook;
										p->aci.aeci.ahieci_Channels = 1;

										p->EffectHook.h_Entry = (void *)&EffectFunc_Gate;
										p->EffectHook.h_Data = p;

										AHI_SetEffect(&p->aci, p->audioctrl);

										Com_Printf("Using AHI mode \"%s\" for audio output\n", name);
										Com_Printf("Channels: %d bits: %d frequency: %d\n", channels, bits, rate);

										sc->driverprivate = p;

										sc->GetDMAPos = ahi_getdmapos;
										sc->Submit = ahi_submit;
										sc->Shutdown = ahi_shutdown;

										return 1;
									}
								}
							}
							FreeVec(p->samplebuffer);
						}
						AHI_FreeAudio(p->audioctrl);
					}
					else
						Com_Printf("Failed to allocate AHI audio\n");

					CloseDevice((struct IORequest *)p->ahireq);
				}
				DeleteIORequest((struct IORequest *)p->ahireq);
			}
			DeleteMsgPort(p->msgport);
		}
		FreeVec(p);
	}

	return 0;
}

SoundInitFunc AHI_Init = ahi_init;

Hooks


This is an old idea which is best avoided if possible. The hook function should be used to play/control the sample(s); it is called at the frequency it was initialised with (100 Hz in your case).

So in your 'normal' code you would flip a switch somewhere, telling the hookfunction to start playing a sample (or do with it whatever you want).

In the hook function you then start playing the sample and/or apply effects with the AHI control functions (and others).

One such example could be a mod player: the module (.mod file format) data is processed for each channel, effects are applied, and so on.

In your case it would be a bit simpler than a mod player: you want to start playing a note and stop it at will, for instance when a counter reaches a certain value.

The (probable) reason your number is not printing is that this routine is called a lot of times every second.

The gist is that you have to find a mechanism (whichever suits your purpose best) that uses your mouse clicks (or key presses) to 'feed' the player (the hook function), and something that 'tells' the player to do something else with the playing sample (stop it, apply an effect, etc.).

You can use the h_Data property of the hook to pass a structure to your 'replay' routine, so that you can, for example, tell the player that a certain sample has started playing. The player can then decide (if a counter reaches a value, for example) to actually stop the sample and set/change the status in that structure, so that the main program knows the sample can be 'played'/triggered again.

#include "backends/platform/amigaos3/amigaos3.h"
#include "backends/mixer/amigaos3/amigaos3-mixer.h"
#include "common/debug.h"
#include "common/system.h"
#include "common/config-manager.h"
#include "common/textconsole.h"

// Amiga includes
#include <clib/exec_protos.h>
#include "ahi-player-hook.h"

#define DEFAULT_MIX_FREQUENCY 11025

AmigaOS3MixerManager* g_mixerManager;

static void audioPlayerCallback() {
     g_mixerManager->callbackHandler();
}

AmigaOS3MixerManager::AmigaOS3MixerManager()
	:
	_mixer(0),
	_audioSuspended(false) {

    g_mixerManager = this;
}

AmigaOS3MixerManager::~AmigaOS3MixerManager() {
	if (_mixer) {
        _mixer->setReady(false);

        if (audioCtrl) {
            debug(1, "deleting AHI_ControlAudio");
    
            // Stop sounds.
            AHI_ControlAudio(audioCtrl, AHIC_Play, FALSE, TAG_DONE);
    
            if (_mixer) {
                _mixer->setReady(false);
            }
    
            AHI_UnloadSound(0, audioCtrl);
            AHI_FreeAudio(audioCtrl);
            audioCtrl = NULL;
        }
    
        if (audioRequest) {
            debug(1, "deleting AHIDevice");
            CloseDevice((struct IORequest*)audioRequest);
            DeleteIORequest((struct IORequest*)audioRequest);
            audioRequest = NULL;
    
            DeleteMsgPort(audioPort);
            audioPort = NULL;
            AHIBase = NULL;
        }
    
        if (sample.ahisi_Address) {
            debug(1, "deleting soundBuffer");
            FreeVec(sample.ahisi_Address);
            sample.ahisi_Address = NULL;
        }
    
    	delete _mixer;
    }
}

void AmigaOS3MixerManager::init() {
    
    audioPort = (struct MsgPort*)CreateMsgPort();
    if (!audioPort) {
        error("Could not create a Message Port for AHI");
    }

    audioRequest = (struct AHIRequest*)CreateIORequest(audioPort, sizeof(struct AHIRequest));
    if (!audioRequest) {
        error("Could not create an IO Request for AHI");
    }

    // Open at least version 4.
    audioRequest->ahir_Version = 4;

    BYTE deviceError = OpenDevice(AHINAME, AHI_NO_UNIT, (struct IORequest*)audioRequest, NULL);
    if (deviceError) {
        error("Unable to open AHI Device: %s version 4", AHINAME);
    }

    // Needed by Audio Control?
    AHIBase = (struct Library *)audioRequest->ahir_Std.io_Device;

    uint32 desiredMixingfrequency = 0;

	// Determine the desired output sampling frequency.
	if (ConfMan.hasKey("output_rate")) {
		desiredMixingfrequency = ConfMan.getInt("output_rate");
    }
    
    if (desiredMixingfrequency == 0) {
		desiredMixingfrequency = DEFAULT_MIX_FREQUENCY;
    }
    
    ULONG audioId = AHI_DEFAULT_ID;
    
    audioCtrl = AHI_AllocAudio(
      AHIA_AudioID, audioId,
      AHIA_MixFreq, desiredMixingfrequency,
      AHIA_Channels, numAudioChannels,
      AHIA_Sounds, 1,
      AHIA_PlayerFunc, createAudioPlayerCallback(audioPlayerCallback),
      AHIA_PlayerFreq, audioCallbackFrequency<<16,
      AHIA_MinPlayerFreq, audioCallbackFrequency<<16,
      AHIA_MaxPlayerFreq, audioCallbackFrequency<<16,
      TAG_DONE);

    if (!audioCtrl) {
        error("Could not create initialize AHI");
    }
    
    // Get obtained mixing frequency.
    ULONG obtainedMixingfrequency = 0;
    AHI_ControlAudio(audioCtrl, AHIC_MixFreq_Query, (Tag)&obtainedMixingfrequency, TAG_DONE);
    debug(5, "Mixing frequency desired = %d Hz", desiredMixingfrequency);
    debug(5, "Mixing frequency obtained = %d Hz", obtainedMixingfrequency);

    // Calculate the sample factor.
    ULONG sampleCount = (ULONG)floor(obtainedMixingfrequency / audioCallbackFrequency);
    debug(5, "Calculated sample rate @ %u times per second  = %u", audioCallbackFrequency, sampleCount);  
    
    // 32 bits (4 bytes) are required per sample for storage (16bit stereo).
    sampleBufferSize = (sampleCount * AHI_SampleFrameSize(AHIST_S16S));

    sample.ahisi_Type = AHIST_S16S;
    sample.ahisi_Address = AllocVec(sampleBufferSize, MEMF_PUBLIC|MEMF_CLEAR);
    sample.ahisi_Length = sampleCount;

    AHI_SetFreq(0, obtainedMixingfrequency, audioCtrl, AHISF_IMM);
    AHI_SetVol(0, 0x10000L, 0x8000L, audioCtrl, AHISF_IMM);
 
    AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, audioCtrl);
    AHI_SetSound(0, 0, 0, 0, audioCtrl, AHISF_IMM);    
        
    // Create the mixer instance and start the sound processing.
    assert(!_mixer);
	_mixer = new Audio::MixerImpl(g_system, obtainedMixingfrequency);
	assert(_mixer);
    _mixer->setReady(true);
        
          
    // Start feeding samples to sound hardware (and start the AHI callback!)
    AHI_ControlAudio(audioCtrl, AHIC_Play, TRUE, TAG_DONE);
}

void AmigaOS3MixerManager::callbackHandler() {
	assert(_mixer);
	
	_mixer->mixCallback((byte*)sample.ahisi_Address, sampleBufferSize);
}

void AmigaOS3MixerManager::suspendAudio() {
	AHI_ControlAudio(audioCtrl, AHIC_Play, FALSE, TAG_DONE);
	
	_audioSuspended = true;
}

int AmigaOS3MixerManager::resumeAudio() {
	if (!_audioSuspended) {
		return -2;
    }
	
    AHI_ControlAudio(audioCtrl, AHIC_Play, TRUE, TAG_DONE);
    
	_audioSuspended = false;
	
	return 0;
}

AmiArcadia also uses AHI and has C source, as does ScummVM AGA from Aminet - all the source code is there. To create the AHI callback hook you will also need to include the SDI header files.

References


Still to be edited and possibly redone...

You may need to supply AHIA_MinPlayerFreq and AHIA_MaxPlayerFreq as well.

AHIA_PlayerFreq (Fixed) - If non-zero, enables timing and specifies how many times per second PlayerFunc will be called. This must be specified if AHIA_PlayerFunc is! Do not use any extreme frequencies. The result of MixFreq/PlayerFreq must fit into a UWORD, i.e. it must be less than or equal to 65535. It is also suggested that you keep the result over 80. For normal use this should not be a problem. Note that the data type is Fixed, not integer: 50 Hz is 50<<16.

Default is a reasonable value. Don't depend on it.

AHIA_MinPlayerFreq (Fixed) - The minimum frequency (AHIA_PlayerFreq) you will use. You MUST supply this if you are using the device's interrupt feature!

AHIA_MaxPlayerFreq (Fixed) - The maximum frequency (AHIA_PlayerFreq) you will use. You MUST supply this if you are using the device's interrupt feature!

I don't see anything in the documentation limiting the high-end frequency, just keep in mind that AHI has to be able to finish the callback function completely in time to be able to make it to the next callback. How big is your callback function?

AHI_GetAudioAttrs() tag lists should be terminated with TAG_DONE.

What is AHIR_DoMixFreq? Remove the AHIR_DoMixFreq tag from the AHI_AllocAudio() call; I don't think it should be there (the AHIR_ tags belong to the audio mode requester, not to AHI_AllocAudio()).

Decode the audio to a sample buffer and feed the buffer to AHI using the normal double-buffering method from a subprocess. Can AHI buffer the sound in a music interrupt, to play it later? How would I do what you suggest? I have never used the library API, sorry; I always use CMD_WRITE to play the sound. It does not work: sooner or later the I/O requests go out of sync due to task switching.

Advice about number of channels set up

CMD_FLUSH
CMD_READ
CMD_RESET
CMD_START
CMD_STOP
CMD_WRITE

CloseDevice
NSCMD_DEVICEQUERY
OpenDevice
ahi.device
if(AHI_GetAudioAttrs(AHI_DEFAULT_ID, NULL, AHIDB_BufferLen, 100, AHIDB_Inputs, &num_inputs, TAG_DONE))
//if(AHI_GetAudioAttrs(AHI_INVALID_ID, Record_AudioCtrl, AHIDB_BufferLen, 100, AHIDB_Inputs, &num_inputs, TAG_DONE))
{
printf("getaudioattrs worked\n");

char input_name[num_inputs][100];

printf("num inputs is %i\navailable inputs:\n",num_inputs);

for(int a=0; a!=num_inputs; a++)
{
//AHI_GetAudioAttrs(AHI_INVALID_ID, Record_AudioCtrl, AHIDB_BufferLen, 100, AHIDB_InputArg, a, AHIDB_Input, input_name[a], TAG_DONE);
AHI_GetAudioAttrs(AHI_DEFAULT_ID, NULL, AHIDB_BufferLen, 100, AHIDB_InputArg, a, AHIDB_Input, input_name[a], TAG_DONE);
printf("%i: %s\n",a,input_name[a]);
}

//AHI_ControlAudio(Record_AudioCtrl, AHIC_Input, &selected_input, TAG_DONE);
//AHI_ControlAudio(Record_AudioCtrl, AHIC_Input, 1, TAG_DONE);
//AHI_ControlAudio(NULL, AHIC_Input, 1, TAG_DONE);
//AHI_ControlAudio(NULL, AHIC_Input, &selected_input, TAG_DONE);

}
//changed second argument from..

AHIDevice=OpenDevice(AHINAME,0,(struct IORequest *)AHIio,NULL);

//to this..

AHIDevice=OpenDevice(AHINAME,AHI_NO_UNIT,(struct IORequest *)AHIio,NULL); 

AHI_AllocAudioA

audioctrl = AHI_AllocAudioA( tags );
struct AHIAudioCtrl *AHI_AllocAudioA( struct TagItem * );
audioctrl = AHI_AllocAudio( tag1, ... );
struct AHIAudioCtrl *AHI_AllocAudio( Tag, ... );

AHI_AllocAudioRequestA

requester = AHI_AllocAudioRequestA( tags );
struct AHIAudioModeRequester *AHI_AllocAudioRequestA(struct TagItem * );
requester = AHI_AllocAudioRequest( tag1, ... );
struct AHIAudioModeRequester *AHI_AllocAudioRequest( Tag, ... );

AHI_AudioRequestA

success = AHI_AudioRequestA( requester, tags );
BOOL AHI_AudioRequestA( struct AHIAudioModeRequester *, struct TagItem * );
result = AHI_AudioRequest( requester, tag1, ... );
BOOL AHI_AudioRequest( struct AHIAudioModeRequester *, Tag, ... );

AHI_BestAudioIDA

ID = AHI_BestAudioIDA( tags );
ULONG AHI_BestAudioIDA( struct TagItem * );
ID = AHI_BestAudioID( tag1, ... );
ULONG AHI_BestAudioID( Tag, ... );

AHI_ControlAudioA

error = AHI_ControlAudioA( audioctrl, tags );
ULONG AHI_ControlAudioA( struct AHIAudioCtrl *, struct TagItem * );
error = AHI_ControlAudio( AudioCtrl, tag1, ...);
ULONG AHI_ControlAudio( struct AHIAudioCtrl *, Tag, ... );

AHI_FreeAudio

AHI_FreeAudio( audioctrl );
void AHI_FreeAudio( struct AHIAudioCtrl * );

AHI_FreeAudioRequest

AHI_FreeAudioRequest( requester );
void AHI_FreeAudioRequest( struct AHIAudioModeRequester * );

AHI_GetAudioAttrsA

success = AHI_GetAudioAttrsA( ID, [audioctrl], tags );
BOOL AHI_GetAudioAttrsA( ULONG, struct AHIAudioCtrl *, struct TagItem * );
success = AHI_GetAudioAttrs( ID, [audioctrl], attr1, &result1, ...);
BOOL AHI_GetAudioAttrs( ULONG, struct AHIAudioCtrl *, Tag, ... );

AHI_LoadSound

error = AHI_LoadSound( sound, type, info, audioctrl );
ULONG AHI_LoadSound( UWORD, ULONG, IPTR, struct AHIAudioCtrl * );

AHI_NextAudioID

next_ID = AHI_NextAudioID( last_ID );
ULONG AHI_NextAudioID( ULONG );

AHI_PlayA

AHI_PlayA( audioctrl, tags );
void AHI_PlayA( struct AHIAudioCtrl *, struct TagItem * );
AHI_Play( AudioCtrl, tag1, ...);
void AHI_Play( struct AHIAudioCtrl *, Tag, ... );

AHI_SampleFrameSize

size = AHI_SampleFrameSize( sampletype );
ULONG AHI_SampleFrameSize( ULONG );

AHI_SetEffect

error = AHI_SetEffect( effect, audioctrl );
ULONG AHI_SetEffect( IPTR, struct AHIAudioCtrl * );

AHI_SetFreq

AHI_SetFreq( channel, freq, audioctrl, flags );
void AHI_SetFreq( UWORD, ULONG, struct AHIAudioCtrl *, ULONG );

AHI_SetSound

AHI_SetSound( channel, sound, offset, length, audioctrl, flags );
void AHI_SetSound( UWORD, UWORD, ULONG, LONG, struct AHIAudioCtrl *, ULONG );

AHI_SetVol

AHI_SetVol( channel, volume, pan, audioctrl, flags );
void AHI_SetVol( UWORD, Fixed, sposition, struct AHIAudioCtrl *, ULONG );

AHI_UnloadSound

AHI_UnloadSound( sound, audioctrl );
void AHI_UnloadSound( UWORD, struct AHIAudioCtrl * );