Re: Android Studio: Package Native Libraries into APK
Posted by bmcclint on Aug 01, 2020; 2:25pm
URL: https://forum.jogamp.org/Android-Studio-Package-Native-Libraries-into-APK-tp4040727p4040761.html
It's been a month since I posted my original dilemma. Since then I have attempted to get other variants of OpenAL for Android to compile and be usable within a Java domain, all ending in the same result: failing with a DLL open error. I was able to get the JOGAMP libraries into a 'lib' folder, but the JOGAMP libraries look for them in 'natives', so no joy.
So, I spent some time researching the "oh, I'll just do it myself" approach. After about 20 hours I ended up with a working surrogate using Android's AudioTrack on its own thread in streaming mode, using the OpenAL specification as inspiration. I will say I was rather impressed. Multi-sample spatial audio with distance fall-off in a VR environment worked really well. The latency is not what the Internet claims (roughly 250ms); the delay between a sample's emission, its visual representation, and actually hearing it is not evident unless I slow the thread to a 10 Hz cycle or lower. It plugged into the interface shared by both the desktop and Android variants. Test users said there was no noticeable latency and were able to identify which moving sources were emitting which sample in the VR environment.
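For anyone heading down the same path, here is a stripped-down sketch of the pattern I mean: a dedicated thread that mixes PCM into a short buffer and writes it to an AudioTrack created in MODE_STREAM. This is not my actual code; names like mixInto, SAMPLE_RATE and FRAMES_PER_WRITE are placeholders for illustration only.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Stripped-down streaming loop: one thread continuously mixes PCM frames
// and feeds them to an AudioTrack in MODE_STREAM. Names are placeholders.
public class StreamingAudioThread extends Thread {

    private static final int SAMPLE_RATE = 44100;       // Hz
    private static final int FRAMES_PER_WRITE = 1024;   // frames mixed per loop pass
    private volatile boolean running = true;

    @Override
    public void run() {
        int minBuf = AudioTrack.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                Math.max(minBuf, FRAMES_PER_WRITE * 4),  // bytes: frames * 2 ch * 2 bytes
                AudioTrack.MODE_STREAM);

        short[] mixBuffer = new short[FRAMES_PER_WRITE * 2]; // interleaved stereo
        track.play();

        while (running) {
            mixInto(mixBuffer);                           // sum all active sources (placeholder)
            track.write(mixBuffer, 0, mixBuffer.length);  // blocking write paces the loop
        }

        track.stop();
        track.release();
    }

    public void shutdown() { running = false; }

    // Placeholder for the actual mixer: sums active, spatialized sources into the buffer.
    private void mixInto(short[] buffer) {
        java.util.Arrays.fill(buffer, (short) 0); // silence in this sketch
    }
}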
Thus I have a working Android 3D audio model (with no external dependency) that, side by side with the desktop variant using JOGAMP OpenAL, sounds and acts virtually identically. Is it a perfect substitute for a JNI implementation... nope! I suspect there are GC concerns and potential latency problems. Does it meet the need, solve an immediate problem and function adequately... absolutely! Funny that I spent less time building from the ground up than trying to get other 'available' solutions to simply link in. Is the new implementation complete... no! It still needs Doppler and other distance effects. The mixer is a simple summing algorithm susceptible to over-gain and may require clamping; I've fed dozens of simultaneous samples to it, looping and single-cycle, and thus far no over-gain concerns. But for a 'listener is here looking this way' and 'source X is here, source Y is here, ...' model, it works really well.
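To make the "simple summing with clamping" point concrete, the per-source mixing step amounts to something like the following. Again, this is a placeholder sketch rather than my actual mixer; the inverse-distance gain is just one possible fall-off model, and the positions and names are illustrative.

// Sketch of a summing mix with distance fall-off and 16-bit clamping.
// Positions, gains and the fall-off formula are illustrative placeholders.
static void mixSource(short[] out, short[] source, float[] srcPos, float[] listenerPos) {
    float dx = srcPos[0] - listenerPos[0];
    float dy = srcPos[1] - listenerPos[1];
    float dz = srcPos[2] - listenerPos[2];
    float distance = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);

    // Inverse-distance fall-off, clamped so very close sources don't blow up the gain.
    float gain = 1.0f / Math.max(1.0f, distance);

    for (int i = 0; i < out.length && i < source.length; i++) {
        int mixed = out[i] + (int) (source[i] * gain);   // simple summing
        // Clamp to 16-bit range to avoid wrap-around on over-gain.
        if (mixed > Short.MAX_VALUE) mixed = Short.MAX_VALUE;
        if (mixed < Short.MIN_VALUE) mixed = Short.MIN_VALUE;
        out[i] = (short) mixed;
    }
}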
I hope someday there is a clean way to get these pre-built natives in and usable, but for now, success.