Media framework changes for Tungsten.
author    Mike J. Chen <mjchen@google.com>
Mon, 15 Aug 2011 20:28:26 +0000 (13:28 -0700)
committer John Grossman <johngro@google.com>
Fri, 26 Aug 2011 22:17:10 +0000 (15:17 -0700)
Squashed merge from master-tungsten of the following changes:

commit 73d09e18c4557e583a1684d44d598a1a02fd0cf2
Author: John Grossman <johngro@google.com>
Date:   Mon Jun 20 13:57:44 2011 -0700

    Remove TungstenMisc and rename LinearTransform

    Change-Id: Ie8aa3e24e09fdbf6ef8996c26deb9c5640e20d1b

commit 3114aabe76ad733b59929d87e49c68229f5ae2e8
Author: John Grossman <johngro@google.com>
Date:   Fri Jun 3 10:47:16 2011 -0700

    Name changes and spelling fixes.

    + Replace the term TungstenTime with the Eugene-approved term CommonTime.
    + Fix a spelling error I noticed in a comment.

    Change-Id: I8c10d618206826d16055f78c7724e24443bb03fd

commit cbf2903ab6893b6e662514e2f6d670e268a419df
Author: John Grossman <johngro@google.com>
Date:   Fri Apr 15 09:27:54 2011 -0700

    Migrate Tungsten code from the HC-Tungsten to the Master-Tungsten branch.

    Change-Id: I95372d913a0761d90168edb4016f5ece0ea74502

commit bc7c46aa629f9883e959ef23de8da297f9eb508b
Author: Jason Simmons <jsimmons@google.com>
Date:   Mon Jun 20 13:59:17 2011 -0700

    Create a separate class for timed AudioTracks
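
    The client-facing API of that class appears in the AudioTrack.h hunk
    further down (allocateTimedBuffer, queueTimedBuffer, setMediaTimeTransform).
    A minimal usage sketch, assuming the track has already been configured
    through the normal AudioTrack::set() path and that pcmData, pcmBytes and
    mediaPTS are caller-supplied:

        // Sketch only: header paths and the pcmData/pcmBytes/mediaPTS inputs
        // are illustrative assumptions, not part of this patch.
        #include <string.h>
        #include <media/AudioTrack.h>
        #include <binder/IMemory.h>

        using namespace android;

        status_t queueOneTimedBuffer(TimedAudioTrack& track,
                                     const void* pcmData, size_t pcmBytes,
                                     int64_t mediaPTS) {
            // Obtain a buffer from AudioFlinger's shared timed-buffer heap.
            sp<IMemory> buf;
            status_t res = track.allocateTimedBuffer(pcmBytes, &buf);
            if (res != NO_ERROR) return res;

            // Copy the PCM payload into the shared memory region.
            memcpy(buf->pointer(), pcmData, pcmBytes);

            // Hand the buffer back for playback at the given media-time PTS.
            return track.queueTimedBuffer(buf, mediaPTS);
        }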

commit 43be3231034ff8537fdd84422a7954780038671f
Author: John Grossman <johngro@google.com>
Date:   Mon Jun 27 18:59:12 2011 -0700

    Move libaah_rtp over from the vendor directory.

    Also factor PipeEvent out into utils.

    Change-Id: Id3877c66efe22d771cf3ef4877107e431b828e37

commit 17526eb3148c9c3d4365b6d5b47e8dc13bca71b6
Author: John Grossman <johngro@google.com>
Date:   Mon Jun 27 17:06:49 2011 -0700

    Name changes for the TRTP Players: s/tungsten/aah/g

    Change-Id: I55e9ad13003f6aa6a36955b54426a7efbe31ac51

commit 423fc1bfc0fda799c421a650c83c4b9293b1a08c
Author: Jason Simmons <jsimmons@google.com>
Date:   Mon Jun 20 17:56:09 2011 -0700

    More timed AudioFlinger changes requested by code review:
    * change trimTimedBufferQueue to trimTimedBufferQueue_l
    * create one timed audio buffer heap per client process instead of one per track
    * grow the silence buffer on demand
    * some error handling fixes in timed getNextBuffer
    * calculate the next output PTS in all mixer and track hooks

    Change-Id: Ifc51a08b55029b7c48902ab2f22933ad7bafe1ad
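
    The PTS bookkeeping in the last bullet above amounts to advancing a base
    timestamp by the frame offset scaled by the local-clock frequency.  A
    minimal sketch of that arithmetic, assuming localTimeFreq comes from
    LocalClock::getLocalFreq() as in the AudioFlinger hunks below:

        // Sketch only: derive the local-time PTS of the frame at
        // 'frameOffset' within an output buffer whose first frame is
        // rendered at basePTS.
        #include <stdint.h>

        static inline int64_t outputPTSForFrame(int64_t basePTS,
                                                uint32_t frameOffset,
                                                uint64_t localTimeFreq,
                                                uint32_t sampleRate) {
            // Each output frame advances local time by
            // localTimeFreq / sampleRate ticks.
            return basePTS +
                   (int64_t)((frameOffset * localTimeFreq) / sampleRate);
        }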

commit a148e2674b1d3cb73289b82b85c333f0a66824a9
Author: John Grossman <johngro@google.com>
Date:   Mon Jun 20 17:02:24 2011 -0700

    Move the A@H time service into frameworks/base

    Change-Id: I5c570cde70e8931e205516cb33517585804ce841

commit dfa438fa49bdaeeb2ec5fd0d17b30d881608b6b1
Author: John Grossman <johngro@google.com>
Date:   Mon Jun 20 11:55:36 2011 -0700

    Fix the build after Mike's code moving.

    Change-Id: Ia883643ded252168bcc5a70584ab6ce97bb05266

commit 04489474ec8e73efe1bf52918831f41659033162
Author: John Grossman <johngro@google.com>
Date:   Fri Jun 17 14:19:50 2011 -0700

    Refactor the local/common clock services.

    This change is one of a set of 5 changes made to different repositories.  Look
    for this comment in all of them.

    Refactor the local/common clock services in tungsten to match Android best
    practice.  Notable changes include:

    + The kernel no longer knows anything about common time.  Common time has been
      moved completely up into userland.  This affects the accuracy of the timesync
      debugging code, and the netfilter-assisted approach to network-based timesync
      will have to be modified.
    + The timesync driver used by A@H is now just a local time driver.
    + The kernel no longer needs access to the linear transform math code, and it
      has been removed.
    + A new HAL has been introduced to expose the concept of local time to the
      system.
    + A non-slewable stub implementation of the local time HAL based on
      CLOCK_MONOTONIC has been added.
    + The TungstenTime library has been eliminated.  Its functionality has been
      distributed among the common time binder service, the local time HAL, and the
      linear transform utility code.
    + All clients of the old TungstenTime library have been changed to be clients of
      the binder service, the HAL, and the utility code.
    + The reset_tt utilities have been removed; they no longer serve a purpose in the
      system.
    + More progress has been made in eliminating the word "tungsten" from the code.

    Things left to do include:
    + Finish getting rid of tungsten from the time service.
    + Move the time service into the framework; AudioFlinger's new timed mode
      depends on it and the service cannot continue to live in vendor tungsten.

    Change-Id: I999b6cfb4a9d267818a86d747c35eecfc6693101
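
    The linear transform utility mentioned above is what the timed-track code
    uses to map media time onto local or common time.  A minimal sketch of
    building and applying one, with field and method names taken from the
    hunks below and mediaTimeOrigin/commonTimeOrigin as illustrative inputs:

        // Sketch only: a 1:1 mapping from media time to common time anchored
        // at caller-supplied origins.
        #include <stdint.h>
        #include <utils/LinearTransform.h>

        using namespace android;

        static bool mediaToCommon(int64_t mediaTimeOrigin,
                                  int64_t commonTimeOrigin,
                                  int64_t mediaPTS, int64_t* commonPTS) {
            LinearTransform xform;
            xform.a_zero = mediaTimeOrigin;   // anchor point in media time
            xform.b_zero = commonTimeOrigin;  // anchor point in common time
            xform.a_to_b_numer = 1;           // 1:1 playback rate; a denominator
            xform.a_to_b_denom = 1;           // of 0 would represent a pause

            // doForwardTransform() returns false on arithmetic overflow, in
            // which case the timestamp should be treated as unusable.
            return xform.doForwardTransform(mediaPTS, commonPTS);
        }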

commit d48194545eed1116a84d81e2fb53315d2b0701a7
Author: Jason Simmons <jsimmons@google.com>
Date:   Thu Jun 16 14:22:46 2011 -0700

    Change the interface of the AudioMixer and AudioBufferProvider to accept a presentation timestamp

    Change-Id: Ice2df5628d45a7f77100e7008103b35b3d3160a4
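
    The new getNextBuffer() signature (see the AudioBufferProvider.h hunk
    below) threads a presentation timestamp through every provider, with
    kInvalidPTS standing in when the render time is unknown.  A minimal
    sketch of a provider honoring that contract; the NullProvider class is
    purely illustrative:

        // Sketch only: an AudioBufferProvider that never yields data.
        #include <stdint.h>
        #include <utils/Errors.h>
        #include "AudioBufferProvider.h"

        namespace android {

        class NullProvider : public AudioBufferProvider {
          public:
            // pts is the local time at which the first returned sample would
            // be rendered; callers pass kInvalidPTS when that is unknown.
            virtual status_t getNextBuffer(Buffer* buffer, int64_t pts) {
                (void) pts;  // timed tracks use this to align their samples
                buffer->raw = 0;
                buffer->frameCount = 0;
                return NOT_ENOUGH_DATA;
            }
            virtual void releaseBuffer(Buffer* buffer) {
                buffer->raw = 0;
                buffer->frameCount = 0;
            }
        };

        }  // namespace android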

commit 02561419db82b01ffb28df38000716c612988427
Author: John Grossman <johngro@google.com>
Date:   Tue May 10 14:00:21 2011 -0700

    Put in a hack for controlling master volume in the policy manager.
    Fix initial master volume reporting.

    Change-Id: Ia6caf2bbc6083c5f99fab852baa40fff10fc5fc7

commit 549cdc3ba115dc654cdade261fb055c72c6cdb79
Author: John Grossman <johngro@google.com>
Date:   Wed May 4 11:46:17 2011 -0700

    Make certain the logic for computing the output stream mixing point is hardened
    against underflow and overflow when input and output sample rates don't match.

    Change-Id: I5ebea07c9938107b435bec7413418622767e4e16
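
    The hardening amounts to rejecting any local-time delta that is too large
    to promote into the 32.32 fixed-point domain used by the sample-rate
    transform, mirroring the guard in the TimedTrack::getNextBuffer hunk
    below.  A minimal sketch of that check, with effectivePTS and outputPTS
    as illustrative names:

        // Sketch only: returns true if the delta can be shifted into 32.32
        // fixed point without overflowing a signed 64-bit value.
        #include <stdint.h>
        #include <stdlib.h>

        static bool deltaFitsFixedPoint(int64_t effectivePTS,
                                        int64_t outputPTS) {
            // A magnitude of 2^31 ticks or more would overflow when shifted
            // left by 32 bits, so such buffers are dropped up front.
            return llabs(effectivePTS - outputPTS) <
                   (static_cast<int64_t>(1) << 31);
        }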

commit 8043d8ed63f51e76d452d22be7d453d4a7794530
Author: Jason Simmons <jsimmons@google.com>
Date:   Wed Apr 27 18:06:27 2011 -0700

    Add the patch for timed audio support to the mono resampler

    Change-Id: I526f34ae9d1e8e3b0ed2fb05af3d024d5c5fe711

commit 2be89486ef23f0b0b0cc2dc25a4c0ee691043f00
Author: John Grossman <johngro@google.com>
Date:   Wed Apr 27 10:38:57 2011 -0700

    Extend the AudioHWInterface to allow it to specify the initial master volume used by AudioFlinger.

    Change-Id: I8823330801c927494cf7ca31a6b8f9264fbfbb26

commit ff89a4d5e37e6a05a2b03f79ab4e97833dd66393
Author: John Grossman <johngro@google.com>
Date:   Wed Apr 27 09:07:14 2011 -0700

    Fix an issue with inconsistent volume reporting.

    Changed masterVolume() to return the same value as the last call
    to setMasterVolume when the HW layer is implementing master
    volume control.  The masterVolume/setMasterVolume API seems to be
    an idea that was abandoned a long time ago; as of today the
    system only ever sets it to 1.0 at startup and then never changes
    it.  Until we can figure out how the concept of external
    amplifier gain control fits into the Android audio framework,
    Tungsten is exposing this API via a hack-tastic invoke back door
    in the TungstenRXPlayer and needs the getter/setter results to be
    consistent.

    Change-Id: I2ac730fa8fc9ee28c88f1a8e6f2e493eb5b65544

commit 086511b2d19cceb976747ac23e12b73fc7c28bea
Author: Jason Simmons <jsimmons@google.com>
Date:   Mon Apr 25 16:07:19 2011 -0700

    Add handling of timed audio tracks in the generic resampling mixer

    Change-Id: Ic3be1d21b1117f1b233808be543c28a0dcec4792

Change-Id: Id78bba8c002131d8b52b4170259a87fd94e63c73
Signed-off-by: Mike J. Chen <mjchen@google.com>
Signed-off-by: John Grossman <johngro@google.com>
Signed-off-by: Jason Simmons <jsimmons@google.com>
17 files changed:
include/media/AudioTrack.h
include/media/IAudioFlinger.h
include/media/IAudioTrack.h
media/libmedia/AudioTrack.cpp
media/libmedia/IAudioFlinger.cpp
media/libmedia/IAudioTrack.cpp
services/audioflinger/Android.mk
services/audioflinger/AudioBufferProvider.cpp [new file with mode: 0644]
services/audioflinger/AudioBufferProvider.h
services/audioflinger/AudioFlinger.cpp
services/audioflinger/AudioFlinger.h
services/audioflinger/AudioMixer.cpp
services/audioflinger/AudioMixer.h
services/audioflinger/AudioResampler.cpp
services/audioflinger/AudioResampler.h
services/audioflinger/AudioResamplerCubic.cpp
services/audioflinger/AudioResamplerSinc.cpp

index 923518d..99262eb 100644 (file)
@@ -416,7 +416,7 @@ public:
      */
             status_t dump(int fd, const Vector<String16>& args) const;
 
-private:
+protected:
     /* copying audio tracks is not allowed */
                         AudioTrack(const AudioTrack& other);
             AudioTrack& operator = (const AudioTrack& other);
@@ -486,8 +486,27 @@ private:
     int                     mSessionId;
     int                     mAuxEffectId;
     Mutex                   mLock;
+    bool                    mIsTimed;
 };
 
+class TimedAudioTrack : public AudioTrack
+{
+public:
+    TimedAudioTrack();
+
+    /* allocate a shared memory buffer that can be passed to queueTimedBuffer */
+    status_t allocateTimedBuffer(size_t size, sp<IMemory>* buffer);
+
+    /* queue a buffer obtained via allocateTimedBuffer for playback at the
+       given timestamp */
+    status_t queueTimedBuffer(const sp<IMemory>& buffer, int64_t pts);
+
+    /* define a transform between media time and either common time or
+       local time */
+    enum TargetTimeline {LOCAL_TIME, COMMON_TIME};
+    status_t setMediaTimeTransform(const LinearTransform& xform,
+                                   TargetTimeline target);
+};
 
 }; // namespace android
 
index 9e3cb7f..f72452c 100644 (file)
@@ -54,6 +54,7 @@ public:
                                 uint32_t flags,
                                 const sp<IMemory>& sharedBuffer,
                                 int output,
+                                bool isTimed,
                                 int *sessionId,
                                 status_t *status) = 0;
 
index 47d530b..7343a48 100644 (file)
@@ -24,7 +24,7 @@
 #include <utils/Errors.h>
 #include <binder/IInterface.h>
 #include <binder/IMemory.h>
-
+#include <utils/LinearTransform.h>
 
 namespace android {
 
@@ -69,6 +69,23 @@ public:
 
     /* get this tracks control block */
     virtual sp<IMemory> getCblk() const = 0;    
+
+    /* Allocate a shared memory buffer suitable for holding timed audio
+       samples */
+    virtual status_t    allocateTimedBuffer(size_t size,
+                                            sp<IMemory>* buffer) = 0;
+
+    /* Queue a buffer obtained via allocateTimedBuffer for playback at the given
+       timestamp */
+    virtual status_t    queueTimedBuffer(const sp<IMemory>& buffer,
+                                         int64_t pts) = 0;
+
+    /* Define the linear transform that will be applied to the timestamps
+       given to queueTimedBuffer (which are expressed in media time).
+       Target specifies whether this transform converts media time to local time
+       or Tungsten time. The values for target are defined in AudioTrack.h */
+    virtual status_t    setMediaTimeTransform(const LinearTransform& xform,
+                                              int target) = 0;
 };
 
 // ----------------------------------------------------------------------------
index cecedb5..3427548 100644 (file)
@@ -79,7 +79,8 @@ status_t AudioTrack::getMinFrameCount(
 // ---------------------------------------------------------------------------
 
 AudioTrack::AudioTrack()
-    : mStatus(NO_INIT)
+    : mStatus(NO_INIT),
+      mIsTimed(false)
 {
 }
 
@@ -94,7 +95,8 @@ AudioTrack::AudioTrack(
         void* user,
         int notificationFrames,
         int sessionId)
-    : mStatus(NO_INIT)
+    : mStatus(NO_INIT),
+      mIsTimed(false)
 {
     mStatus = set(streamType, sampleRate, format, channelMask,
             frameCount, flags, cbf, user, notificationFrames,
@@ -112,7 +114,8 @@ AudioTrack::AudioTrack(
         void* user,
         int notificationFrames,
         int sessionId)
-    : mStatus(NO_INIT)
+    : mStatus(NO_INIT),
+      mIsTimed(false)
 {
     mStatus = set(streamType, sampleRate, format, channelMask,
             0, flags, cbf, user, notificationFrames,
@@ -520,6 +523,10 @@ status_t AudioTrack::setSampleRate(int rate)
 {
     int afSamplingRate;
 
+    if (mIsTimed) {
+        return INVALID_OPERATION;
+    }
+
     if (AudioSystem::getOutputSamplingRate(&afSamplingRate, mStreamType) != NO_ERROR) {
         return NO_INIT;
     }
@@ -533,6 +540,10 @@ status_t AudioTrack::setSampleRate(int rate)
 
 uint32_t AudioTrack::getSampleRate()
 {
+    if (mIsTimed) {
+        return INVALID_OPERATION;
+    }
+
     AutoMutex lock(mLock);
     return mCblk->sampleRate;
 }
@@ -558,6 +569,10 @@ status_t AudioTrack::setLoop_l(uint32_t loopStart, uint32_t loopEnd, int loopCou
         return NO_ERROR;
     }
 
+    if (mIsTimed) {
+        return INVALID_OPERATION;
+    }
+
     if (loopStart >= loopEnd ||
         loopEnd - loopStart > cblk->frameCount ||
         cblk->server > loopStart) {
@@ -641,6 +656,8 @@ status_t AudioTrack::getPositionUpdatePeriod(uint32_t *updatePeriod)
 
 status_t AudioTrack::setPosition(uint32_t position)
 {
+    if (mIsTimed) return INVALID_OPERATION;
+
     AutoMutex lock(mLock);
     Mutex::Autolock _l(mCblk->lock);
 
@@ -790,6 +807,7 @@ status_t AudioTrack::createTrack_l(
                                                       ((uint16_t)flags) << 16,
                                                       sharedBuffer,
                                                       output,
+                                                      mIsTimed,
                                                       &mSessionId,
                                                       &status);
 
@@ -958,6 +976,7 @@ ssize_t AudioTrack::write(const void* buffer, size_t userSize)
 {
 
     if (mSharedBuffer != 0) return INVALID_OPERATION;
+    if (mIsTimed) return INVALID_OPERATION;
 
     if (ssize_t(userSize) < 0) {
         // sanity-check. user is most-likely passing an error code.
@@ -1020,6 +1039,36 @@ ssize_t AudioTrack::write(const void* buffer, size_t userSize)
 
 // -------------------------------------------------------------------------
 
+TimedAudioTrack::TimedAudioTrack() {
+    mIsTimed = true;
+}
+
+status_t TimedAudioTrack::allocateTimedBuffer(size_t size, sp<IMemory>* buffer)
+{
+    return mAudioTrack->allocateTimedBuffer(size, buffer);
+}
+
+status_t TimedAudioTrack::queueTimedBuffer(const sp<IMemory>& buffer,
+                                           int64_t pts)
+{
+    // restart track if it was disabled by audioflinger due to previous underrun
+    if (mActive && (mCblk->flags & CBLK_DISABLED_MSK)) {
+        mCblk->flags &= ~CBLK_DISABLED_ON;
+        LOGW("queueTimedBuffer() track %p disabled, restarting", this);
+        mAudioTrack->start();
+    }
+
+    return mAudioTrack->queueTimedBuffer(buffer, pts);
+}
+
+status_t TimedAudioTrack::setMediaTimeTransform(const LinearTransform& xform,
+                                                TargetTimeline target)
+{
+    return mAudioTrack->setMediaTimeTransform(xform, target);
+}
+
+// -------------------------------------------------------------------------
+
 bool AudioTrack::processAudioBuffer(const sp<AudioTrackThread>& thread)
 {
     Buffer audioBuffer;
@@ -1434,4 +1483,3 @@ bool audio_track_cblk_t::tryLock()
 // -------------------------------------------------------------------------
 
 }; // namespace android
-
index d58834b..c7b49cd 100644 (file)
@@ -90,6 +90,7 @@ public:
                                 uint32_t flags,
                                 const sp<IMemory>& sharedBuffer,
                                 int output,
+                                bool isTimed,
                                 int *sessionId,
                                 status_t *status)
     {
@@ -105,6 +106,7 @@ public:
         data.writeInt32(flags);
         data.writeStrongBinder(sharedBuffer->asBinder());
         data.writeInt32(output);
+        data.writeInt32(isTimed);
         int lSessionId = 0;
         if (sessionId != NULL) {
             lSessionId = *sessionId;
@@ -684,11 +686,12 @@ status_t BnAudioFlinger::onTransact(
             uint32_t flags = data.readInt32();
             sp<IMemory> buffer = interface_cast<IMemory>(data.readStrongBinder());
             int output = data.readInt32();
+            bool isTimed = data.readInt32();
             int sessionId = data.readInt32();
             status_t status;
             sp<IAudioTrack> track = createTrack(pid,
                     streamType, sampleRate, format,
-                    channelCount, bufferCount, flags, buffer, output, &sessionId, &status);
+                    channelCount, bufferCount, flags, buffer, output, isTimed, &sessionId, &status);
             reply->writeInt32(sessionId);
             reply->writeInt32(status);
             reply->writeStrongBinder(track->asBinder());
index bc8ff34..a496fb8 100644 (file)
@@ -35,7 +35,10 @@ enum {
     FLUSH,
     MUTE,
     PAUSE,
-    ATTACH_AUX_EFFECT
+    ATTACH_AUX_EFFECT,
+    ALLOCATE_TIMED_BUFFER,
+    QUEUE_TIMED_BUFFER,
+    SET_MEDIA_TIME_TRANSFORM,
 };
 
 class BpAudioTrack : public BpInterface<IAudioTrack>
@@ -113,6 +116,52 @@ public:
         }
         return status;
     }
+
+    virtual status_t allocateTimedBuffer(size_t size, sp<IMemory>* buffer) {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioTrack::getInterfaceDescriptor());
+        data.writeInt32(size);
+        status_t status = remote()->transact(ALLOCATE_TIMED_BUFFER,
+                                             data, &reply);
+        if (status == NO_ERROR) {
+            status = reply.readInt32();
+            if (status == NO_ERROR) {
+                *buffer = interface_cast<IMemory>(reply.readStrongBinder());
+            }
+        }
+        return status;
+    }
+
+    virtual status_t queueTimedBuffer(const sp<IMemory>& buffer,
+                                      int64_t pts) {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioTrack::getInterfaceDescriptor());
+        data.writeStrongBinder(buffer->asBinder());
+        data.writeInt64(pts);
+        status_t status = remote()->transact(QUEUE_TIMED_BUFFER,
+                                             data, &reply);
+        if (status == NO_ERROR) {
+            status = reply.readInt32();
+        }
+        return status;
+    }
+
+    virtual status_t setMediaTimeTransform(const LinearTransform& xform,
+                                           int target) {
+        Parcel data, reply;
+        data.writeInterfaceToken(IAudioTrack::getInterfaceDescriptor());
+        data.writeInt64(xform.a_zero);
+        data.writeInt64(xform.b_zero);
+        data.writeInt32(xform.a_to_b_numer);
+        data.writeInt32(xform.a_to_b_denom);
+        data.writeInt32(target);
+        status_t status = remote()->transact(SET_MEDIA_TIME_TRANSFORM,
+                                             data, &reply);
+        if (status == NO_ERROR) {
+            status = reply.readInt32();
+        }
+        return status;
+    }
 };
 
 IMPLEMENT_META_INTERFACE(AudioTrack, "android.media.IAudioTrack");
@@ -158,10 +207,38 @@ status_t BnAudioTrack::onTransact(
             reply->writeInt32(attachAuxEffect(data.readInt32()));
             return NO_ERROR;
         } break;
+        case ALLOCATE_TIMED_BUFFER: {
+            CHECK_INTERFACE(IAudioTrack, data, reply);
+            sp<IMemory> buffer;
+            status_t status = allocateTimedBuffer(data.readInt32(), &buffer);
+            reply->writeInt32(status);
+            if (status == NO_ERROR) {
+                reply->writeStrongBinder(buffer->asBinder());
+            }
+            return NO_ERROR;
+        } break;
+        case QUEUE_TIMED_BUFFER: {
+            CHECK_INTERFACE(IAudioTrack, data, reply);
+            sp<IMemory> buffer = interface_cast<IMemory>(
+                data.readStrongBinder());
+            uint64_t pts = data.readInt64();
+            reply->writeInt32(queueTimedBuffer(buffer, pts));
+            return NO_ERROR;
+        } break;
+        case SET_MEDIA_TIME_TRANSFORM: {
+            CHECK_INTERFACE(IAudioTrack, data, reply);
+            LinearTransform xform;
+            xform.a_zero = data.readInt64();
+            xform.b_zero = data.readInt64();
+            xform.a_to_b_numer = data.readInt32();
+            xform.a_to_b_denom = data.readInt32();
+            int target = data.readInt32();
+            reply->writeInt32(setMediaTimeTransform(xform, target));
+            return NO_ERROR;
+        } break;
         default:
             return BBinder::onTransact(code, data, reply, flags);
     }
 }
 
 }; // namespace android
-
index fa49592..d4eea3d 100644 (file)
@@ -8,7 +8,8 @@ LOCAL_SRC_FILES:=               \
     AudioResampler.cpp.arm      \
     AudioResamplerSinc.cpp.arm  \
     AudioResamplerCubic.cpp.arm \
-    AudioPolicyService.cpp
+    AudioPolicyService.cpp      \
+    AudioBufferProvider.cpp
 
 LOCAL_C_INCLUDES := \
     system/media/audio_effects/include
@@ -22,7 +23,8 @@ LOCAL_SHARED_LIBRARIES := \
     libhardware_legacy \
     libeffects \
     libdl \
-    libpowermanager
+    libpowermanager \
+    libaah_timesrv_client
 
 LOCAL_STATIC_LIBRARIES := \
     libcpustats \
diff --git a/services/audioflinger/AudioBufferProvider.cpp b/services/audioflinger/AudioBufferProvider.cpp
new file mode 100644 (file)
index 0000000..678fd58
--- /dev/null
@@ -0,0 +1,28 @@
+/*
+ * Copyright (C) 2011 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#undef __STRICT_ANSI__
+#define __STDINT_LIMITS
+#define __STDC_LIMIT_MACROS
+#include <stdint.h>
+
+#include "AudioBufferProvider.h"
+
+namespace android {
+
+const int64_t AudioBufferProvider::kInvalidPTS = INT64_MAX;
+
+}; // namespace android
index 81c5c39..62ad6bd 100644 (file)
@@ -38,8 +38,15 @@ public:
     };
 
     virtual ~AudioBufferProvider() {}
-    
-    virtual status_t getNextBuffer(Buffer* buffer) = 0;
+
+    // value representing an invalid presentation timestamp
+    static const int64_t kInvalidPTS;
+
+    // pts is the local time when the next sample yielded by getNextBuffer
+    // will be rendered.
+    // Pass kInvalidPTS if the PTS is unknown or not applicable.
+    virtual status_t getNextBuffer(Buffer* buffer, int64_t pts) = 0;
+
     virtual void releaseBuffer(Buffer* buffer) = 0;
 };
 
index 744fa50..48be620 100644 (file)
@@ -58,6 +58,9 @@
 #include <powermanager/PowerManager.h>
 // #define DEBUG_CPU_USAGE 10  // log statistics every n wall clock seconds
 
+#include <aah_timesrv/cc_helper.h>
+#include <aah_timesrv/local_clock.h>
+
 // ----------------------------------------------------------------------------
 
 
@@ -150,8 +153,9 @@ static const char *audio_interfaces[] = {
 
 AudioFlinger::AudioFlinger()
     : BnAudioFlinger(),
-        mPrimaryHardwareDev(0), mMasterVolume(1.0f), mMasterMute(false), mNextUniqueId(1),
-        mBtNrec(false)
+      mPrimaryHardwareDev(0), mMasterVolume(1.0f), mMasterVolumeSW(1.0f),
+      mMasterVolumeSupportLvl(MVS_NONE), mMasterMute(false), mNextUniqueId(1),
+      mBtNrec(false)
 {
 }
 
@@ -190,6 +194,33 @@ void AudioFlinger::onFirstRef()
         return;
     }
 
+    // Determine the level of master volume support the primary audio HAL has,
+    // and set the initial master volume at the same time.
+    float initialVolume = 1.0;
+    mMasterVolumeSupportLvl = MVS_NONE;
+    if (0 == mPrimaryHardwareDev->init_check(mPrimaryHardwareDev)) {
+        AutoMutex lock(mHardwareLock);
+        audio_hw_device_t *dev = mPrimaryHardwareDev;
+
+        mHardwareStatus = AUDIO_HW_GET_MASTER_VOLUME;
+        if ((NULL != dev->get_master_volume) &&
+            (NO_ERROR == dev->get_master_volume(dev, &initialVolume))) {
+            mMasterVolumeSupportLvl = MVS_FULL;
+        } else {
+            mMasterVolumeSupportLvl = MVS_SETONLY;
+            initialVolume = 1.0;
+        }
+
+        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
+        if ((NULL == dev->set_master_volume) ||
+            (NO_ERROR != dev->set_master_volume(dev, initialVolume))) {
+            mMasterVolumeSupportLvl = MVS_NONE;
+        }
+        mHardwareStatus = AUDIO_HW_INIT;
+    }
+
+    // Set the mode for each audio HAL, and try to set the initial volume (if
+    // supported) for all of the non-primary audio HALs.
     for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
         audio_hw_device_t *dev = mAudioHwDevs[i];
 
@@ -201,11 +232,22 @@ void AudioFlinger::onFirstRef()
             mMode = AUDIO_MODE_NORMAL;
             mHardwareStatus = AUDIO_HW_SET_MODE;
             dev->set_mode(dev, mMode);
-            mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
-            dev->set_master_volume(dev, 1.0f);
-            mHardwareStatus = AUDIO_HW_IDLE;
+
+            if ((dev != mPrimaryHardwareDev) &&
+                (NULL != dev->set_master_volume)) {
+                mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
+                dev->set_master_volume(dev, initialVolume);
+            }
+
+            mHardwareStatus = AUDIO_HW_INIT;
         }
     }
+
+    mMasterVolumeSW = (MVS_NONE == mMasterVolumeSupportLvl)
+                    ? initialVolume
+                    : 1.0;
+    mMasterVolume   = initialVolume;
+    mHardwareStatus = AUDIO_HW_IDLE;
 }
 
 status_t AudioFlinger::initCheck() const
@@ -376,6 +418,7 @@ sp<IAudioTrack> AudioFlinger::createTrack(
         uint32_t flags,
         const sp<IMemory>& sharedBuffer,
         int output,
+        bool isTimed,
         int *sessionId,
         status_t *status)
 {
@@ -439,7 +482,7 @@ sp<IAudioTrack> AudioFlinger::createTrack(
         LOGV("createTrack() lSessionId: %d", lSessionId);
 
         track = thread->createTrack_l(client, streamType, sampleRate, format,
-                channelMask, frameCount, sharedBuffer, lSessionId, &lStatus);
+                channelMask, frameCount, sharedBuffer, lSessionId, isTimed, &lStatus);
 
         // move effect chain to this output thread if an effect on same session was waiting
         // for a track to be created
@@ -532,20 +575,29 @@ status_t AudioFlinger::setMasterVolume(float value)
         return PERMISSION_DENIED;
     }
 
+    float swmv = value;
+
     // when hw supports master volume, don't scale in sw mixer
-    { // scope for the lock
-        AutoMutex lock(mHardwareLock);
-        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
-        if (mPrimaryHardwareDev->set_master_volume(mPrimaryHardwareDev, value) == NO_ERROR) {
-            value = 1.0f;
+    if (MVS_NONE != mMasterVolumeSupportLvl) {
+        for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
+            AutoMutex lock(mHardwareLock);
+            audio_hw_device_t *dev = mAudioHwDevs[i];
+
+            mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
+            if (NULL != dev->set_master_volume) {
+                dev->set_master_volume(dev, value);
+            }
+            mHardwareStatus = AUDIO_HW_IDLE;
         }
-        mHardwareStatus = AUDIO_HW_IDLE;
+
+        swmv = 1.0;
     }
 
     Mutex::Autolock _l(mLock);
-    mMasterVolume = value;
+    mMasterVolume   = value;
+    mMasterVolumeSW = swmv;
     for (uint32_t i = 0; i < mPlaybackThreads.size(); i++)
-       mPlaybackThreads.valueAt(i)->setMasterVolume(value);
+       mPlaybackThreads.valueAt(i)->setMasterVolume(swmv);
 
     return NO_ERROR;
 }
@@ -633,9 +685,27 @@ status_t AudioFlinger::setMasterMute(bool muted)
 
 float AudioFlinger::masterVolume() const
 {
+    if (MVS_FULL == mMasterVolumeSupportLvl) {
+        float ret_val;
+        AutoMutex lock(mHardwareLock);
+
+        mHardwareStatus = AUDIO_HW_GET_MASTER_VOLUME;
+        assert(NULL != mPrimaryHardwareDev);
+        assert(NULL != mPrimaryHardwareDev->get_master_volume);
+
+        mPrimaryHardwareDev->get_master_volume(mPrimaryHardwareDev, &ret_val);
+        mHardwareStatus = AUDIO_HW_IDLE;
+        return ret_val;
+    }
+
     return mMasterVolume;
 }
 
+float AudioFlinger::masterVolumeSW() const
+{
+    return mMasterVolumeSW;
+}
+
 bool AudioFlinger::masterMute() const
 {
     return mMasterMute;
@@ -1356,7 +1426,8 @@ AudioFlinger::PlaybackThread::PlaybackThread(const sp<AudioFlinger>& audioFlinge
 
     readOutputParameters();
 
-    mMasterVolume = mAudioFlinger->masterVolume();
+    mMasterVolume = mAudioFlinger->masterVolumeSW();
+
     mMasterMute = mAudioFlinger->masterMute();
 
     for (int stream = 0; stream < AUDIO_STREAM_CNT; stream++) {
@@ -1466,6 +1537,7 @@ sp<AudioFlinger::PlaybackThread::Track>  AudioFlinger::PlaybackThread::createTra
         int frameCount,
         const sp<IMemory>& sharedBuffer,
         int sessionId,
+        bool isTimed,
         status_t *status)
 {
     sp<Track> track;
@@ -1515,9 +1587,14 @@ sp<AudioFlinger::PlaybackThread::Track>  AudioFlinger::PlaybackThread::createTra
             }
         }
 
-        track = new Track(this, client, streamType, sampleRate, format,
-                channelMask, frameCount, sharedBuffer, sessionId);
-        if (track->getCblk() == NULL || track->name() < 0) {
+        if (!isTimed) {
+            track = new Track(this, client, streamType, sampleRate, format,
+                    channelMask, frameCount, sharedBuffer, sessionId);
+        } else {
+            track = TimedTrack::create(this, client, streamType, sampleRate, format,
+                    channelMask, frameCount, sharedBuffer, sessionId);
+        }
+        if (track == NULL || track->getCblk() == NULL || track->name() < 0) {
             lStatus = NO_MEMORY;
             goto Exit;
         }
@@ -1831,6 +1908,10 @@ bool AudioFlinger::MixerThread::threadLoop()
 
     acquireWakeLock();
 
+    LocalClock lc;
+    uint64_t localTimeFreq;
+    localTimeFreq = lc.getLocalFreq();
+
     while (!exitPending())
     {
 #ifdef DEBUG_CPU_USAGE
@@ -1925,8 +2006,16 @@ bool AudioFlinger::MixerThread::threadLoop()
        }
 
         if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
+            // obtain the presentation timestamp of the next output buffer
+            int64_t pts;
+            status_t status = mOutput->stream->get_next_write_timestamp(
+                mOutput->stream, &pts);
+            if (status != NO_ERROR) {
+                pts = AudioBufferProvider::kInvalidPTS;
+            }
+
             // mix buffers...
-            mAudioMixer->process();
+            mAudioMixer->process(pts);
             sleepTime = 0;
             standbyTime = systemTime() + kStandbyTimeInNsecs;
             //TODO: delay standby when effects have a tail
@@ -2041,7 +2130,7 @@ uint32_t AudioFlinger::MixerThread::prepareTracks_l(const SortedVector< wp<Track
         // The first time a track is added we wait
         // for all its buffers to be filled before processing it
         mAudioMixer->setActiveTrack(track->name());
-        if (cblk->framesReady() && track->isReady() &&
+        if (track->framesReady() && track->isReady() &&
                 !track->isPaused() && !track->isTerminated())
         {
             //LOGV("track %d u=%08x, s=%08x [OK] on thread %p", track->name(), cblk->user, cblk->server, this);
@@ -2687,7 +2776,8 @@ bool AudioFlinger::DirectOutputThread::threadLoop()
             // output audio to hardware
             while (frameCount) {
                 buffer.frameCount = frameCount;
-                activeTrack->getNextBuffer(&buffer);
+                activeTrack->getNextBuffer(&buffer,
+                                           AudioBufferProvider::kInvalidPTS);
                 if (UNLIKELY(buffer.raw == 0)) {
                     memset(curBuf, 0, frameCount * mFrameSize);
                     break;
@@ -2954,7 +3044,7 @@ bool AudioFlinger::DuplicatingThread::threadLoop()
         if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
             // mix buffers...
             if (outputsReady(outputTracks)) {
-                mAudioMixer->process();
+                mAudioMixer->process(AudioBufferProvider::kInvalidPTS);
             } else {
                 memset(mMixBuffer, 0, mixBufferSize);
             }
@@ -3349,7 +3439,8 @@ void AudioFlinger::PlaybackThread::Track::dump(char* buffer, size_t size)
             (int)mAuxBuffer);
 }
 
-status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
+status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
+    AudioBufferProvider::Buffer* buffer, int64_t pts)
 {
      audio_track_cblk_t* cblk = this->cblk();
      uint32_t framesReady;
@@ -3390,10 +3481,14 @@ getNextBuffer_exit:
      return NOT_ENOUGH_DATA;
 }
 
+uint32_t AudioFlinger::PlaybackThread::Track::framesReady() const{
+    return mCblk->framesReady();
+}
+
 bool AudioFlinger::PlaybackThread::Track::isReady() const {
     if (mFillingUpStatus != FS_FILLING || isStopped() || isPausing()) return true;
 
-    if (mCblk->framesReady() >= mCblk->frameCount ||
+    if (framesReady() >= mCblk->frameCount ||
             (mCblk->flags & CBLK_FORCEREADY_MSK)) {
         mFillingUpStatus = FS_FILLED;
         android_atomic_and(~CBLK_FORCEREADY_MSK, &mCblk->flags);
@@ -3562,6 +3657,393 @@ void AudioFlinger::PlaybackThread::Track::setAuxBuffer(int EffectId, int32_t *bu
     mAuxBuffer = buffer;
 }
 
+// timed audio tracks
+
+sp<AudioFlinger::PlaybackThread::TimedTrack>
+AudioFlinger::PlaybackThread::TimedTrack::create(
+            const wp<ThreadBase>& thread,
+            const sp<Client>& client,
+            int streamType,
+            uint32_t sampleRate,
+            uint32_t format,
+            uint32_t channelMask,
+            int frameCount,
+            const sp<IMemory>& sharedBuffer,
+            int sessionId) {
+    if (!client->reserveTimedTrack())
+        return NULL;
+
+    sp<TimedTrack> track = new TimedTrack(
+        thread, client, streamType, sampleRate, format, channelMask, frameCount,
+        sharedBuffer, sessionId);
+
+    if (track == NULL) {
+        client->releaseTimedTrack();
+        return NULL;
+    }
+
+    return track;
+}
+
+AudioFlinger::PlaybackThread::TimedTrack::TimedTrack(
+            const wp<ThreadBase>& thread,
+            const sp<Client>& client,
+            int streamType,
+            uint32_t sampleRate,
+            uint32_t format,
+            uint32_t channelMask,
+            int frameCount,
+            const sp<IMemory>& sharedBuffer,
+            int sessionId)
+    : Track(thread, client, streamType, sampleRate, format, channelMask,
+            frameCount, sharedBuffer, sessionId),
+      mTimedSilenceBuffer(NULL),
+      mTimedSilenceBufferSize(0),
+      mTimedAudioOutputOnTime(false),
+      mMediaTimeTransformValid(false)
+{
+    LocalClock lc;
+    mLocalTimeFreq = lc.getLocalFreq();
+
+    mLocalTimeToSampleTransform.a_zero = 0;
+    mLocalTimeToSampleTransform.b_zero = 0;
+    mLocalTimeToSampleTransform.a_to_b_numer = sampleRate;
+    mLocalTimeToSampleTransform.a_to_b_denom = mLocalTimeFreq;
+    LinearTransform::reduce(&mLocalTimeToSampleTransform.a_to_b_numer,
+                            &mLocalTimeToSampleTransform.a_to_b_denom);
+}
+
+AudioFlinger::PlaybackThread::TimedTrack::~TimedTrack() {
+    mClient->releaseTimedTrack();
+    delete [] mTimedSilenceBuffer;
+}
+
+status_t AudioFlinger::PlaybackThread::TimedTrack::allocateTimedBuffer(
+    size_t size, sp<IMemory>* buffer) {
+
+    Mutex::Autolock _l(mTimedBufferQueueLock);
+
+    trimTimedBufferQueue_l();
+
+    // lazily initialize the shared memory heap for timed buffers
+    if (mTimedMemoryDealer == NULL) {
+        const int kTimedBufferHeapSize = 512 << 10;
+
+        mTimedMemoryDealer = new MemoryDealer(kTimedBufferHeapSize,
+                                              "AudioFlingerTimed");
+        if (mTimedMemoryDealer == NULL)
+            return NO_MEMORY;
+    }
+
+    sp<IMemory> newBuffer = mTimedMemoryDealer->allocate(size);
+    if (newBuffer == NULL) {
+        newBuffer = mTimedMemoryDealer->allocate(size);
+        if (newBuffer == NULL)
+            return NO_MEMORY;
+    }
+
+    *buffer = newBuffer;
+    return NO_ERROR;
+}
+
+// caller must hold mTimedBufferQueueLock
+void AudioFlinger::PlaybackThread::TimedTrack::trimTimedBufferQueue_l() {
+    int64_t mediaTimeNow;
+    {
+        Mutex::Autolock mttLock(mMediaTimeTransformLock);
+        if (!mMediaTimeTransformValid)
+            return;
+
+        int64_t targetTimeNow;
+        status_t res = (mMediaTimeTransformTarget == TimedAudioTrack::COMMON_TIME)
+            ? CCHelper::getCommonTime(&targetTimeNow)
+            : CCHelper::getLocalTime(&targetTimeNow);
+
+        if (OK != res)
+            return;
+
+        if (!mMediaTimeTransform.doReverseTransform(targetTimeNow,
+                                                    &mediaTimeNow)) {
+            return;
+        }
+    }
+
+    size_t trimIndex;
+    for (trimIndex = 0; trimIndex < mTimedBufferQueue.size(); trimIndex++) {
+        if (mTimedBufferQueue[trimIndex].pts() > mediaTimeNow)
+            break;
+    }
+
+    if (trimIndex) {
+        mTimedBufferQueue.removeItemsAt(0, trimIndex);
+    }
+}
+
+status_t AudioFlinger::PlaybackThread::TimedTrack::queueTimedBuffer(
+    const sp<IMemory>& buffer, int64_t pts) {
+
+    {
+        Mutex::Autolock mttLock(mMediaTimeTransformLock);
+        if (!mMediaTimeTransformValid)
+            return INVALID_OPERATION;
+    }
+
+    Mutex::Autolock _l(mTimedBufferQueueLock);
+
+    mTimedBufferQueue.add(TimedBuffer(buffer, pts));
+
+    return NO_ERROR;
+}
+
+status_t AudioFlinger::PlaybackThread::TimedTrack::setMediaTimeTransform(
+    const LinearTransform& xform, TimedAudioTrack::TargetTimeline target) {
+
+    LOGV("%s az=%lld bz=%lld n=%d d=%u tgt=%d", __PRETTY_FUNCTION__,
+         xform.a_zero, xform.b_zero, xform.a_to_b_numer, xform.a_to_b_denom,
+         target);
+
+    if (!(target == TimedAudioTrack::LOCAL_TIME ||
+          target == TimedAudioTrack::COMMON_TIME)) {
+        return BAD_VALUE;
+    }
+
+    Mutex::Autolock lock(mMediaTimeTransformLock);
+    mMediaTimeTransform = xform;
+    mMediaTimeTransformTarget = target;
+    mMediaTimeTransformValid = true;
+
+    return NO_ERROR;
+}
+
+#define min(a, b) ((a) < (b) ? (a) : (b))
+
+// implementation of getNextBuffer for tracks whose buffers have timestamps
+status_t AudioFlinger::PlaybackThread::TimedTrack::getNextBuffer(
+    AudioBufferProvider::Buffer* buffer, int64_t pts)
+{
+    if (pts == AudioBufferProvider::kInvalidPTS) {
+        buffer->raw = 0;
+        buffer->frameCount = 0;
+        return INVALID_OPERATION;
+    }
+
+    // get ahold of the output stream that these samples will be written to
+    sp<ThreadBase> thread = mThread.promote();
+    if (thread == NULL) {
+        buffer->raw = 0;
+        buffer->frameCount = 0;
+        return INVALID_OPERATION;
+    }
+    PlaybackThread* playbackThread = static_cast<PlaybackThread*>(thread.get());
+
+    Mutex::Autolock _l(mTimedBufferQueueLock);
+
+    while (true) {
+
+        // if we have no timed buffers, then fail
+        if (mTimedBufferQueue.isEmpty()) {
+            buffer->raw = 0;
+            buffer->frameCount = 0;
+            return NOT_ENOUGH_DATA;
+        }
+
+        TimedBuffer& head = mTimedBufferQueue.editItemAt(0);
+
+        // calculate the PTS of the head of the timed buffer queue expressed in
+        // local time
+        int64_t headLocalPTS;
+        {
+            Mutex::Autolock mttLock(mMediaTimeTransformLock);
+
+            assert(mMediaTimeTransformValid);
+
+            if (mMediaTimeTransform.a_to_b_denom == 0) {
+                // the transform represents a pause, so yield silence
+                timedYieldSilence(buffer->frameCount, buffer);
+                return NO_ERROR;
+            }
+
+            int64_t transformedPTS;
+            if (!mMediaTimeTransform.doForwardTransform(head.pts(),
+                                                        &transformedPTS)) {
+                // the transform failed.  this shouldn't happen, but if it does
+                // then just drop this buffer
+                LOGW("timedGetNextBuffer transform failed");
+                buffer->raw = 0;
+                buffer->frameCount = 0;
+                mTimedBufferQueue.removeAt(0);
+                return NO_ERROR;
+            }
+
+            if (mMediaTimeTransformTarget == TimedAudioTrack::COMMON_TIME) {
+                if (OK != CCHelper::commonTimeToLocalTime(transformedPTS,
+                                                          &headLocalPTS)) {
+                    buffer->raw = 0;
+                    buffer->frameCount = 0;
+                    return INVALID_OPERATION;
+                }
+            } else {
+                headLocalPTS = transformedPTS;
+            }
+        }
+
+        // adjust the head buffer's PTS to reflect the portion of the head buffer
+        // that has already been consumed
+        int64_t effectivePTS = headLocalPTS +
+                ((head.position() / mCblk->frameSize) * mLocalTimeFreq / sampleRate());
+
+        // Calculate the delta in samples between the head of the input buffer
+        // queue and the start of the next output buffer that will be written.
+        // If the transformation fails because of over or underflow, it means
+        // that the sample's position in the output stream is so far out of
+        // whack that it should just be dropped.
+        int64_t sampleDelta;
+        if (llabs(effectivePTS - pts) >= (static_cast<int64_t>(1) << 31)) {
+            LOGV("*** head buffer is too far from PTS: dropped buffer");
+            mTimedBufferQueue.removeAt(0);
+            continue;
+        }
+        if (!mLocalTimeToSampleTransform.doForwardTransform(
+                (effectivePTS - pts) << 32, &sampleDelta)) {
+            LOGV("*** too late during sample rate transform: dropped buffer");
+            mTimedBufferQueue.removeAt(0);
+            continue;
+        }
+
+        LOGV("*** %s head.pts=%lld head.pos=%d pts=%lld sampleDelta=[%d.%08x]",
+             __PRETTY_FUNCTION__, head.pts(), head.position(), pts,
+             static_cast<int32_t>((sampleDelta >= 0 ? 0 : 1) + (sampleDelta >> 32)),
+             static_cast<uint32_t>(sampleDelta & 0xFFFFFFFF));
+
+        // if the delta between the ideal placement for the next input sample and
+        // the current output position is within this threshold, then we will
+        // concatenate the next input samples to the previous output
+        const int64_t kSampleContinuityThreshold =
+                (static_cast<int64_t>(sampleRate()) << 32) / 10;
+
+        // if this is the first buffer of audio that we're emitting from this track
+        // then it should be almost exactly on time.
+        const int64_t kSampleStartupThreshold = 1LL << 32;
+
+        if ((mTimedAudioOutputOnTime && llabs(sampleDelta) <= kSampleContinuityThreshold) ||
+            (!mTimedAudioOutputOnTime && llabs(sampleDelta) <= kSampleStartupThreshold)) {
+            // the next input is close enough to being on time, so concatenate it
+            // with the last output
+            timedYieldSamples(buffer);
+
+            LOGV("*** on time: head.pos=%d frameCount=%u", head.position(), buffer->frameCount);
+            return NO_ERROR;
+        } else if (sampleDelta > 0) {
+            // the gap between the current output position and the proper start of
+            // the next input sample is too big, so fill it with silence
+            uint32_t framesUntilNextInput = (sampleDelta + 0x80000000) >> 32;
+
+            timedYieldSilence(framesUntilNextInput, buffer);
+            LOGV("*** silence: frameCount=%u", buffer->frameCount);
+            return NO_ERROR;
+        } else {
+            // the next input sample is late
+            uint32_t lateFrames = static_cast<uint32_t>(-((sampleDelta + 0x80000000) >> 32));
+            size_t onTimeSamplePosition =
+                    head.position() + lateFrames * mCblk->frameSize;
+
+            if (onTimeSamplePosition > head.buffer()->size()) {
+                // all the remaining samples in the head are too late, so
+                // drop it and move on
+                LOGV("*** too late: dropped buffer");
+                mTimedBufferQueue.removeAt(0);
+                continue;
+            } else {
+                // skip over the late samples
+                head.setPosition(onTimeSamplePosition);
+
+                // yield the available samples
+                timedYieldSamples(buffer);
+
+                LOGV("*** late: head.pos=%d frameCount=%u", head.position(), buffer->frameCount);
+                return NO_ERROR;
+            }
+        }
+    }
+}
+
+// Yield samples from the timed buffer queue head up to the given output
+// buffer's capacity.
+//
+// Caller must hold mTimedBufferQueueLock
+void AudioFlinger::PlaybackThread::TimedTrack::timedYieldSamples(
+    AudioBufferProvider::Buffer* buffer) {
+
+    const TimedBuffer& head = mTimedBufferQueue[0];
+
+    buffer->raw = (static_cast<uint8_t*>(head.buffer()->pointer()) +
+                   head.position());
+
+    uint32_t framesLeftInHead = ((head.buffer()->size() - head.position()) /
+                                 mCblk->frameSize);
+    size_t framesRequested = buffer->frameCount;
+    buffer->frameCount = min(framesLeftInHead, framesRequested);
+
+    mTimedAudioOutputOnTime = true;
+}
+
+// Yield samples of silence up to the given output buffer's capacity
+//
+// Caller must hold mTimedBufferQueueLock
+void AudioFlinger::PlaybackThread::TimedTrack::timedYieldSilence(
+    uint32_t numFrames, AudioBufferProvider::Buffer* buffer) {
+
+    // lazily allocate a buffer filled with silence
+    if (mTimedSilenceBufferSize < numFrames * mCblk->frameSize) {
+        delete [] mTimedSilenceBuffer;
+        mTimedSilenceBufferSize = numFrames * mCblk->frameSize;
+        mTimedSilenceBuffer = new uint8_t[mTimedSilenceBufferSize];
+        memset(mTimedSilenceBuffer, 0, mTimedSilenceBufferSize);
+    }
+
+    buffer->raw = mTimedSilenceBuffer;
+    size_t framesRequested = buffer->frameCount;
+    buffer->frameCount = min(numFrames, framesRequested);
+
+    mTimedAudioOutputOnTime = false;
+}
+
+void AudioFlinger::PlaybackThread::TimedTrack::releaseBuffer(
+    AudioBufferProvider::Buffer* buffer) {
+
+    Mutex::Autolock _l(mTimedBufferQueueLock);
+
+    if (buffer->raw != mTimedSilenceBuffer) {
+        TimedBuffer& head = mTimedBufferQueue.editItemAt(0);
+        head.setPosition(head.position() + buffer->frameCount * mCblk->frameSize);
+        if (static_cast<size_t>(head.position()) >= head.buffer()->size()) {
+            mTimedBufferQueue.removeAt(0);
+        }
+    }
+
+    buffer->raw = 0;
+    buffer->frameCount = 0;
+}
+
+uint32_t AudioFlinger::PlaybackThread::TimedTrack::framesReady() const {
+    Mutex::Autolock _l(mTimedBufferQueueLock);
+
+    uint32_t frames = 0;
+    for (size_t i = 0; i < mTimedBufferQueue.size(); i++) {
+        const TimedBuffer& tb = mTimedBufferQueue[i];
+        frames += (tb.buffer()->size() - tb.position())  / mCblk->frameSize;
+    }
+
+    return frames;
+}
+
+AudioFlinger::PlaybackThread::TimedTrack::TimedBuffer::TimedBuffer()
+        : mPTS(0), mPosition(0) {}
+
+AudioFlinger::PlaybackThread::TimedTrack::TimedBuffer::TimedBuffer(
+    const sp<IMemory>& buffer, int64_t pts)
+        : mBuffer(buffer), mPTS(pts), mPosition(0) {}
+
 // ----------------------------------------------------------------------------
 
 // RecordTrack constructor must be called with AudioFlinger::mLock held
@@ -3598,7 +4080,7 @@ AudioFlinger::RecordThread::RecordTrack::~RecordTrack()
     }
 }
 
-status_t AudioFlinger::RecordThread::RecordTrack::getNextBuffer(AudioBufferProvider::Buffer* buffer)
+status_t AudioFlinger::RecordThread::RecordTrack::getNextBuffer(AudioBufferProvider::Buffer* buffer, int64_t pts)
 {
     audio_track_cblk_t* cblk = this->cblk();
     uint32_t framesAvail;
@@ -3920,7 +4402,8 @@ AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
     :   RefBase(),
         mAudioFlinger(audioFlinger),
         mMemoryDealer(new MemoryDealer(1024*1024, "AudioFlinger::Client")),
-        mPid(pid)
+        mPid(pid),
+        mTimedTrackCount(0)
 {
     // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
 }
@@ -3936,6 +4419,31 @@ const sp<MemoryDealer>& AudioFlinger::Client::heap() const
     return mMemoryDealer;
 }
 
+// Reserve one of the limited slots for a timed audio track associated
+// with this client
+bool AudioFlinger::Client::reserveTimedTrack()
+{
+    const int kMaxTimedTracksPerClient = 4;
+
+    Mutex::Autolock _l(mTimedTrackLock);
+
+    if (mTimedTrackCount >= kMaxTimedTracksPerClient) {
+        LOGW("can not create timed track - pid %d has exceeded the limit",
+             mPid);
+        return false;
+    }
+
+    mTimedTrackCount++;
+    return true;
+}
+
+// Release a slot for a timed audio track
+void AudioFlinger::Client::releaseTimedTrack()
+{
+    Mutex::Autolock _l(mTimedTrackLock);
+    mTimedTrackCount--;
+}
+
 // ----------------------------------------------------------------------------
 
 AudioFlinger::NotificationClient::NotificationClient(const sp<AudioFlinger>& audioFlinger,
@@ -4007,6 +4515,38 @@ status_t AudioFlinger::TrackHandle::attachAuxEffect(int EffectId)
     return mTrack->attachAuxEffect(EffectId);
 }
 
+status_t AudioFlinger::TrackHandle::allocateTimedBuffer(size_t size,
+                                                         sp<IMemory>* buffer) {
+    if (!mTrack->isTimedTrack())
+        return INVALID_OPERATION;
+
+    PlaybackThread::TimedTrack* tt =
+            reinterpret_cast<PlaybackThread::TimedTrack*>(mTrack.get());
+    return tt->allocateTimedBuffer(size, buffer);
+}
+
+status_t AudioFlinger::TrackHandle::queueTimedBuffer(const sp<IMemory>& buffer,
+                                                     int64_t pts) {
+    if (!mTrack->isTimedTrack())
+        return INVALID_OPERATION;
+
+    PlaybackThread::TimedTrack* tt =
+            reinterpret_cast<PlaybackThread::TimedTrack*>(mTrack.get());
+    return tt->queueTimedBuffer(buffer, pts);
+}
+
+status_t AudioFlinger::TrackHandle::setMediaTimeTransform(
+    const LinearTransform& xform, int target) {
+
+    if (!mTrack->isTimedTrack())
+        return INVALID_OPERATION;
+
+    PlaybackThread::TimedTrack* tt =
+            reinterpret_cast<PlaybackThread::TimedTrack*>(mTrack.get());
+    return tt->setMediaTimeTransform(
+        xform, static_cast<TimedAudioTrack::TargetTimeline>(target));
+}
+
 status_t AudioFlinger::TrackHandle::onTransact(
     uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
 {
@@ -4244,7 +4784,8 @@ bool AudioFlinger::RecordThread::threadLoop()
             }
 
             buffer.frameCount = mFrameCount;
-            if (LIKELY(mActiveTrack->getNextBuffer(&buffer) == NO_ERROR)) {
+            if (LIKELY(mActiveTrack->getNextBuffer(
+                    &buffer, AudioBufferProvider::kInvalidPTS) == NO_ERROR)) {
                 size_t framesOut = buffer.frameCount;
                 if (mResampler == 0) {
                     // no resampling
@@ -4523,7 +5064,7 @@ status_t AudioFlinger::RecordThread::dump(int fd, const Vector<String16>& args)
     return NO_ERROR;
 }
 
-status_t AudioFlinger::RecordThread::getNextBuffer(AudioBufferProvider::Buffer* buffer)
+status_t AudioFlinger::RecordThread::getNextBuffer(AudioBufferProvider::Buffer* buffer, int64_t pts)
 {
     size_t framesReq = buffer->frameCount;
     size_t framesReady = mFrameCount - mRsmpInIndex;
index 1141f6c..3aec877 100644 (file)
@@ -84,6 +84,7 @@ public:
                                 uint32_t flags,
                                 const sp<IMemory>& sharedBuffer,
                                 int output,
+                                bool isTimed,
                                 int *sessionId,
                                 status_t *status);
 
@@ -97,6 +98,7 @@ public:
     virtual     status_t    setMasterMute(bool muted);
 
     virtual     float       masterVolume() const;
+    virtual     float       masterVolumeSW() const;
     virtual     bool        masterMute() const;
 
     virtual     status_t    setStreamVolume(int stream, float value, int output);
@@ -188,6 +190,7 @@ public:
         AUDIO_HW_SET_MIC_MUTE,
         AUDIO_SET_VOICE_VOLUME,
         AUDIO_SET_PARAMETER,
+        AUDIO_HW_GET_MASTER_VOLUME,
     };
 
     // record interface
@@ -235,12 +238,18 @@ private:
         pid_t               pid() const { return mPid; }
         sp<AudioFlinger>    audioFlinger() { return mAudioFlinger; }
 
+        bool reserveTimedTrack();
+        void releaseTimedTrack();
+
     private:
                             Client(const Client&);
                             Client& operator = (const Client&);
         sp<AudioFlinger>    mAudioFlinger;
         sp<MemoryDealer>    mMemoryDealer;
         pid_t               mPid;
+
+        Mutex               mTimedTrackLock;
+        int                 mTimedTrackCount;
     };
 
     // --- Notification Client ---
@@ -346,7 +355,9 @@ private:
                                 TrackBase(const TrackBase&);
                                 TrackBase& operator = (const TrackBase&);
 
-            virtual status_t getNextBuffer(AudioBufferProvider::Buffer* buffer) = 0;
+            virtual status_t getNextBuffer(
+                AudioBufferProvider::Buffer* buffer,
+                int64_t pts) = 0;
             virtual void releaseBuffer(AudioBufferProvider::Buffer* buffer);
 
             uint32_t format() const {
@@ -609,7 +620,6 @@ private:
                     int16_t     *mainBuffer() { return mMainBuffer; }
                     int         auxEffectId() { return mAuxEffectId; }
 
-
         protected:
             friend class ThreadBase;
             friend class TrackHandle;
@@ -620,7 +630,11 @@ private:
                                 Track(const Track&);
                                 Track& operator = (const Track&);
 
-            virtual status_t getNextBuffer(AudioBufferProvider::Buffer* buffer);
+            virtual status_t getNextBuffer(
+                AudioBufferProvider::Buffer* buffer,
+                int64_t pts);
+            virtual uint32_t framesReady() const;
+
             bool isMuted() { return mMute; }
             bool isPausing() const {
                 return mState == PAUSING;
@@ -636,6 +650,8 @@ private:
                 return (mStreamType == AUDIO_STREAM_CNT);
             }
 
+            virtual bool isTimedTrack() const { return false; }
+
             // we don't really need a lock for these
             float               mVolume[2];
             volatile bool       mMute;
@@ -653,6 +669,78 @@ private:
             bool                mHasVolumeController;
         };  // end of Track
 
+        class TimedTrack : public Track {
+          public:
+            static sp<TimedTrack> create(const wp<ThreadBase>& thread,
+                                         const sp<Client>& client,
+                                         int streamType,
+                                         uint32_t sampleRate,
+                                         uint32_t format,
+                                         uint32_t channelMask,
+                                         int frameCount,
+                                         const sp<IMemory>& sharedBuffer,
+                                         int sessionId);
+            ~TimedTrack();
+
+            class TimedBuffer {
+              public:
+                TimedBuffer();
+                TimedBuffer(const sp<IMemory>& buffer, int64_t pts);
+                const sp<IMemory>& buffer() const { return mBuffer; }
+                int64_t pts() const { return mPTS; }
+                int position() const { return mPosition; }
+                void setPosition(int pos) { mPosition = pos; }
+              private:
+                sp<IMemory> mBuffer;
+                int64_t mPTS;
+                int mPosition;
+            };
+
+            virtual bool isTimedTrack() const { return true; }
+
+            virtual uint32_t framesReady() const;
+
+            virtual status_t getNextBuffer(AudioBufferProvider::Buffer* buffer,
+                                           int64_t pts);
+            virtual void releaseBuffer(AudioBufferProvider::Buffer* buffer);
+            void timedYieldSamples(AudioBufferProvider::Buffer* buffer);
+            void timedYieldSilence(uint32_t numFrames,
+                                   AudioBufferProvider::Buffer* buffer);
+
+            status_t    allocateTimedBuffer(size_t size,
+                                            sp<IMemory>* buffer);
+            status_t    queueTimedBuffer(const sp<IMemory>& buffer,
+                                         int64_t pts);
+            status_t    setMediaTimeTransform(const LinearTransform& xform,
+                                              TimedAudioTrack::TargetTimeline target);
+            void        trimTimedBufferQueue_l();
+
+          private:
+            TimedTrack(const wp<ThreadBase>& thread,
+                       const sp<Client>& client,
+                       int streamType,
+                       uint32_t sampleRate,
+                       uint32_t format,
+                       uint32_t channelMask,
+                       int frameCount,
+                       const sp<IMemory>& sharedBuffer,
+                       int sessionId);
+
+            uint64_t            mLocalTimeFreq;
+            LinearTransform     mLocalTimeToSampleTransform;
+            sp<MemoryDealer>    mTimedMemoryDealer;
+            Vector<TimedBuffer> mTimedBufferQueue;
+            uint8_t*            mTimedSilenceBuffer;
+            uint32_t            mTimedSilenceBufferSize;
+            mutable Mutex       mTimedBufferQueueLock;
+            bool                mTimedAudioOutputOnTime;
+
+            Mutex               mMediaTimeTransformLock;
+            LinearTransform     mMediaTimeTransform;
+            bool                mMediaTimeTransformValid;
+            TimedAudioTrack::TargetTimeline mMediaTimeTransformTarget;
+        };
+
 
         // playback track
         class OutputTrack : public Track {
@@ -726,6 +814,7 @@ private:
                                     int frameCount,
                                     const sp<IMemory>& sharedBuffer,
                                     int sessionId,
+                                    bool isTimed,
                                     status_t *status);
 
                     AudioStreamOut* getOutput();
@@ -910,6 +999,12 @@ private:
         virtual void        setVolume(float left, float right);
         virtual sp<IMemory> getCblk() const;
         virtual status_t    attachAuxEffect(int effectId);
+        virtual status_t    allocateTimedBuffer(size_t size,
+                                                sp<IMemory>* buffer);
+        virtual status_t    queueTimedBuffer(const sp<IMemory>& buffer,
+                                             int64_t pts);
+        virtual status_t    setMediaTimeTransform(const LinearTransform& xform,
+                                                  int target);
         virtual status_t onTransact(
             uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);
     private:
@@ -957,7 +1052,9 @@ private:
                                 RecordTrack(const RecordTrack&);
                                 RecordTrack& operator = (const RecordTrack&);
 
-            virtual status_t getNextBuffer(AudioBufferProvider::Buffer* buffer);
+            virtual status_t getNextBuffer(
+                AudioBufferProvider::Buffer* buffer,
+                int64_t pts);
 
             bool                mOverflow;
         };
@@ -993,7 +1090,8 @@ private:
                 AudioStreamIn* clearInput();
                 virtual audio_stream_t* stream();
 
-        virtual status_t    getNextBuffer(AudioBufferProvider::Buffer* buffer);
+        virtual status_t    getNextBuffer(AudioBufferProvider::Buffer* buffer,
+                                          int64_t pts);
         virtual void        releaseBuffer(AudioBufferProvider::Buffer* buffer);
         virtual bool        checkForNewParameters_l();
         virtual String8     getParameters(const String8& keys);
@@ -1369,6 +1467,12 @@ private:
     friend class RecordThread;
     friend class PlaybackThread;
 
+    enum master_volume_support {
+        MVS_NONE,
+        MVS_SETONLY,
+        MVS_FULL,
+    };
+
     mutable     Mutex                               mLock;
 
                 DefaultKeyedVector< pid_t, wp<Client> >     mClients;
@@ -1382,6 +1486,8 @@ private:
                 DefaultKeyedVector< int, sp<PlaybackThread> >  mPlaybackThreads;
                 PlaybackThread::stream_type_t       mStreamTypes[AUDIO_STREAM_CNT];
                 float                               mMasterVolume;
+                float                               mMasterVolumeSW;
+                master_volume_support               mMasterVolumeSupportLvl;
                 bool                                mMasterMute;
 
                 DefaultKeyedVector< int, sp<RecordThread> >    mRecordThreads;
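
Taken together, the TimedTrack additions above give a track a queue of client-supplied buffers, each tagged with a media-time PTS; at mix time the track is asked for data at a specific output PTS, trims buffers that are already stale, and pads with silence until the next buffer comes due (trimTimedBufferQueue_l, timedYieldSamples, timedYieldSilence). The sketch below is a minimal standalone model of that idea only, not the AudioFlinger implementation; the SimpleTimedQueue name, the mono int16_t format, and the omission of the media-time LinearTransform are simplifying assumptions.

    #include <cstdint>
    #include <deque>
    #include <vector>

    struct TimedChunk {
        std::vector<int16_t> samples;  // PCM payload (mono for simplicity)
        int64_t pts;                   // presentation time of samples[0], in local-clock ticks
        size_t  pos;                   // frames already consumed from this chunk
    };

    class SimpleTimedQueue {
      public:
        SimpleTimedQueue(int64_t ticksPerSecond, uint32_t sampleRate)
            : mTicksPerSec(ticksPerSecond), mSampleRate(sampleRate) {}

        // Queue a buffer that should start playing at 'pts'.
        void queue(std::vector<int16_t> samples, int64_t pts) {
            mQueue.push_back(TimedChunk{std::move(samples), pts, 0});
        }

        // Fill 'out' with 'frames' frames that should play starting at 'outputPts',
        // trimming stale chunks and inserting silence where no data is due yet.
        void getNext(int16_t* out, size_t frames, int64_t outputPts) {
            trimBefore(outputPts);
            for (size_t i = 0; i < frames; ++i) {
                if (mQueue.empty()) { out[i] = 0; continue; }        // underrun: silence
                TimedChunk& c = mQueue.front();
                int64_t duePts = c.pts + framesToTicks(c.pos);
                if (duePts > outputPts + framesToTicks(i)) {
                    out[i] = 0;                                      // chunk not due yet: silence
                    continue;
                }
                out[i] = c.samples[c.pos++];
                if (c.pos == c.samples.size()) mQueue.pop_front();
            }
        }

      private:
        int64_t framesToTicks(size_t frames) const {
            return static_cast<int64_t>(frames) * mTicksPerSec / mSampleRate;
        }
        // Drop whole chunks whose end PTS is already in the past.
        void trimBefore(int64_t outputPts) {
            while (!mQueue.empty()) {
                const TimedChunk& c = mQueue.front();
                if (c.pts + framesToTicks(c.samples.size()) <= outputPts) mQueue.pop_front();
                else break;
            }
        }
        std::deque<TimedChunk> mQueue;
        int64_t  mTicksPerSec;
        uint32_t mSampleRate;
    };

In this analogue, queue(samples, pts) plays the role of queueTimedBuffer(), and getNext(out, frameCount, outputPts) corresponds to the mixer pulling data for a given output PTS.
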
index 6e9319d..760981b 100644
@@ -18,6 +18,7 @@
 #define LOG_TAG "AudioMixer"
 //#define LOG_NDEBUG 0
 
+#include <assert.h>
 #include <stdint.h>
 #include <string.h>
 #include <stdlib.h>
@@ -30,6 +31,9 @@
 
 #include <system/audio.h>
 
+#include <aah_timesrv/local_clock.h>
+#include <aah_timesrv/cc_helper.h>
+
 #include "AudioMixer.h"
 
 namespace android {
@@ -47,6 +51,9 @@ static inline int16_t clamp16(int32_t sample)
 AudioMixer::AudioMixer(size_t frameCount, uint32_t sampleRate)
     :   mActiveTrack(0), mTrackNames(0), mSampleRate(sampleRate)
 {
+    LocalClock lc;
+    mLocalTimeFreq = lc.getLocalFreq();
+
     mState.enabledTracks= 0;
     mState.needsChanged = 0;
     mState.frameCount   = frameCount;
@@ -74,6 +81,7 @@ AudioMixer::AudioMixer(size_t frameCount, uint32_t sampleRate)
         t->in = 0;
         t->mainBuffer = NULL;
         t->auxBuffer = NULL;
+        t->localTimeFreq = mLocalTimeFreq;
         t++;
     }
 }
@@ -293,6 +301,7 @@ bool AudioMixer::track_t::setResampler(uint32_t value, uint32_t devSampleRate)
             if (resampler == 0) {
                 resampler = AudioResampler::create(
                         format, channelCount, devSampleRate);
+                resampler->setLocalTimeFreq(localTimeFreq);
             }
             return true;
         }
@@ -340,13 +349,13 @@ status_t AudioMixer::setBufferProvider(AudioBufferProvider* buffer)
 
 
 
-void AudioMixer::process()
+void AudioMixer::process(int64_t pts)
 {
-    mState.hook(&mState);
+    mState.hook(&mState, pts);
 }
 
 
-void AudioMixer::process__validate(state_t* state)
+void AudioMixer::process__validate(state_t* state, int64_t pts)
 {
     LOGW_IF(!state->needsChanged,
         "in process__validate() but nothing's invalid");
@@ -450,7 +459,7 @@ void AudioMixer::process__validate(state_t* state)
         countActiveTracks, state->enabledTracks,
         all16BitsStereoNoResample, resampling, volumeRamp);
 
-   state->hook(state);
+   state->hook(state, pts);
 
    // Now that the volume ramp has been done, set optimal state and
    // track hooks for subsequent mixer process
@@ -859,7 +868,7 @@ void AudioMixer::ditherAndClamp(int32_t* out, int32_t const *sums, size_t c)
 }
 
 // no-op case
-void AudioMixer::process__nop(state_t* state)
+void AudioMixer::process__nop(state_t* state, int64_t pts)
 {
     uint32_t e0 = state->enabledTracks;
     size_t bufSize = state->frameCount * sizeof(int16_t) * MAX_NUM_CHANNELS;
@@ -889,7 +898,9 @@ void AudioMixer::process__nop(state_t* state)
             size_t outFrames = state->frameCount;
             while (outFrames) {
                 t1.buffer.frameCount = outFrames;
-                t1.bufferProvider->getNextBuffer(&t1.buffer);
+                int64_t outputPTS = calculateOutputPTS(
+                    t1, pts, state->frameCount - outFrames);
+                t1.bufferProvider->getNextBuffer(&t1.buffer, outputPTS);
                 if (!t1.buffer.raw) break;
                 outFrames -= t1.buffer.frameCount;
                 t1.bufferProvider->releaseBuffer(&t1.buffer);
@@ -899,7 +910,7 @@ void AudioMixer::process__nop(state_t* state)
 }
 
 // generic code without resampling
-void AudioMixer::process__genericNoResampling(state_t* state)
+void AudioMixer::process__genericNoResampling(state_t* state, int64_t pts)
 {
     int32_t outTemp[BLOCKSIZE * MAX_NUM_CHANNELS] __attribute__((aligned(32)));
 
@@ -911,7 +922,7 @@ void AudioMixer::process__genericNoResampling(state_t* state)
         e0 &= ~(1<<i);
         track_t& t = state->tracks[i];
         t.buffer.frameCount = state->frameCount;
-        t.bufferProvider->getNextBuffer(&t.buffer);
+        t.bufferProvider->getNextBuffer(&t.buffer, pts);
         t.frameCount = t.buffer.frameCount;
         t.in = t.buffer.raw;
         // t.in == NULL can happen if the track was flushed just after having
@@ -965,7 +976,9 @@ void AudioMixer::process__genericNoResampling(state_t* state)
                     if (t.frameCount == 0 && outFrames) {
                         t.bufferProvider->releaseBuffer(&t.buffer);
                         t.buffer.frameCount = (state->frameCount - numFrames) - (BLOCKSIZE - outFrames);
-                        t.bufferProvider->getNextBuffer(&t.buffer);
+                        int64_t outputPTS = calculateOutputPTS(
+                            t, pts, numFrames + (BLOCKSIZE - outFrames));
+                        t.bufferProvider->getNextBuffer(&t.buffer, outputPTS);
                         t.in = t.buffer.raw;
                         if (t.in == NULL) {
                             enabledTracks &= ~(1<<i);
@@ -994,7 +1007,7 @@ void AudioMixer::process__genericNoResampling(state_t* state)
 
 
   // generic code with resampling
-void AudioMixer::process__genericResampling(state_t* state)
+void AudioMixer::process__genericResampling(state_t* state, int64_t pts)
 {
     int32_t* const outTemp = state->outputTemp;
     const size_t size = sizeof(int32_t) * MAX_NUM_CHANNELS * state->frameCount;
@@ -1033,6 +1046,7 @@ void AudioMixer::process__genericResampling(state_t* state)
             // acquire/release the buffers because it's done by
             // the resampler.
             if ((t.needs & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
+                t.resampler->setPTS(pts);
                 (t.hook)(&t, outTemp, numFrames, state->resampleTemp, aux);
             } else {
 
@@ -1040,7 +1054,8 @@ void AudioMixer::process__genericResampling(state_t* state)
 
                 while (outFrames < numFrames) {
                     t.buffer.frameCount = numFrames - outFrames;
-                    t.bufferProvider->getNextBuffer(&t.buffer);
+                    int64_t outputPTS = calculateOutputPTS(t, pts, outFrames);
+                    t.bufferProvider->getNextBuffer(&t.buffer, outputPTS);
                     t.in = t.buffer.raw;
                     // t.in == NULL can happen if the track was flushed just after having
                     // been enabled for mixing.
@@ -1060,7 +1075,8 @@ void AudioMixer::process__genericResampling(state_t* state)
 }
 
 // one track, 16 bits stereo without resampling is the most common case
-void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state)
+void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state,
+                                                           int64_t pts)
 {
     const int i = 31 - __builtin_clz(state->enabledTracks);
     const track_t& t = state->tracks[i];
@@ -1075,7 +1091,8 @@ void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state)
     const uint32_t vrl = t.volumeRL;
     while (numFrames) {
         b.frameCount = numFrames;
-        t.bufferProvider->getNextBuffer(&b);
+        int64_t outputPTS = calculateOutputPTS(t, pts, out - t.mainBuffer);
+        t.bufferProvider->getNextBuffer(&b, outputPTS);
         int16_t const *in = b.i16;
 
         // in == NULL can happen if the track was flushed just after having
@@ -1118,7 +1135,8 @@ void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state)
 // 2 tracks is also a common case
 // NEVER used in current implementation of process__validate()
 // only use if the 2 tracks have the same output buffer
-void AudioMixer::process__TwoTracks16BitsStereoNoResampling(state_t* state)
+void AudioMixer::process__TwoTracks16BitsStereoNoResampling(state_t* state,
+                                                            int64_t pts)
 {
     int i;
     uint32_t en = state->enabledTracks;
@@ -1152,7 +1170,9 @@ void AudioMixer::process__TwoTracks16BitsStereoNoResampling(state_t* state)
 
         if (frameCount0 == 0) {
             b0.frameCount = numFrames;
-            t0.bufferProvider->getNextBuffer(&b0);
+            int64_t outputPTS = calculateOutputPTS(t0, pts,
+                                                   out - t0.mainBuffer);
+            t0.bufferProvider->getNextBuffer(&b0, outputPTS);
             if (b0.i16 == NULL) {
                 if (buff == NULL) {
                     buff = new int16_t[MAX_NUM_CHANNELS * state->frameCount];
@@ -1166,7 +1186,9 @@ void AudioMixer::process__TwoTracks16BitsStereoNoResampling(state_t* state)
         }
         if (frameCount1 == 0) {
             b1.frameCount = numFrames;
-            t1.bufferProvider->getNextBuffer(&b1);
+            int64_t outputPTS = calculateOutputPTS(t1, pts,
+                                                   out - t0.mainBuffer);
+            t1.bufferProvider->getNextBuffer(&b1, outputPTS);
             if (b1.i16 == NULL) {
                 if (buff == NULL) {
                     buff = new int16_t[MAX_NUM_CHANNELS * state->frameCount];
@@ -1213,6 +1235,11 @@ void AudioMixer::process__TwoTracks16BitsStereoNoResampling(state_t* state)
     }
 }
 
+int64_t AudioMixer::calculateOutputPTS(const track_t& t, int64_t basePTS,
+                                       int outputFrameIndex)
+{
+    return basePTS + ((outputFrameIndex * t.localTimeFreq) / t.sampleRate);
+}
+
 // ----------------------------------------------------------------------------
 }; // namespace android
-
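
For illustration, the new AudioMixer::calculateOutputPTS() converts a frame offset within the current mix buffer into local-clock ticks and adds it to the buffer's base PTS: pts = basePTS + outputFrameIndex * localTimeFreq / sampleRate. A minimal standalone check of that arithmetic follows; the 1 MHz tick rate is an assumption for the example only (the real rate comes from LocalClock::getLocalFreq()).

    #include <cstdint>
    #include <cstdio>

    // Same arithmetic as AudioMixer::calculateOutputPTS(), restated standalone.
    static int64_t outputPTS(int64_t basePTS, int frameIndex,
                             uint64_t localTimeFreq, uint32_t sampleRate) {
        return basePTS + (static_cast<int64_t>(frameIndex) *
                          static_cast<int64_t>(localTimeFreq)) / sampleRate;
    }

    int main() {
        const uint64_t kTicksPerSec = 1000000;  // assumed 1 MHz local clock, illustration only
        const uint32_t kSampleRate  = 48000;
        // Frame 480 of a buffer whose base PTS is 0 plays 10 ms (10000 ticks) later.
        std::printf("%lld\n", (long long)outputPTS(0, 480, kTicksPerSec, kSampleRate));
        return 0;
    }
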
index 75c9170..d32a366 100644
@@ -85,7 +85,7 @@ public:
     status_t    setParameter(int target, int name, void *value);
 
     status_t    setBufferProvider(AudioBufferProvider* bufferProvider);
-    void        process();
+    void        process(int64_t pts);
 
     uint32_t    trackNames() const { return mTrackNames; }
 
@@ -125,7 +125,7 @@ private:
     struct state_t;
     struct track_t;
 
-    typedef void (*mix_t)(state_t* state);
+    typedef void (*mix_t)(state_t* state, int64_t pts);
     typedef void (*hook_t)(track_t* t, int32_t* output, size_t numOutFrames, int32_t* temp, int32_t* aux);
     static const int BLOCKSIZE = 16; // 4 cache lines
 
@@ -163,6 +163,8 @@ private:
         int32_t*           mainBuffer;
         int32_t*           auxBuffer;
 
+        uint64_t    localTimeFreq;
+
         bool        setResampler(uint32_t sampleRate, uint32_t devSampleRate);
         bool        doesResample() const;
         void        resetResampler();
@@ -185,6 +187,8 @@ private:
     uint32_t        mTrackNames;
     const uint32_t  mSampleRate;
 
+    int64_t         mLocalTimeFreq;
+
     state_t         mState __attribute__((aligned(32)));
 
     void invalidateState(uint32_t mask);
@@ -196,12 +200,17 @@ private:
     static void volumeRampStereo(track_t* t, int32_t* out, size_t frameCount, int32_t* temp, int32_t* aux);
     static void volumeStereo(track_t* t, int32_t* out, size_t frameCount, int32_t* temp, int32_t* aux);
 
-    static void process__validate(state_t* state);
-    static void process__nop(state_t* state);
-    static void process__genericNoResampling(state_t* state);
-    static void process__genericResampling(state_t* state);
-    static void process__OneTrack16BitsStereoNoResampling(state_t* state);
-    static void process__TwoTracks16BitsStereoNoResampling(state_t* state);
+    static void process__validate(state_t* state, int64_t pts);
+    static void process__nop(state_t* state, int64_t pts);
+    static void process__genericNoResampling(state_t* state, int64_t pts);
+    static void process__genericResampling(state_t* state, int64_t pts);
+    static void process__OneTrack16BitsStereoNoResampling(state_t* state,
+                                                          int64_t pts);
+    static void process__TwoTracks16BitsStereoNoResampling(state_t* state,
+                                                           int64_t pts);
+
+    static int64_t calculateOutputPTS(const track_t& t, int64_t basePTS,
+                                      int outputFrameIndex);
 };
 
 // ----------------------------------------------------------------------------
index dca795c..2dc49df 100644
@@ -118,7 +118,8 @@ AudioResampler::AudioResampler(int bitDepth, int inChannelCount,
         int32_t sampleRate) :
     mBitDepth(bitDepth), mChannelCount(inChannelCount),
             mSampleRate(sampleRate), mInSampleRate(sampleRate), mInputIndex(0),
-            mPhaseFraction(0) {
+            mPhaseFraction(0), mLocalTimeFreq(0),
+            mPTS(AudioBufferProvider::kInvalidPTS) {
     // sanity check on format
     if ((bitDepth != 16) ||(inChannelCount < 1) || (inChannelCount > 2)) {
         LOGE("Unsupported sample format, %d bits, %d channels", bitDepth,
@@ -152,6 +153,23 @@ void AudioResampler::setVolume(int16_t left, int16_t right) {
     mVolume[1] = right;
 }
 
+void AudioResampler::setLocalTimeFreq(uint64_t freq) {
+    mLocalTimeFreq = freq;
+}
+
+void AudioResampler::setPTS(int64_t pts) {
+    mPTS = pts;
+}
+
+int64_t AudioResampler::calculateOutputPTS(int outputFrameIndex) {
+
+    if (mPTS == AudioBufferProvider::kInvalidPTS) {
+        return AudioBufferProvider::kInvalidPTS;
+    } else {
+        return mPTS + ((outputFrameIndex * mLocalTimeFreq) / mSampleRate);
+    }
+}
+
 void AudioResampler::reset() {
     mInputIndex = 0;
     mPhaseFraction = 0;
@@ -198,7 +216,8 @@ void AudioResamplerOrder1::resampleStereo16(int32_t* out, size_t outFrameCount,
         // buffer is empty, fetch a new one
         while (mBuffer.frameCount == 0) {
             mBuffer.frameCount = inFrameCount;
-            provider->getNextBuffer(&mBuffer);
+            provider->getNextBuffer(&mBuffer,
+                                    calculateOutputPTS(outputIndex / 2));
             if (mBuffer.raw == NULL) {
                 goto resampleStereo16_exit;
             }
@@ -292,7 +311,8 @@ void AudioResamplerOrder1::resampleMono16(int32_t* out, size_t outFrameCount,
         // buffer is empty, fetch a new one
         while (mBuffer.frameCount == 0) {
             mBuffer.frameCount = inFrameCount;
-            provider->getNextBuffer(&mBuffer);
+            provider->getNextBuffer(&mBuffer,
+                                    calculateOutputPTS(outputIndex / 2));
             if (mBuffer.raw == NULL) {
                 mInputIndex = inputIndex;
                 mPhaseFraction = phaseFraction;
@@ -602,4 +622,3 @@ void AudioResamplerOrder1::AsmStereo16Loop(int16_t *in, int32_t* maxOutPt, int32
 // ----------------------------------------------------------------------------
 }
 ; // namespace android
-
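
The resampler changes mirror the mixer: setPTS() records the base PTS of the next output buffer and calculateOutputPTS() interpolates from it, passing kInvalidPTS through untouched when no timestamp was supplied. In the stereo resample loops above, outputIndex counts interleaved int16_t samples (two per frame), which is why the call sites pass outputIndex / 2. A standalone restatement, with the sentinel value assumed for illustration:

    #include <cstdint>

    // Stand-in for AudioBufferProvider::kInvalidPTS; the concrete value is assumed here.
    static const int64_t kInvalidPTS = INT64_MIN;

    static int64_t resamplerOutputPTS(int64_t basePTS, int outputFrameIndex,
                                      uint64_t localTimeFreq, uint32_t outSampleRate) {
        if (basePTS == kInvalidPTS)
            return kInvalidPTS;   // no timing information: propagate the sentinel
        return basePTS + (static_cast<int64_t>(outputFrameIndex) *
                          static_cast<int64_t>(localTimeFreq)) / outSampleRate;
    }

    // Call-site pattern from the stereo loops: outputIndex / 2 converts the
    // interleaved sample index into an output frame index before interpolation.
    // int64_t pts = resamplerOutputPTS(basePTS, outputIndex / 2, localFreq, outRate);
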
index 9f06c1c..da736ab 100644
@@ -49,6 +49,10 @@ public:
     virtual void init() = 0;
     virtual void setSampleRate(int32_t inSampleRate);
     virtual void setVolume(int16_t left, int16_t right);
+    virtual void setLocalTimeFreq(uint64_t freq);
+
+    // set the PTS of the next buffer output by the resampler
+    virtual void setPTS(int64_t pts);
 
     virtual void resample(int32_t* out, size_t outFrameCount,
             AudioBufferProvider* provider) = 0;
@@ -72,6 +76,8 @@ protected:
     AudioResampler(const AudioResampler&);
     AudioResampler& operator=(const AudioResampler&);
 
+    int64_t calculateOutputPTS(int outputFrameIndex);
+
     int32_t mBitDepth;
     int32_t mChannelCount;
     int32_t mSampleRate;
@@ -86,6 +92,8 @@ protected:
     size_t mInputIndex;
     int32_t mPhaseIncrement;
     uint32_t mPhaseFraction;
+    uint64_t mLocalTimeFreq;
+    int64_t mPTS;
 };
 
 // ----------------------------------------------------------------------------
index 4d721f6..e864b30 100644
@@ -65,7 +65,7 @@ void AudioResamplerCubic::resampleStereo16(int32_t* out, size_t outFrameCount,
     // fetch first buffer
     if (mBuffer.frameCount == 0) {
         mBuffer.frameCount = inFrameCount;
-        provider->getNextBuffer(&mBuffer);
+        provider->getNextBuffer(&mBuffer, mPTS);
         if (mBuffer.raw == NULL)
             return;
         // LOGW("New buffer: offset=%p, frames=%dn", mBuffer.raw, mBuffer.frameCount);
@@ -95,7 +95,8 @@ void AudioResamplerCubic::resampleStereo16(int32_t* out, size_t outFrameCount,
                 inputIndex = 0;
                 provider->releaseBuffer(&mBuffer);
                 mBuffer.frameCount = inFrameCount;
-                provider->getNextBuffer(&mBuffer);
+                provider->getNextBuffer(&mBuffer,
+                                        calculateOutputPTS(outputIndex / 2));
                 if (mBuffer.raw == NULL)
                     goto save_state;  // ugly, but efficient
                 in = mBuffer.i16;
@@ -130,7 +131,7 @@ void AudioResamplerCubic::resampleMono16(int32_t* out, size_t outFrameCount,
     // fetch first buffer
     if (mBuffer.frameCount == 0) {
         mBuffer.frameCount = inFrameCount;
-        provider->getNextBuffer(&mBuffer);
+        provider->getNextBuffer(&mBuffer, mPTS);
         if (mBuffer.raw == NULL)
             return;
         // LOGW("New buffer: offset=%p, frames=%d\n", mBuffer.raw, mBuffer.frameCount);
@@ -160,7 +161,8 @@ void AudioResamplerCubic::resampleMono16(int32_t* out, size_t outFrameCount,
                 inputIndex = 0;
                 provider->releaseBuffer(&mBuffer);
                 mBuffer.frameCount = inFrameCount;
-                provider->getNextBuffer(&mBuffer);
+                provider->getNextBuffer(&mBuffer,
+                                        calculateOutputPTS(outputIndex / 2));
                 if (mBuffer.raw == NULL)
                     goto save_state;  // ugly, but efficient
                 // LOGW("New buffer: offset=%p, frames=%dn", mBuffer.raw, mBuffer.frameCount);
@@ -181,4 +183,3 @@ save_state:
 // ----------------------------------------------------------------------------
 }
 ; // namespace android
-
index 9e5e254..6184795 100644
@@ -204,7 +204,8 @@ void AudioResamplerSinc::resample(int32_t* out, size_t outFrameCount,
         // buffer is empty, fetch a new one
         while (buffer.frameCount == 0) {
             buffer.frameCount = inFrameCount;
-            provider->getNextBuffer(&buffer);
+            provider->getNextBuffer(&buffer,
+                                    calculateOutputPTS(outputIndex / 2));
             if (buffer.raw == NULL) {
                 goto resample_exit;
             }
@@ -355,4 +356,3 @@ void AudioResamplerSinc::interpolate(
 
 // ----------------------------------------------------------------------------
 }; // namespace android
-