Lambda Labs Week II


Individual Accomplishments

This week I got basic sign-in with Google working, as well as logging in to our back end with the IDToken from Google. I had started building a test motion manager over the weekend, and I got that plugged into the app. I added Models for the motion data and the daily data, as well as a controller to handle that data. I wrote networking code for POSTing daily data up to the back end and started working on GETting it back to display to the user. I added a basic alarm manager that lets the user start, stop, and snooze an alarm sound. I added the Charts framework to use for displaying the user’s data and got a basic line chart set up, displaying one day’s worth of sample data. I also made some tweaks to the UI to bring the app more visually in line with the front end.
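
For context, here is a minimal sketch of what those models could look like. MotionData’s fields match what the motion manager below actually produces; the DailyData fields here are placeholders for the general shape we’re sending, not the exact definition:

import Foundation

// One interval's worth of accumulated movement (matches the motion manager below).
struct MotionData: Codable {
    let motion: Double   // accumulated acceleration for the interval
    let timestamp: Int   // Unix timestamp for the interval
}

// Rough shape of the daily record we POST to the back end.
// Field names here are placeholders, not the final API contract.
struct DailyData: Codable {
    let sleepTime: Int
    let wakeTime: Int
    let motionData: [MotionData]
}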

[Screenshots: Sleep Tracking screen and Stats screen]

At the beginning of the week the biggest unknown, and probably the thing I was most worried about, was OAuth. I decided to just use the simplest version of Google’s plug-and-play sign-in framework to get started, so that we would have something to test the back end with, and so that I could start filling out the app’s functionality without spending a bunch of time on OAuth. So while I was nervous to begin with, getting simple OAuth set up hasn’t proved to be as intimidating as I originally thought.
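
The back-end half of that is basically just sending the IDToken up once Google hands it back. Something like the sketch below, with a placeholder URL and field name rather than our actual route:

func logIn(with idToken: String, completion: @escaping (Result<Data, Error>) -> Void) {
    // Placeholder endpoint -- the real route lives in our API constants.
    let url = URL(string: "https://example.com/api/auth/google")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["idToken": idToken])

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let error = error {
            completion(.failure(error))
        } else if let data = data {
            completion(.success(data))
        }
    }.resume()
}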

I was also a little hesitant about the Charts framework because it was originally written for Android (I think in Java?) and was somehow adapted to Swift. There is also no Swift documentation for it; they just direct you to the Android docs. I thought it was worth a shot, though, because it has all the charts I was looking for and it is really customizable. After messing around with it for most of yesterday (and finding a good tag for it on Stack Overflow), I have a decent understanding of how it works and I was able to get it looking pretty much how I wanted.
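
For anyone curious, getting a basic line chart on screen ends up looking something like this. This uses the danielgindi/Charts API; some of the initializer labels have shifted between versions, so treat it as a sketch rather than copy-paste code:

import Charts

func displayChart(for motionData: [MotionData], in chartView: LineChartView) {
    // One entry per interval: x is the timestamp, y is the accumulated movement.
    let entries = motionData.map { ChartDataEntry(x: Double($0.timestamp), y: $0.motion) }

    let dataSet = LineChartDataSet(entries: entries, label: "Movement")
    dataSet.drawCirclesEnabled = false
    dataSet.mode = .cubicBezier   // smooth the line out a bit

    chartView.data = LineChartData(dataSet: dataSet)
}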

Detailed Analysis

The most interesting challenge I worked on this week was how to get the user’s motion data into a format that made sense. After sitting with the problem for a while, I decided to sample the acceleration once a second, average it across all three axes, and then add that to a total acceleration value for the current interval. This would give us a rough idea of how much movement the user had within a given period. It would also mean we’d have some variables we could tweak if we needed more or fewer data points. To start with I have been using ten-minute intervals, which leads to 36-48 data points over the course of a night, but I couldn’t really know if that would be too many or too few until we were able to actually plot it on a chart. After seeing it on the chart, I think either 10- or 15-minute intervals will probably be reasonable.

The motion manager works by having two timers, one for actually sampling the data, and one for resetting and starting a new interval.

private func startTrackingMotion() {
    if motion.isDeviceMotionAvailable {
        self.motion.deviceMotionUpdateInterval = readTime
        self.motion.showsDeviceMovementDisplay = true
        self.motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)

        // Configure a timer to fetch the motion data.
        self.readTimer = Timer(fire: Date(), interval: readTime, repeats: true,
                               block: { _ in
            if let data = self.motion.deviceMotion {

                // Get the acceleration of the phone minus gravity
                let x = data.userAcceleration.x
                let y = data.userAcceleration.y
                let z = data.userAcceleration.z

                // Average it out and add it to the accumulator
                let average = (abs(x) + abs(y) + abs(z)) / 3
                self.accumulator += average
            }
        })

        // Add the timer to the current run loop.
        RunLoop.current.add(self.readTimer!, forMode: RunLoop.Mode.default)
    }
}

This one is pretty simple (and pretty much ripped straight out of the documentation). It sets up the CMMotionManager, makes a timer that reads .userAcceleration once a second, averages the x, y, and z values, and adds the result to the accumulator.

@objc private func startNewInterval() {
    // Stop the current read timer
    readTimer?.invalidate()
    readTimer = nil

    // Add the accumulated data to the array and reset
    let motionData = MotionData(motion: accumulator, timestamp: Int(Date().timeIntervalSince1970))
    motionDataArray.append(motionData)
    accumulator = 0

    // Start the timer again
    startTrackingMotion()
}

This one resets the readTimer from the previous method, adds the accumulated data to an array, resets the accumulator, and then starts tracking the motion again.

Then there is just a simple public API for starting and stopping the tracking:

var intervalTime: TimeInterval = 60.0 * 10
var readTime: TimeInterval = 1.0

var isTracking: Bool { return intervalTimer != nil }

func startTracking() {
    startTrackingMotion()
    intervalTimer = Timer.scheduledTimer(withTimeInterval: intervalTime, repeats: true, block: { _ in
        self.startNewInterval()
    })
    delegate?.motionManager(self, didChangeTrackingTo: isTracking)
}

func stopTracking() {
    intervalTimer?.invalidate()
    intervalTimer = nil
    saveToPersistentStore()
    delegate?.motionManager(self, didChangeTrackingTo: isTracking)
}

func toggleTracking() {
    if isTracking {
        stopTracking()
    } else {
        startTracking()
    }
}
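
On the view controller side, the tracking button just calls toggleTracking() and the delegate callback updates the UI. Here is a hypothetical sketch of that wiring (the MotionManager and MotionManagerDelegate names, and the button handling, are assumptions rather than the app’s actual code):

import UIKit

class SleepTrackingViewController: UIViewController, MotionManagerDelegate {
    @IBOutlet weak var trackButton: UIButton!

    let motionManager = MotionManager()   // assumed class name for the manager above

    override func viewDidLoad() {
        super.viewDidLoad()
        motionManager.delegate = self
    }

    @IBAction func trackButtonTapped(_ sender: UIButton) {
        motionManager.toggleTracking()
    }

    // Called from startTracking()/stopTracking() through the delegate
    func motionManager(_ manager: MotionManager, didChangeTrackingTo isTracking: Bool) {
        trackButton.setTitle(isTracking ? "Stop Tracking" : "Start Tracking", for: .normal)
    }
}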

Team Reflections

I didn’t really put a lot of thought into picking or forming my team. None of the Labs projects really hit me as something I absolutely wanted to work on, so I basically just waited until someone in the Labs Slack channel mentioned that they needed an iOS developer, and the Sleep Tracker team was the first one I noticed. I wasn’t really sure what to expect because I hadn’t worked with any of the Web folks before outside of the hackathon in January, but I was glad to meet some new people and learn some things from a different discipline.

The first week we pretty much spent all day, every day in Zoom meetings, which I think helped us get a feel for each other’s personalities and how everyone thought through things. It also helped us start getting on the same page about the structure of our app. At the same time, it was exhausting, and this week we haven’t spent nearly as much time in meetings together (at least I haven’t; I don’t know if other people pair-programmed or anything).

I think our biggest challenge as a team is making timely, informed decisions. We all get along pretty well, everyone works hard and pulls their weight, and everyone seems to be pretty good at what they do (as far as I can tell). But when we come together, we seem to either gloss over decisions that will have a big effect on the app or spend a lot of time talking about small decisions that don’t really matter. It seems to me that this is because no one is the clear leader of the group (either by title or in practice), and I don’t know if there is anything we can do to fix that. As time goes on we seem to be getting better at communicating with each other and not wasting time on things that don’t matter, but it has been a challenge.