Transcripts
1. Welcome: Hello, my name is
Oliver Spryn. Welcome to the course, One App for Every Screen. In this course, I'm
going to show you just how easy it is to
build Android apps that fit all mobile
device form factors. We'll be covering phones, tablets, desktops, and yes, foldable devices
in all postures. Before we dive into
what we're building, I'd like to discuss
why we're taking the time to build one
app for every screen. I see four particular reasons to do that. The first reason is
because Google is placing a renewed emphasis on
large screens with a new Android operating
system called Android 12L. This is a forthcoming
operating system that's designed specifically
for large screens. Particularly they're making
several major improvements. The first improvement
they're making is around multitasking and being able to put apps into split screen mode. Then they've taken
their existing UIs and adapted them to
take advantage of the larger real estate that a tablet and a large
screen has to offer. They've also improved
compatibility with applications that don't support large
screens by default. You can think of this as being traditional phone-based
applications. This is Google's biggest
effort to date to take on the market for
large screen devices. The second reason is because
Android Studio now offers excellent tooling for targeting all of these different
kinds of devices. First and foremost, they have great emulator support for all of these different
form factors. In fact, they've recently introduced an emulator
type that can mimic every single type of device available
on the market, whether it be a phone, a tablet, a desktop,
or a foldable. You won't even have to switch emulators anymore in order to try out all these different form factors; you can stick to
one emulator and test every single
type of device. Whenever you're
building your UIs inside of Android Studio, especially if you're
using Jetpack Compose, they have advanced previewing tools so that you don't
even have to build your application to
see what it looks like on a variety
of form factors. It'll show you right
inside of the tool. The third reason is because
the market's momentum is growing for more device types
than just Android phones. Recently, Google has registered 100 million new Android tablets, and that brings the total
worldwide market for a large screen Android
devices to 250 million. That includes tablets and foldables, as well as Chrome OS devices. Speaking of Chrome OS, it's one of the fastest-growing operating
systems on the market. All of these devices are
considered large screen devices and they've seen a
92% year-over-year growth. Each of these devices is capable of running
Android apps out of the box because they all come pre-installed with Google Play. These are Google's
official numbers for the fourth quarter of 2021. And if things keep going
the way that they have, they're only going
to get better. The fourth and final reason for building these kinds
of apps is because major players in the market are all pushing it
in this direction. As I mentioned earlier, Google's pushing in this direction
with Android 12L. But let's talk about Microsoft
and Android for a minute. With the introduction
of Windows 11, Microsoft has promised us support for Android
applications. This is going to come by way
of the Amazon app store, and it opens up a
whole new market that previously
wasn't available. So as you can see,
the landscape for building applications
just changed quite a bit even over
the past few years. It used to be that you would just build an application to target a phone, and maybe, if you wanted, tablets as well. But the guidance on how to build an application for both of them was pretty vague, and the process was rather difficult. That meant that the tablet adoption rate was rather low. People just stuck with building apps for phones. That's no longer the case. As app developers, we can see where the market is going
and it behooves us to accommodate the
market before it gets there and your application
is forced to play catch up. I have four major
goals for this course. The first one, as I
mentioned earlier, is to build an
application that's optimized to run on phones, tablets, desktops, and
foldables in all postures. Now, what do I mean by a foldable posture? There are two primary categories of foldables on the market. One example is the Samsung Galaxy Z Fold. In its natural posture, you're going to open up the device like a book, from left to right. This is usually
called Book mode. However, you can turn the
device 90 degrees and have it sit on a tabletop oriented
in the opposite direction. This is called tabletop mode. Between book mode and tabletop mode, you have two different postures. Another device on the market is the Samsung Galaxy Z Flip, which by default opens in the opposite orientation, in tabletop mode. Our application is
going to be designed to run on all of these
devices and will be optimized for each
of these postures. The second goal of this
course is we're going to build this entirely
in Jetpack Compose. We're not going to be using
XML views for anything. This is going to be an
incredibly modern application. The third goal is we're going
to be using Material 3, more commonly known as Material You. This is the next generation of Google's Material Design system. It's designed to work in light mode and dark mode, just like in previous versions. But now the user can select custom color palettes derived from their wallpaper on devices running Android 12 and above. I'm not going to go into too much detail on this topic in this course, since it's not one of the primary goals, but we will be following the necessary rules in order to make it work. The fourth and final goal of this course is to be able
to support rotation. This is going to be necessary for us to be able to support both postures on foldable devices. Plus, it's just going to be a better user experience overall. We're going to use some
very similar practices to address each of these
goals in one sweep. Now, let's move on to
our very first topic. We're going to look at
what the final product of this course is going to look like.
2. The Final Product: Now that you know the four
primary goals for this course, we're ready to take
our next step. Before we dive into writing
our first line of code, let's take a moment to see
what the final product of this course looks like on
a few different devices. Throughout this course, I'm going to be using and testing our application on three different devices. The first one is a Google Pixel 2 XL, and it will stand in as just our basic non-foldable phone. Next up, I have a Samsung Galaxy Z Fold2, which of course is going to be our foldable device. Finally, I have a Google Pixel Slate. Now, technically this isn't an Android device, it's a Chrome OS device, but since Chrome OS devices are capable of running Android applications, it will stand in as our Android tablet just fine. Don't worry if you
don't have access to all of these different
kinds of devices. In fact, most of you probably only have a few of these lying around. As I mentioned earlier, Google has great support built into their emulators that allows you to emulate these kinds of devices. I'll dive into how to set those up later on. But it just feels more
impactful to be able to test on devices like this if
you have access to them. Now, let's turn our
attention to the app. Let's look at what
the final product of this course looks like. Here I am going to
show this to you on a Samsung Galaxy Z Fold2. Now, I chose this device because I think it showcases the best properties of our application. Our application is relatively simple. It only has three screens, and I'm looking at the
very first one here. This one is just a portal
to the other two screens. If I tap the first button, you can see I'm taken
to a screen which just provides information
about my display. It shows the width and
the height in dp, as well as something
called size classes, which we'll get into a
little bit more later on. This part is relatively
uninteresting. You can see when I
rotate the device, we definitely do have
rotation support. I'll put it back into
its upright position. And now I'm going to fold
the device. Just like this. You can see the information
on screen has changed. Now it says this
is in Book mode. That's because I'm
holding it like I'd be reading a book, naturally. And the information on here has changed to show me information about the hinge: where it is on the screen, the size of it, and
other pieces of information that the
OS has to offer. Now if I take the
device and I rotate it, this puts it into what's called tabletop mode. This is excellent if you want to sit the device down on a table and, say, watch a video. The information is relatively the same, except now it's telling me that the hinge is going in the opposite orientation. I'll orient this back up and fold it back out. Let's go back to the previous
screen so that we can see how we can use this information
in our application. Back on the application
home screen. If I press on the second button, you can see I'm
taken to a list of numbers from one to 50. This is what it
looks like on phones and other smaller
screen devices. This implements what's
called a master-detail view. If I press on, say, one of these numbers, you can see I go to a detail view, which gives me information about the number I just pressed. I can go back to the list, press another number, and get the same result. On a tablet, I would have a list on the left and, say, details on the right. Or, if I have a foldable device, the device is folded right down the center. And you can see I get
a very similar result. I still have my
numbers on the left with details on the right. And as I press
different numbers, you can see the details
on the right change. If I orient this in the opposite direction
in tabletop mode, you can see the information is then stacked
from top to bottom, and I get the exact
same behavior. Even though this is a
simple application, it will teach you all the fundamentals that you need to build a more complex app in line with the four goals that I presented earlier on in the course. It's simple so that we can focus on these goals without distraction, not because I'm leaving out important information that you're going to need whenever you go and build your own app someday. Next, let's take a look at the starter code for this course and get our IDE set up.
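The adaptive behavior shown in this demo ultimately comes down to a single layout decision driven by the window width. Here is a minimal, self-contained Kotlin sketch of that decision; the names and the 600 dp breakpoint are my assumptions for illustration (the breakpoint is borrowed from Google's compact-width boundary), not code from the course project:

```kotlin
// Hypothetical sketch of the demo's layout decision: one pane on narrow
// windows, list + detail side by side on wide ones.
enum class PaneLayout { SINGLE_PANE, LIST_DETAIL }

fun choosePaneLayout(windowWidthDp: Float): PaneLayout =
    if (windowWidthDp < 600f) PaneLayout.SINGLE_PANE // phones, folded foldables
    else PaneLayout.LIST_DETAIL                      // tablets, unfolded foldables
```

On the Z Fold2 from the demo, the folded outer display would fall below this kind of breakpoint and the unfolded inner display above it, which matches the behavior shown on screen.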
3. Project Setup: With our project's goals and final product firmly in place, let's pull down the sample code and get our IDE set up. I have the full project source code available in an online GitLab repository. I've organized and built that project in the exact same manner that we're going to go through it in this course. The first thing I'm
going to do is clone this repository to
my local machine. I'll copy the URL from here and then paste
it into my terminal. That will give you the final
product for the course. You're welcome to
explore it on your own. But I'm going to jump
back to an earlier point in time by checking
out some tags. You can see that if I list all of the tags in this repository, I'm given one that says start. I'll check out that tag to jump to the starting code. Each of these tags represents a significant milestone that we reach inside of our project. The start tag is obviously going to be the starting position, and the end tag is going to be the final product. And then there's a series
of tags in the middle which represent a significant
feature that we complete. For example, the
very first feature we're going to complete is measuring the screen size and
displaying it in the app. Once we do that, I'll have a tag inside of the repository so that you can go and reference it later on. This process repeats as we complete major features inside of the project. Before you can run this app, you're obviously
either going to need a real device or an emulator. If you do decide to
use an emulator, I recommend that you set up two specific kinds before proceeding. At minimum, I would recommend a tablet and a foldable device. A tablet we can think of as working as both a tablet and a desktop, thanks to its large screen size. A foldable device we can think of as
obviously a foldable, but when the screen
is folded shut, we can think of it
as a regular phone. Let's start setting
up the tablet first. I've opened up my project
inside of Android Studio, and then I'm going to open up the device manager and then
select Create Device. I usually pick the Pixel C because of its generous screen size, but you're welcome to select a tablet of any size. I'll also select the latest version of Android for the system image. The version you select is up to you, as long as you pick API 21 or above. Finally, I'll create the device. Now let's move on to setting up our foldable emulator. Back in the Device Manager, I'll select the 7.6" Fold-in with outer display. I chose this because it best emulates the Samsung
Galaxy Z Fold2, since the expanded
display folds shut and there's another smaller
display on the outside. Again, this device
definition is up to you, but I'd recommend using
some kind of foldable. I just selected one
that was representative of a majority of the
foldable market. I'll use the same version of Android and create
this emulator. With our emulators in place, let's run the app. For our first run, I'm going to build this for a foldable device. As you can see, this app has all the screens and routing in place to simplify the setup. From the home screen, I'll route to the View Screen Information screen. This screen has
just a scaffolding and placeholder text
in place for now. Similarly with the
Adaptive Layout screen, all I have is a list of
numbers from one to 50. I can't even get into the detail screen for the list of numbers that I showed you in the demo just yet. There's nothing adaptive about our application, so it wouldn't matter if I showed this to you on a phone, a foldable, or a tablet; everything would behave exactly the same. Now that we have
our project up and running and we understand the lay of the land, let's dive into measuring the screen and showing
information about it. We'll do that in the next video.
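The Git workflow from this lesson can be sketched as follows. Since I'm not reproducing the repository URL here, this sketch builds a throwaway local repository with a start tag so the commands can run anywhere; against the real GitLab clone, you would run the same git tag and git checkout commands.

```shell
# Self-contained demo of the tag workflow: build a throwaway repo with a
# "start" tag, then list the tags and check one out -- the same commands
# you would run against the cloned course repository.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag start

# List every milestone tag in the repository
git tag

# Jump to the starting code for the course (detached HEAD at the tag)
git checkout -q start
```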
4. Measuring Flat Screens: Since our application has all of the sample screens in place, it's time to populate it with relevant information. We're going to start doing that by extracting the metrics the operating system provides to us about the device's screen. While the information we'll obtain works on all Android and Chrome OS device types, it works best for flat-screen devices, or those without a foldable hinge. We'll get into how to measure foldables in a subsequent video. First, let's create a sealed interface called
ScreenClassifier. I'm going to put this inside of a package called utils.screen. Inside of there, create a FullyOpened data class. The data class should have a width and a height property and inherit from the ScreenClassifier interface. We'll add more content to this interface shortly. Notice that I didn't specify
a type for these properties. That's because I
want them to model two different aspects of each dimension: the size in dp, and what Android calls a size class. Let's start with the size class. According to Google's official recommendations, they classify all screen types into three sizes: compact, medium, and expanded. Notice how they stray from using terminology such as phone, tablet, or phablet. This is mainly because these
are brittle definitions. For example, it's easy to fool this approach on a tablet by resizing the window space assigned to your app to look like a phone. If your app thinks of itself as a phone or a tablet, we've already found a way to mislead its logic. Instead of building device-centric apps, Google recommends we consider the amount of
allocated to our app, regardless of
whether it consumes the whole screen or
just a part of it. That concept is what
they call a size class. Let's create a new
enum inside of the utils.screen package called WindowSizeClass. Inside of there, I'll specify each class type: compact, medium, and expanded. That's all for this file. Let's add it to a data class called Dimension. I'll create a new data class called Dimension and give it two properties: dp and sizeClass. The dp property will be of type Dp, as given to us by Jetpack Compose. You'll see how that works in a minute. The sizeClass will, of course, use our enum. Finally, we can set the Dimension as the width and the height property types for our FullyOpened data class. That's all we need to do to model our data. Let's start extracting it. I'll need access to a library
called WindowManager. In my app's Gradle file, I'll add a reference to that library and synchronize. The next part of this project begins in the main activity. I'll call a function inside of the setContent block called rememberWindowDpSize and save its output. This function doesn't exist just yet, but we'll make it in a moment. It will need access to the activity itself, so let's pass that in. Next, I'll create the function. I'll put this under the utils.screen package. Once it's created, it'll return a DpSize object. The following few
lines of code are pretty much straight from the documentation. I'll save a reference to the current screen's configuration. I'll be sure to pass the configuration into the remember function to survive recomposition. Then I'll get the current window metrics from the given activity. Now I can transform those metrics into a DpSize object like this: I'll get the bounds of the measured size, convert it into a rectangle, and extract the DpSize from that rectangle. Finally, I'll return the output
have the width and height of the screen that
is allocated to our app. It's important to
know that it may not be the entire screen size, but rather the amount of real estate that's
given to our app. That's all that
matters to us, since the OS takes care of the rest. To get these figures from the DpSize into our Dimension model, I'll create a class to transform it for us. I'll call it ScreenInfo and add it to the utils.screen package as well. Let's do the work inside of a function called createClassifier that takes in the DpSize object. From here, it's as simple as mapping the width and the height to the respective WindowSizeClass, as recommended in Android's documentation. Lastly, I'll return a FullyOpened object with the mapped WindowSizeClass values and the given dp for both the width and the height. Now all of the
groundwork is done. Let's start using
the information on the screen information view. Starting at the root, the main activity, I'll pass the DpSize to the top-level composable. I like to keep the activity as clean as possible, since it's hard to test, so I'll defer the mapping work to the composable. This composable will need both the window DpSize along with the ScreenInfo class. On the first line of this function, I'll create the classifier and save its output. This information is then sent
into the navigation graph. Then to the screen info route. Finally to the screen itself. Now we can replace the sample content with the info we extracted from
the classifier. I'll update the file like so: the top text will have the name of the classifier class. The next one will have a toString representation of the same object. I'll leave the next few text composables alone, since they deal exclusively with foldables. Our application is ready for its first test. I have a build of this app on two emulators. First, let's explore the phone. Going into the screen
information view, you can see our output has sensible measurements and
window size class values. If I rotate the device, the metrics are the same, just along the opposite axis. Now let's see if this still makes sense on a large tablet. Going to the same
screen on a tablet, I can see that these values still make sense. Now you can see that we have classified this device as expanded in both the width and the height. Rotating this device yields a similar outcome. We've garnered actionable insights from the device's screen that we'll soon put to work for us. But before we do that, let's take some time to understand how foldable screens work, particularly around the conventions and terminology that
we'll be using. We'll do that in the next video.
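To recap this lesson, here is a condensed, self-contained sketch of the data model and classification logic described above. The names (ScreenClassifier, WindowSizeClass, Dimension, createClassifier) follow the transcript; I've substituted plain Float for Compose's Dp type so the sketch runs anywhere, and the breakpoints (600/840 dp for width, 480/900 dp for height) come from Google's published window size class definitions rather than from the course code.

```kotlin
// Sketch of this lesson's data model. In the real project, dp would be
// Compose's Dp type; Float is used here so the sketch is self-contained.
enum class WindowSizeClass { COMPACT, MEDIUM, EXPANDED }

data class Dimension(val dp: Float, val sizeClass: WindowSizeClass)

sealed interface ScreenClassifier {
    data class FullyOpened(val width: Dimension, val height: Dimension) : ScreenClassifier
}

// Map a measured width/height (in dp) to the model, using Google's
// documented size class breakpoints.
fun createClassifier(widthDp: Float, heightDp: Float): ScreenClassifier =
    ScreenClassifier.FullyOpened(
        width = Dimension(
            widthDp,
            when {
                widthDp < 600f -> WindowSizeClass.COMPACT
                widthDp < 840f -> WindowSizeClass.MEDIUM
                else -> WindowSizeClass.EXPANDED
            }
        ),
        height = Dimension(
            heightDp,
            when {
                heightDp < 480f -> WindowSizeClass.COMPACT
                heightDp < 900f -> WindowSizeClass.MEDIUM
                else -> WindowSizeClass.EXPANDED
            }
        )
    )
```

A phone-sized window (for example, 411 x 891 dp) classifies as compact width and medium height, while a large tablet window classifies as expanded in both dimensions, matching the results seen on the two emulators.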
5. Learning Foldable Terminology: In the last video, we learned
how to measure and properly classify the size of a
screen on a phone or tablet. Now, we're going to pivot our attention to
foldable devices. In particular, we need to understand where the hinge that separates the two screens is located, how the hinge can affect the screen real estate, and whether it's an occluding or seamless hinge. To properly understand and utilize the best of
a foldable device, it's important to consider its natural and rotated postures. The first folded posture, Google calls book mode, since it sits in your
hand like a book. The second posture is
called tabletop mode. In this posture, you have the device folded on the table, much like a laptop. As you can see here, the
orientation of the device has one screen facing you and
one screen facing upward. It's important to
note that both of these phones that I
just showed you can be oriented in either
direction to be in tabletop or book mode. The only thing that
differs between these two scenarios is their
natural upright posture. As developers, we need to build for each of these scenarios. Next, we'll look at the hinge. A foldable device can
have one of two types of hinges: an occluding hinge or a seamless hinge. An occluding hinge is one where the hinge divides a
portion of the screen. An example of this would be
the Microsoft Surface Duo. As you can see,
there are clearly two separate portions
of the screen, and it's up to the
application developer to respond appropriately. A seamless hinge is one that allows the two screens to join completely. This way, you can see both screens simultaneously, as if they were one continuous display. Samsung's Galaxy Z Fold has a seamless display. Google would call this a non-occluding display. And that could potentially
allow us to make subtle design changes to our app to accommodate this
kind of screen. Finally, we need to
consider whether the device is folded at all. Just because they user
has a foldable phone, it doesn't mean
that they're using the feature at that moment. When the device is fully opened or flat, as
Google calls it, it can be thought of as
a non foldable device, like a regular phone or tablet. When the phone is half opened, meaning that the hinges are
followed this one degree, then we can engage
the foldable logic. At this point. Let's start by understanding what information
we can extract from the Android APIs to
determine whether the phone is in book mode or tabletop mode. To do this, we'll look at the folded state of the device. We can query the Android APIs to determine if the device has what Google calls a folding feature. If the device has that feature, then that feature will contain structured data with all the information that we need to know. In particular, these are the pieces of information
we can extract, whether the device is
fully open or half opened. When Google uses the term half opened, that doesn't indicate how much the hinge is opened or shut, but rather whether the device no longer perceives itself as flat and the user has closed it past a certain threshold. Next, we can determine the hinge orientation. Keep in mind that this is separate from the device's orientation. It tells you, when the device is in its natural upright position, which way the hinge is running, either horizontal or vertical. However, it does adjust for the device's orientation. For example, when the
Samsung Galaxy Z Fold is in the upright position with the hinge going
from top to bottom, the API will return vertical. Likewise with the Samsung
Galaxy Z Flip devices, the natural orientation
will show as horizontal. Rotating each of these devices onto their side will result in the hinge orienting in the other direction as
reported by the API. Next, we can determine
the hinge location. While most hinges on most modern foldable devices
go straight down the center, that's not always
guaranteed to be the case. Thus, we can extract where
the hinge is located on that screen so that
we can better suit our interface to that
particular device. Finally, we can determine
whether the hinge is separating or occluding. In essence, these metrics help us understand
whether the screen is one continuous
display or broken up into two separate displays with a visible divider in-between. The OS makes these
kinds of decisions for us, and it's up to us to decide if our application's design should differ slightly based on the hardware build. At this point, we have
a good understanding of what information we can
extract from the Android APIs in order to determine the various states and properties of foldable screens. We will need to continue to be aware of more device form factors as they come out, in order to ensure that our app works well with more scenarios as they emerge. As manufacturers experiment with new form factors, it will be increasingly important for us as developers to make sure that our apps are able to adapt seamlessly. In the next video, we'll begin extracting the relevant
information from foldable screens in
a similar manner to how we did that
from flat screens.
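Before moving on, the posture rules above can be summarized in code. In the real project, these values come from androidx.window's FoldingFeature (its state and orientation properties); the enums below are self-contained stand-ins I've made up so the decision logic itself can run anywhere.

```kotlin
// Stand-in enums for androidx.window's FoldingFeature.State and
// FoldingFeature.Orientation, so this sketch has no Android dependency.
enum class FoldState { FLAT, HALF_OPENED }
enum class HingeOrientation { HORIZONTAL, VERTICAL }

data class FoldInfo(val state: FoldState, val orientation: HingeOrientation)

// Book mode: half opened with a vertical hinge, held like a book.
fun isBookMode(fold: FoldInfo?): Boolean =
    fold != null &&
        fold.state == FoldState.HALF_OPENED &&
        fold.orientation == HingeOrientation.VERTICAL

// Tabletop mode: half opened with a horizontal hinge, sitting on a table.
fun isTabletopMode(fold: FoldInfo?): Boolean =
    fold != null &&
        fold.state == FoldState.HALF_OPENED &&
        fold.orientation == HingeOrientation.HORIZONTAL
```

A null FoldInfo stands in for a device with no folding feature at all, which is exactly the case the next lesson treats as a regular flat screen.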
6. Reading Foldable Data: With the terminology and common user expectations for foldable devices firmly under our belt, let's start modeling and extracting this information so we can build our app to adapt to these devices. Like our previous effort with non-foldable devices, let's begin by modeling what data we'll receive. Here is a quick overview
of what we can get. First, we can get
the hinge position. That is describing where the
hinge divides the screen, whether horizontally
or vertically. Next, we have a derived number that I call the
hinge separation ratio. This value isn't explicitly provided to us by
the Android APIs, but it's something
that can be calculated from the previous
value to tell us whether the hinge
goes straight down the center as it
does on most phones, or is slightly off center. This isn't likely to
be very helpful right now for applications running
in full-screen mode, as most phones evenly divide the screen
straight down the center. But as manufacturers experiment
with new form factors, this will allow us to
compensate for hinges that may not evenly divide the screen
into two equal parts. It also will allow us to
build applications that allow split screen mode
or floating window mode. We may find the hinge
separation ratio to be helpful whenever the user
has moved the application slightly off center
so that the hinge no longer goes straight
down the center of the screen, as it would if the application were in full-screen mode. Next, we can get whether
the hinge is separating. This is provided by the OS to tell us whether these
two screens should be considered visually separate
based on the hardware build. Finally, we have the occlusion type. This value is also provided by the OS to tell us whether a visually occluding hinge divides the screen into two parts, or if there is a flexible screen joining the two parts to appear as one continuous display. With that in mind, let's build a model to hold this
information for us. In Android Studio, I'm going to open up the ScreenClassifier interface again, and let's add another sealed interface inside of it called HalfOpened. This interface will model the foldable devices with a single hinge that are partially folded. It should inherit from the ScreenClassifier interface, just like the FullyOpened data class does. Inside of it, I'll add four properties: the hinge position of type Rect, the hinge separation ratio as a Float, isSeparating as a Boolean, and the occlusion type of type FoldingFeature.OcclusionType. Right now, this is a generic representation of some kind of foldable phone. Now we can become more specific. I'll create a data class inside of the HalfOpened interface called BookMode to model devices held in the book mode posture. It'll inherit from the HalfOpened interface and will be responsible for overriding all of the abstract values from the interface. I'll create one more data class right below called TabletopMode for the opposite posture. Once again, it'll inherit from the same interface and override the same values. That's all we need to
do to model our data. Let's start extracting it. I'll need access to a library called WindowManager. In my app's Gradle file, I'll add a reference to that library and synchronize. The next part of the project continues in the main activity. I'll use the WindowInfoTracker to extract information about the current window and save its output into a variable called devicePosture. Since this function uses Kotlin flows, I'll have to provide a default value using the stateIn function. I'll bind this flow to the lifecycle scope, start it eagerly, and provide an empty initial value. Notice how all of this was written outside the setContent block. Unlike our previous work, I'll pass the
information we've just obtained to the top-level composable. Once inside of the composable, I'll transform this state flow into a state like this. Note that anytime the flow updates the state, it will trigger a recomposition. Therefore, we will always be sure to have the latest information reflected in our app, thanks to this behavior. Finally, I'll pass this value as another parameter to my createClassifier function. From here, we can
transform this data into the model we created
just a moment ago. Since this function is no longer focusing solely on reading the size of a flat screen, I'll move its current behavior to a standalone function called createFullyOpenedDevice and pass it the window DpSize. We'll use the space inside of the createClassifier function to make a few high-level decisions for us. First, I'll need to know if this device is a folding screen. I can get that information by asking the WindowLayoutInfo object if it has a folding feature. I can do that like this: I'll call devicePosture.displayFeatures.find and look to see if any of those features is a folding feature. Then I'll cast the outcome of that operation and save it as a variable. This variable will be null on standard phones and tablets, as well as on
foldable devices when the screen is either
fully open or if it's folded shut
and the user is simply using the outer
smaller display, like on the Samsung
Galaxy Z Fold device. Next, I'll create a few
helper functions to help me determine if the device is in
book mode or tabletop mode. The first function I'll create is called isBookMode, and I'll provide the folding feature as a parameter. From here, we can check if the device is half opened and if the orientation is vertical. If these checks pass, the phone is in book mode. On a similar note, let's create the isTabletopMode function. Nothing here should surprise you. Once again, I'll check for the half-opened state and if the orientation is horizontal. Let's go back to
the top function and start using
these new functions. At this point, I'll double-check to ensure the createClassifier function is returning a ScreenClassifier type. That is the root sealed interface of our data model. Since we're now dealing with a more generic class of devices, this function must be able to model and transform all of these types. Under the folding feature variable, I'll create an if-else chain. First, if the folding feature is null, that means we're not doing anything special with a foldable device, so we can return the result of createFullyOpenedDevice. Next, I'll check if we're in book mode. If so, we'll return the result of a new function I'll call createBookMode, with the folding feature as its only parameter. This function will return ScreenClassifier.HalfOpened.BookMode. We'll populate that function in a moment. Next, I'll check for tabletop mode and create a similar function for that transformation operation. Here, it'll return ScreenClassifier.HalfOpened.TabletopMode. Finally, we need a catch-all scenario, which I'll simply consider to be a non-foldable device. I'll be sure to return the
values of each function. Now we can turn our attention to modeling the book mode posture. Most of these properties
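Putting the null check and the posture helpers together, the chain might look like this. It's a self-contained sketch with stand-in types, not the course's exact code:

```kotlin
// Stand-in for androidx.window.layout.FoldingFeature (assumed shape).
data class FoldingFeature(val state: State, val orientation: Orientation) {
    enum class State { FLAT, HALF_OPENED }
    enum class Orientation { HORIZONTAL, VERTICAL }
}

// Minimal version of the sealed data model described in the narration.
sealed interface ScreenClassifier {
    object FullyOpen : ScreenClassifier
    sealed interface HalfOpened : ScreenClassifier {
        object BookMode : HalfOpened
        object TableTopMode : HalfOpened
    }
}

fun isBookMode(f: FoldingFeature) =
    f.state == FoldingFeature.State.HALF_OPENED &&
        f.orientation == FoldingFeature.Orientation.VERTICAL

fun isTableTopMode(f: FoldingFeature) =
    f.state == FoldingFeature.State.HALF_OPENED &&
        f.orientation == FoldingFeature.Orientation.HORIZONTAL

// The if-else chain: null means there is no fold to care about; otherwise map
// the posture; anything else falls through to the fully open catch-all.
fun createClassifier(foldingFeature: FoldingFeature?): ScreenClassifier = when {
    foldingFeature == null -> ScreenClassifier.FullyOpen
    isBookMode(foldingFeature) -> ScreenClassifier.HalfOpened.BookMode
    isTableTopMode(foldingFeature) -> ScreenClassifier.HalfOpened.TableTopMode
    else -> ScreenClassifier.FullyOpen
}
```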
Now we can turn our attention to modeling the book mode posture. Most of these properties are simple to map. The hinge position comes straight from the bounds property of the folding feature; isSeparating and occlusionType are mapped the same way. I'll need to manually calculate the hinge separation ratio. I can do that by determining the window width, adding both the left and right bounds of the hinge together. Then I can take the left side of the hinge and divide it by the total window width. Notice how everything is converted to a float, since the only possible values for this operation will lie between 0 and 1. In most scenarios, that value will be 0.5 for hinges that separate the screen straight down the middle while the application is operating in full-screen mode. If the hinge were slightly to the left of center, that number would decrease; if the hinge were slightly to the right, it would increase. Finally, I'll use that value to set the hinge separation ratio. That's all we need for this function. The tabletop mode posture will be very similar. However, instead of calculating the hinge from left to right, I'll do the same math using the top and bottom values instead.
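The ratio arithmetic can be checked in isolation. This sketch assumes the hinge bounds arrive as plain integers; in the real app they come from the folding feature's bounds rectangle:

```kotlin
// Hinge separation ratio, as described: treat left + right as the window width,
// then see how far across the window the hinge's left edge sits (0.0..1.0).
fun hingeSeparationRatio(boundsLeft: Int, boundsRight: Int): Float =
    boundsLeft.toFloat() / (boundsLeft + boundsRight).toFloat()

// The tabletop variant runs the same math vertically.
fun hingeSeparationRatioVertical(boundsTop: Int, boundsBottom: Int): Float =
    boundsTop.toFloat() / (boundsTop + boundsBottom).toFloat()
```

A centered hinge on a 1080-pixel-wide window (left 540, right 540) yields 0.5; shifting the hinge left lowers the ratio and shifting it right raises it.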
Now we've modeled and extracted just about everything the Android APIs can tell us about foldable screens. Let's start to use these values. We already have all of this information being sent straight to the screen info view; now we can just start using it. For the bottom three text and spacer composables, I'll surround them with an if check. I want these values to show up on foldable devices when they're being folded, so I can check if the screen classifier is of type ScreenClassifier.HalfOpened. Inside of the if statement body, I'll keep the first text value the same; however, I'll display the hinge's data inside of the second one. I have a string resource already available to help us out. That resource is called hinge_position_coordinates, and it takes four parameters for the top, bottom, left, and right bounds respectively. Down on the last text composable, I'll display the hinge's width and height. Again, I have a string all ready to go for this purpose; it's called hinge_position_size, and it takes the width and height as the two parameters. Everything is ready
for our app to start reporting the data it
receives from the hinge. Let's try it out
on a real device. Going into the screen
information view, you can see the output
looks similar to how it did before we added
foldable support. However, once I begin
to fold the hinge, the UI suddenly changes. The data is showing us that the hinge is running from top to bottom and that we're in the book mode posture. Rotating the device shows similar information, but with the tabletop mode posture instead. At this point, our app is smart, but it isn't incredibly exciting or very useful. It's time to turn
our attention toward building an interface
that can adapt to this information and really start to leverage the advantages of each form factor. We'll begin that work in the next video.
7. Building an Adaptive Layout: Congratulations, you've reached the first major
milestone of this course. Thus far, we've built an app that can extract
there is to know about the screen on a phone, tablet, desktop, and
foldable device. Now let's dive into the
principles of how to build an interface which can adapt to these kinds
of modalities. Before we can start building
our interface and code, let's take the time to
design how this application should behave on
each device type. In particular, I'm establishing
the look and behavior of our adaptive interface on
phones and small tablets, large tablets and desktops, and foldable devices in the
book and tabletop postures. If you recall back to the
first video in this course, we had a master detail view. The Master view was a list
of numbers from one to 50, and the detail view would reflect which number was selected when the user pressed
on different row items. This could be thought of as
mimicking the behaviors of a very simple email
or messaging client. Let's dive into some specifics so that we can see how I intend
our app to work. First, let's look at
flat screen devices only. For compact and medium window size class devices, which would include devices like phones, small tablets, and mini tablets, I would like our app to show a full-screen master view. That is, the view with
a list of numbers. Tapping on any one of those numbers should
take the user to the Detail View and would show the number
the user selected. Of course, pressing the back
button from the Detail View will land the user back
on the master view. For expanded devices like
large tablets and desktops, the interface would
change slightly. The master view would show on the left one-third of the screen, while the detail view would consume the other two-thirds of the screen. The behavior would be the same as in the previous scenario, with two minor differences. If the user has just landed on the screen and has not selected a number from the list, the detail view should show some kind of default text value prompting the user to select a number. Once the number
has been selected, the details would be updated
to display that value. The other change
is a modification to the back button behavior, regardless of whether the
user has selected a number, pressing the back button
should always exit the master detail view and take them back to the
application's home screen. For foldable devices in
the book mode posture, this view would behave the
exact same as it would on expanded devices, with one minor change. Instead of dividing
the interface into one-third and two-thirds for the list and detail views, this interface mode would divide itself based on the
hinge location. The list will always
be to the left of the hinge and the details
will be to the right. This will likely
look just fine on most devices in
full-screen mode. But if the user has put your
app in a floating window, you may experience some
awkward positioning of the hinge in terms of
how it divides your app. In your app, you may wish to set some boundaries around
how large or small of a hinge separation ratio you are willing to
work with before switching your view to a more suitable arrangement
for this app. I'm not going to go into
that level of detail, but it's certainly
something to keep in mind. Finally, for foldable devices in the tabletop mode posture, I will be building the app
just like in book mode, except the list and detail views will be
stacked from top to bottom. The great thing about
this process is that it affords you a high
level of freedom. There is no right or wrong way to build this kind of interface, because the needs of your app may vary from those of another developer who decided to implement these details differently. My app is just one of many ways that you
could have done this. I chose this design because
it highlights and displays the many opportunities that you have as a developer
when building your app. Next, let's move on to
building these interfaces. First, we need to pass the screen classifier to the adaptive layouts route. Once the route has the necessary information, I'm going to use it to map a generic descriptor of our device type and posture to a specific description of how it should behave on this screen. I'll do that like this: I'll create an enum inside of the adaptive layouts route file called AdaptiveLayoutScreenType to describe how to lay out the interface, and I'll have one value for each scenario. I'll have list-only for compact and medium devices, to show just the list. Next, I'll have detail-only for those same devices, to show just the detail view. Next, I'll have list one-third and detail two-thirds for large tablets, to show the list and detail side by side. Next, I'll have list half and detail half for foldable devices in book mode, to show the list and detail side by side. Finally, I'll have list-detail stacked for foldable devices in tabletop mode, to stack the list and detail composables on top of each other. Notice how I strayed
from using terms like phone, tablet, and foldable. I'm describing how this interface should be laid out, not the hardware profile. Next, I'll map the screen classifier to one of these states. I'll create a private extension on the screen classifier interface for this job. It will require a Boolean to indicate when a row is selected; this information is key for the compact and medium devices to jump from the list to the detail view. I'll hard-code those values in for now; in a later video, we'll handle how to get this information. This function will return an AdaptiveLayoutScreenType enum. For the first scenario, I'll check if this device is fully open and has an expanded width. If so, this will be mapped to the list one-third and detail two-thirds type. Next, if this is fully opened and a row is not selected, then I'll return the list-only type. Notice that since I already handled tablets and desktops in the first scenario, I now know that I'm working with smaller devices, which will only ever show the list or the detail at any given time, but not both. Similarly, for the next case, if this is fully opened and a row is selected, then I'll return the detail-only type. Next, if we're in book mode, I'll return list half and detail half. After that, if we're in tabletop mode, I'll return list-detail stacked. Finally, for the catch-all scenario, I'll return list-only. This last scenario is only there to keep the compiler from warning us about non-exhaustive branches; it's actually not possible to hit this case, since the conditions above cover all of the possible combinations.
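Here is a dependency-free sketch of the enum and the mapping extension. The classifier types are stand-ins and the names are assumptions based on the narration, not the course's exact code:

```kotlin
// Layout intent, not hardware profile.
enum class AdaptiveLayoutScreenType {
    LIST_ONLY,
    DETAIL_ONLY,
    LIST_ONE_THIRD_DETAIL_TWO_THIRDS,
    LIST_HALF_DETAIL_HALF,
    LIST_DETAIL_STACKED,
}

// Stand-ins for the app's classifier model so the sketch compiles on its own.
enum class WindowWidthSizeClass { COMPACT, MEDIUM, EXPANDED }

sealed interface ScreenClassifier {
    data class FullyOpen(val widthSizeClass: WindowWidthSizeClass) : ScreenClassifier
    sealed interface HalfOpened : ScreenClassifier {
        object BookMode : HalfOpened
        object TableTopMode : HalfOpened
    }
}

// A private extension in the real file; the Boolean lets compact/medium
// devices jump between the list and the detail view.
fun ScreenClassifier.toAdaptiveLayoutScreenType(isRowSelected: Boolean): AdaptiveLayoutScreenType =
    when {
        this is ScreenClassifier.FullyOpen &&
            widthSizeClass == WindowWidthSizeClass.EXPANDED ->
            AdaptiveLayoutScreenType.LIST_ONE_THIRD_DETAIL_TWO_THIRDS
        this is ScreenClassifier.FullyOpen && !isRowSelected ->
            AdaptiveLayoutScreenType.LIST_ONLY
        this is ScreenClassifier.FullyOpen && isRowSelected ->
            AdaptiveLayoutScreenType.DETAIL_ONLY
        this is ScreenClassifier.HalfOpened.BookMode ->
            AdaptiveLayoutScreenType.LIST_HALF_DETAIL_HALF
        this is ScreenClassifier.HalfOpened.TableTopMode ->
            AdaptiveLayoutScreenType.LIST_DETAIL_STACKED
        // Unreachable: the conditions above cover every combination,
        // but the compiler needs an exhaustive `when`.
        else -> AdaptiveLayoutScreenType.LIST_ONLY
    }
```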
Now let's map these types to composables that we'll create in a few minutes. Back in the adaptive layouts route function, I'll clear out the reference to the existing screen. Next, I'll create a variable called adaptiveLayoutScreenType that uses rememberSaveable, with a default state of list-only. This state will quickly get replaced, so don't worry if it doesn't match the actual initial state of the device. Then I'll update the state with the value determined by my mapper, and I'll supply a value of false for the parameter; we'll take care of this in a later video. Finally, I'll create a when statement that will provide the correct composable for each of these states. I don't have these composables built just yet, so I'll simply provide a stub value for each of these cases. Notice how this approach doesn't use the navigation graph. I use the graph to get me from the home screen to the screen information route, and also from the home screen to the adaptive layouts route. But once I have a more complex view to build, it's up to me to make it work. The graph is only responsible for bringing me to a particular route. In the context of my app, you could think of it as bringing me to a particular feature. And once I'm there, I'm on my own. Okay. Let's go
through a checklist of what we already have at our disposal. For the list-only scenario, I have the adaptive layouts list screen, so I don't have to build that layout. Also, for the detail-only scenario, I have the adaptive layouts details screen, so I don't need to build that interface either. However, the other cases don't have any matching composables to make them work, so I'll have to build those layouts. I'll start with the one for large tablets and desktops. I'll create a new composable called adaptive layouts list one-third and detail two-thirds. All I need to do is use a row to place the list and details side by side. Since I want my list to take up one-third of the screen on the left, I'll surround it with a box and use the fillMaxWidth modifier with a value of 0.33. The details will get the same treatment, except the fillMaxWidth function won't get any value, so that it can stretch to fill the remaining empty space. That wasn't too bad.
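In Compose, that layout can be sketched as follows. This is a sketch that assumes a Compose project; the names are taken from the narration, and the list and detail content arrive as slots rather than hard-coding the course's screen composables:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun AdaptiveLayoutsListOneThirdAndDetailTwoThirds(
    list: @Composable () -> Unit,
    detail: @Composable () -> Unit,
) {
    Row(modifier = Modifier.fillMaxSize()) {
        // The list occupies the left third of the available width.
        Box(modifier = Modifier.fillMaxWidth(0.33f)) { list() }
        // No fraction here: the details stretch into the remaining space.
        Box(modifier = Modifier.fillMaxWidth()) { detail() }
    }
}
```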
Let's build our next layout, for foldables in book mode. I'll create another composable called adaptive layouts list half and detail half. This function will need a screen classifier so that I can get access to the hinge data; the parameter will be of type ScreenClassifier.HalfOpened.BookMode, since this layout only deals with book mode. I'll use a row and surrounding boxes once again to lay out my list and detail composables. The fillMaxWidth modifier for the list will use the hinge separation ratio to properly divide itself with respect to the hinge. Just like last time, the details will fill the remaining width. Last, I'll create one more composable called adaptive layout stacked, for foldables in tabletop mode. The only differences from the last layout are that the parameter type is ScreenClassifier.HalfOpened.TableTopMode, that I'll be stacking these with a column instead of a row, and that I'll use fillMaxHeight modifiers instead of fillMaxWidth.
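Both fold-aware layouts follow the same shape. This sketch assumes classifier types that expose the hingeSeparationRatio property computed earlier; names are assumptions from the narration:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxHeight
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

// Assumed model: each half-opened posture carries the ratio computed earlier.
sealed interface HalfOpened {
    val hingeSeparationRatio: Float
    data class BookMode(override val hingeSeparationRatio: Float) : HalfOpened
    data class TableTopMode(override val hingeSeparationRatio: Float) : HalfOpened
}

@Composable
fun AdaptiveLayoutsListHalfAndDetailHalf(
    classifier: HalfOpened.BookMode,
    list: @Composable () -> Unit,
    detail: @Composable () -> Unit,
) {
    Row(modifier = Modifier.fillMaxSize()) {
        // Divide at the hinge rather than at a fixed fraction.
        Box(modifier = Modifier.fillMaxWidth(classifier.hingeSeparationRatio)) { list() }
        Box(modifier = Modifier.fillMaxWidth()) { detail() }
    }
}

@Composable
fun AdaptiveLayoutStacked(
    classifier: HalfOpened.TableTopMode,
    list: @Composable () -> Unit,
    detail: @Composable () -> Unit,
) {
    Column(modifier = Modifier.fillMaxSize()) {
        // Same idea, split above and below the horizontal hinge.
        Box(modifier = Modifier.fillMaxHeight(classifier.hingeSeparationRatio)) { list() }
        Box(modifier = Modifier.fillMaxHeight()) { detail() }
    }
}
```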
Now, let's put all this together and build our adaptive interface. Back in the adaptive layouts route, I'll start to populate the when statement cases. List-only will use the adaptive layouts list screen; detail-only gets the adaptive layouts detail screen. List one-third and detail two-thirds will have, not surprisingly, adaptive layouts list one-third and detail two-thirds. I'll have to do one minor check for the last two scenarios, since I'm passing in the screen classifier. For list half and detail half, I'll do a check to ensure that the screen classifier is of type ScreenClassifier.HalfOpened.BookMode before passing it to the adaptive layouts list half and detail half composable. This is another one of those checks to make sure the compiler is happy: we've set up this scenario to only ever run when the device is in book mode, but our compiler doesn't know that. Last, for list-detail stacked, I'll do a similar check for tabletop mode and pass it into the adaptive layout stacked composable. Without much effort, we began to transform this
part of our app. Let's see what it looks
like on some emulators, starting with a non-foldable phone. As you can see, I can only get to the list view right now; I don't yet have the plumbing in place to react to taps on the list items. On a large tablet, I have the list and details side by side. Once again, tapping a list
item doesn't do anything, but we're certainly taking a
step in the right direction. On a foldable device
that has opened flat, I can see a similar layout
to that of a basic phone. However, as I start to fold the device shut into book mode, the details appear on the right, just like they do on a tablet. Finally, on a foldable
device in tabletop mode, I can see the list and details stacked on
top of each other. We're definitely making our way into the homestretch. It feels like we've made so much progress in just one video. And that's because
we can finally start to see the
results of our work. There's a bit more plumbing we're going to need to do to make this function as we would expect. View models are going to be key to get these separate composables talking as
one cohesive unit. We're going to take
care of those final details in the next video.
8. The Grand Finale: It's time to put the final
touches onto our app. We have an interface
that can adapt itself to various screen types and sizes, but the two major pieces that comprise its layout
behave separately. We're going to use a view
model to act like a form of plumbing to tie
this all together. Let's begin by defining what our ViewModel
should contain. For this simple application, I'm going to have it
contain two items. A number the user has selected, and a flag indicating whether the user has
ever selected an item. The selected value has a
rather obvious purpose. However, that flag will
make my job easier to decide when to show the detail view with
a selected number. That way I don't
have to resort to awkward conventions
like negative numbers or something like that to indicate that no value has been selected. Next, let's examine what kind of high-level user
interactions I would expect to perform
on this view model. Since I'm just selecting a number from a list and showing it in the details, I would boil this down to opening and closing
a detail view. From an interaction perspective, I would expect my view
model to be able to do these kinds of
manipulations to my state. Of course, this view model is simple because my app is simple; in a more complex scenario, I would probably want the data backing my list to also
appear in this model. However, for this simple app, I decided to hardcode those
numbers into my view. Nevertheless, it's
worth considering the other options that
we have at our disposal. Now let's move on to creating the view model in code. Back in Android Studio, I'm going to put
my view model file right beside my other
composable files. Ideally for larger projects, you should consider having a package for all of your models. But to keep things simple, I'll keep my model
close to the views. I'll create a class called AdaptiveLayoutsViewModel and have it inherit from the AndroidX ViewModel. This class will contain all of the business logic to manipulate my data, but I don't want it to serve as my actual state container; its role is simply to be the interfacing layer between my view and the actual model. Down below the class, I'll create a data class for this purpose, to describe
the state of my UI. I'll call it AdaptiveLayoutsUiState. This is where those properties that I mentioned earlier will live. I'll add a numberSelected property as a Boolean and set it to false, then add a numberFromList property as an Int and default its value to 0. My view model will need to hold a reference to this class and also be able to broadcast updates from it when one or more of its properties change. I'll do that like this: I'll create a private val called viewModelState and set it equal to a MutableStateFlow which receives an instance of our data class. I'll need to expose the state flow so that my view can access it. Conventionally, this is done by creating a public val with the name uiState, setting it equal to my view model state, and using the stateIn function to start a hot state flow. I'll provide the viewModelScope as the scope parameter, start this flow eagerly, and provide a default-constructed instance of our data class as the initial value. Let's start to manipulate our state with the two actions I described earlier. I'll create a function called openDetails that takes the selected number as an Int parameter. Then I'll update the view model state by copying the original value, updating numberSelected to true and numberFromList with the provided value. For the other scenario, I'll create a no-argument closeDetails function that does a similar operation on the view model state. In this case, numberSelected becomes false, and I'll reset numberFromList to 0.
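The state class and the two actions can be sketched without Android at all. In the real app the class extends androidx.lifecycle.ViewModel and holds a MutableStateFlow exposed via stateIn; here a plain var stands in for the flow so the copy-based update logic can run anywhere:

```kotlin
data class AdaptiveLayoutsUiState(
    val numberSelected: Boolean = false,
    val numberFromList: Int = 0,
)

class AdaptiveLayoutsViewModel {
    // Stand-in for the MutableStateFlow described in the narration.
    var uiState = AdaptiveLayoutsUiState()
        private set

    // Copy the original value, flipping the flag and recording the selection.
    fun openDetails(selectedNumber: Int) {
        uiState = uiState.copy(numberSelected = true, numberFromList = selectedNumber)
    }

    // Reset back to the "nothing selected" state.
    fun closeDetails() {
        uiState = uiState.copy(numberSelected = false, numberFromList = 0)
    }
}
```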
Now we need to propagate this model throughout the relevant parts of our app. Open up the multi-modal nav graph, and inside of the composable that takes us to the adaptive layouts route, I'll request access to our view model with the viewModel function. I'll do that with a variable called viewModel: I'll specify the type and set it equal to the viewModel function. Then I'll need to send that to our route. Similar to how we're handling updates from our screen classifier, I'll transform the UI state flow into a state. I'll create a val called uiState and delegate setting its property by calling adaptiveLayoutsViewModel.uiState.collectAsState. Like before, any time the flow updates the state of this variable, it will trigger a recomposition in Jetpack Compose. With our state firmly in place, I can pass uiState.numberSelected as the parameter to the toAdaptiveLayoutScreenType function. Now our screen type mapper will have the proper data to make the correct decisions. Ultimately, the parts
of the screen that need access to the uiState variable are the adaptive layouts detail screen and its ancestor composables, such as adaptive layout stacked and adaptive layouts list half and detail half. The only composable that doesn't require the UI state is the list, since it's capable of generating its own data. If the list were, for example, generated by a call to a server, then it would need the UI state to populate this data. But once again, since this is a simple app, we don't need it for this scenario. I've spared you the time of watching me update every relevant composable with a new parameter called uiState. But as you can see here, the route file now passes this new variable as a parameter to four of our composables. Intermediate composables such as adaptive layouts stacked simply act as mediators to pass the state on to the detail view. There's nothing too
complex going on here. I'd like to have my detail screen update itself whenever the view model state changes. For this example, I'll surround my text composable with an if-else statement, moving the existing text into the else block. For the condition, I'll say uiState.numberSelected. For the true scenario, I'll create a new text composable similar to the existing one, except the text property will use R.string.selected_item from my resources file, and we'll provide the selected number as the second argument to the getString function. This value will show the number the user has selected once the first number is picked from the list. At this point, I have data going down into
each of my views, but I need some events to bubble back up from these views once a user has interacted with them. For this case, the interaction will be selecting an item from the list, so that the view model can update its state. We'll start in the adaptive layouts list screen and work our way back up to the top. On our list screen, I'm jumping straight to the row-with-number composable. I'll add a clickable modifier to the surrounding box and specify an onClick handler. I'll create a new parameter on this composable to start the process of bubbling our event up to the top. I'll call it onSelectNumber, and it will be a lambda with a single Int parameter that doesn't return anything. I'll call that lambda from the onClick handler of my modifier. Of course, I'll need to propagate this the whole way up. Back in the adaptive layouts list screen composable, I'll add another onSelectNumber lambda with the same signature as a parameter. I'll set this lambda as the second argument to my call to the row-with-number composable. Next, I'll go into all of the intermediate composables and make the necessary updates there. Just like the list composable, they will need their own onSelectNumber parameter to bubble the event handler up to a higher level. Back on the adaptive
layouts route, we're at a place where we can finally handle these events. I'd like to make one small change to keep things a bit more tidy. I'd like one function to manage updating the view model based on the events received from the view, and another function to manage what we're currently doing, which is laying out the view based on the given screen classifier. I feel as though doing both state management and view classification in one function could get confusing and would dilute the meaning of both of them. This is more of a personal preference, but I feel as though it makes the end result more clear. Beneath this composable, I'll add another composable function with the same name but with a different signature. It'll need the screen classifier, the unwrapped UI state of type AdaptiveLayoutsUiState (you'll see why the type changed in a minute), and our friend onSelectNumber one more time. I'll then move everything from the top function into the bottom function except for the uiState variable. I'll also wire up onSelectNumber to each of the composables that need it. What do we gain by doing this? Well, you'll see once I fill out the top function. I'll call the second composable from inside of the first one. It'll get the screen classifier, the unwrapped and ready-to-use UI state, and, for the onSelectNumber parameter, I'll just give it a lambda; inside of there, I'll call openDetails on my view model and pass in the supplied number. Now you can see that all of my event handling is happening here, and it doesn't get lost inside of the when statement's cases. Plus, since this onSelectNumber is getting passed to multiple composables, if I ever need to make a change to how I handle this event, I can do it one time here instead of multiple times in the code below it.
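The two-function split can be sketched without Compose. The stateful "route" overload owns the view model, unwraps the state, and supplies the event handler; the stateless overload (represented here by a `render` lambda) only lays things out. All names are assumptions, and in the real app both functions are @Composable:

```kotlin
data class AdaptiveLayoutsUiState(
    val numberSelected: Boolean = false,
    val numberFromList: Int = 0,
)

class AdaptiveLayoutsViewModel {
    var uiState = AdaptiveLayoutsUiState()
        private set

    fun openDetails(selectedNumber: Int) {
        uiState = uiState.copy(numberSelected = true, numberFromList = selectedNumber)
    }

    fun closeDetails() {
        uiState = uiState.copy(numberSelected = false, numberFromList = 0)
    }
}

// Stateful overload: all event handling lives here, in one place, instead of
// being scattered across the when statement's cases.
fun adaptiveLayoutsRoute(
    viewModel: AdaptiveLayoutsViewModel,
    render: (state: AdaptiveLayoutsUiState, onSelectNumber: (Int) -> Unit) -> Unit,
) {
    render(viewModel.uiState) { number -> viewModel.openDetails(number) }
}
```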
Let's see what this looks like on a real device. I have my foldable with the current state of the app already loaded. Let's try it first
in fully open mode, you can see whenever I select
a number from the list, I'm taken directly to the details screen with the
selected number on display. That's our first time seeing this screen on a smaller
device like this. Let's try going back. You can see it took me back
to the app home screen, but not to the list of numbers. That's something we're
going to have to take care of in a minute. Going back to the list screen, I'll scroll my list down a little bit, fold the device, and
see what happens. If you'll notice, I've
jumped to my folded state, but the list has jumped
back up to the top. That's one more thing we're
going to need to fix. Google recommends not
changing the state of your UI unnecessarily whenever changing postures. Nevertheless, this is
looking pretty good. Pressing on different
numbers on the left causes the state to update every
single time on the right. If I go back, you can see I do land on the home
screen as I would expect. It's only on smaller devices, with the full-screen details view, that we have a problem with the back button. I'll go back to the list, select a number, and then
open up my device fully. As you can see, it takes
me to the detail screen, which is a sensible
decision since I've already selected
this number. I'll fold my device once again, go back home, and go into
the list one more time. But this time I'm not
going to select a number. If I open up the device fully, you can see it keeps me
on the list of numbers, which is a good choice since
I didn't select a number. We're looking pretty
good right now. Let's go and fix those
last two issues. The back handler
issue is easy to fix. As mentioned earlier, this only happens on compact and medium devices, which display the details in full screen. For the second adaptive layouts route function, I'll add another function parameter called onBackPressed, which doesn't take any arguments and doesn't return anything. Down below, for the detail-only scenario, I'll add in a BackHandler block. This will allow us to provide our own custom behavior whenever the back button is pressed in this scenario. Inside of there, I'll call onBackPressed. In the first composable at the top, I'll implement the behavior of this callback: I'll use a lambda which calls the closeDetails function on my view model. One problem is fixed, and one more remains.
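The back-press override is only a few lines in Compose; `BackHandler` comes from the androidx.activity activity-compose artifact. A sketch, assuming the names used above:

```kotlin
import androidx.activity.compose.BackHandler
import androidx.compose.runtime.Composable

// Only the full-screen detail case overrides the system back button.
// onBackPressed is supplied by the stateful route, where it calls closeDetails().
@Composable
fun DetailOnlyBackHandling(onBackPressed: () -> Unit) {
    BackHandler { onBackPressed() }
    // ... the adaptive layouts detail screen renders here ...
}
```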
To retain the scroll position of the list whenever the device's posture changes, I'll create a val inside of my second composable called listState and set it equal to rememberLazyListState. This variable needs to get passed the whole way down to the LazyColumn on the list screen, so I have to update all of the corresponding composables to accept this parameter. Once again, I've updated all of my composables except for the adaptive layouts details screen, since that is the only one that doesn't reference the list. Finally, on the adaptive layouts list screen itself, I'll pass that argument to the state parameter of the LazyColumn. One last time, let's run our app and see if our issues are corrected, looking only at what we fixed this time. Keeping my device flat, I'll select a number
from the list and try to go back
from the details view. This time I landed on the list,
which is what I intended. I'll scroll down a bit
and fold my device. As you can see, the
list remembered my scroll position between
posture changes and didn't jump back to the top. With that, we've wrapped up the last few details
for this course. Thank you for coming with
me along this journey. I hope that you found it insightful as you go and build your apps in the future for every kind of Android device that may hit the market. I invite you to join my
Discord group to discuss this course with me and with others who
are in your shoes. Stay curious, stay sharp, and as always, happy coding.