Optimizing A Vue App

Tue, 22 Nov 2022 · https://smashingmagazine.com/2022/11/optimizing-vue-app/

Single Page Applications (SPAs) can provide a rich, interactive user experience when dealing with real-time, dynamic data. But they can also be heavy, bloated, and perform poorly. In this article, we’ll walk through some of the front-end optimization tips to keep our Vue apps relatively lean and only ship the JS we need when it’s needed.

Note: Some familiarity with Vue and the Composition API is assumed, but there will hopefully be some useful takeaways regardless of your framework choice.

As a front-end developer at Ada Mode, my job involves building Windscope, a web app for wind farm operators to manage and maintain their fleet of turbines. Due to the need to receive data in real time and the high level of interactivity required, an SPA architecture was chosen for the project. Our web app is dependent on some heavy JS libraries, but we want to provide the best experience for the end user by fetching data and rendering as quickly and efficiently as possible.

Choosing A Framework

Our JS framework of choice is Vue, partly chosen as it’s the framework I’m most familiar with. Previously Vue had a smaller overall bundle size compared to React. However, since recent React updates, the balance appears to have shifted in React’s favor. That doesn’t necessarily matter, as we’ll look at how to only import what we need in the course of this article. Both frameworks have excellent documentation and a large developer ecosystem, which was another consideration. Svelte is another possible choice, but it would have required a steeper learning curve due to unfamiliarity, and being newer, it has a less developed ecosystem.

As an example to demonstrate the various optimizations, I’ve built a simple Vue app that fetches data from an API and renders some charts using D3.js.

Note: Please refer to the example GitHub repository for the full code.

We’re using Parcel, a minimal-config build tool, to bundle our app, but all of the optimizations we’ll cover here are applicable to whichever bundler you choose.

Tree Shaking, Compression, And Minification With Build Tools

It’s good practice to only ship the code you need, and right out of the box, Parcel removes unused JavaScript code during the build process (tree shaking). It also minifies the result and can be configured to compress the output with Gzip or Brotli.
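
As a rough sketch of what that compression configuration can look like (assuming Parcel 2 and its official compressor plugins; check Parcel’s documentation for your version), we can add the compressors to a .parcelrc file:

{
  "extends": "@parcel/config-default",
  "compressors": {
    "*.{html,css,js,svg,map}": [
      "...",
      "@parcel/compressor-gzip",
      "@parcel/compressor-brotli"
    ]
  }
}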

As well as minification, Parcel also employs scope hoisting as part of its production process, which can help make minification even more efficient. An in-depth guide to scope hoisting is outside of the scope (see what I did there?) of this article. Still, if we run Parcel’s build process on our example app with the --no-optimize and --no-scope-hoist flags, we can see the resulting bundle is 510kB — around five times larger than the optimized and minified version. So, whichever bundler you’re using, it’s fair to say you’ll probably want to make sure it’s carrying out as many optimizations as possible.

But the work doesn’t end here. Even if we’re shipping a smaller bundle overall, it still takes time for the browser to parse and compile our JS, which can contribute to a slower user experience. This article on Bundle Size Optimization by Calibre explains how large JS bundles affect performance metrics.

Let’s look at what else we can do to reduce the amount of work the browser has to do.

Vue Composition API

Vue 3 introduced the Composition API, a new set of APIs for authoring components as an alternative to the Options API. By exclusively using the Composition API, we can import only the Vue functions that we need instead of the whole package. It also enables us to write more reusable code using composables. Code written using the Composition API lends itself better to minification, and the whole app is more amenable to tree-shaking.

Note: You can still use the Composition API if you’re using an older version of Vue: it was backported to Vue 2.7, and there is an official plugin for older versions.
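
As a quick illustration of importing only what we need, here’s a minimal, hypothetical counter component (not from Windscope) written with the Composition API:

<script>
// Only the Vue functions we actually use are imported
import { defineComponent, ref, computed } from "vue";

export default defineComponent({
  setup() {
    const count = ref(0);
    const double = computed(() => count.value * 2);
    const increment = () => count.value++;

    return { count, double, increment };
  },
});
</script>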

Importing Dependencies

A key goal was to reduce the size of the initial JS bundle downloaded by the client. Windscope makes extensive use of D3 for data visualization, a large and wide-ranging library. However, Windscope only needs part of it (there are entire modules in the D3 library that we don’t need at all). If we examine the entire D3 package on Bundlephobia, we can see that our app uses less than half of the available modules and perhaps not even all of the functions within those modules.

One of the easiest ways to keep our bundle size as small as possible is to import only the modules we need.

Let’s take D3’s selectAll function. Instead of using a default import, we can just import the function we need from the d3-selection module:

// Previous:
import * as d3 from 'd3'

// Instead:
import { selectAll } from 'd3-selection'
Code Splitting With Dynamic Imports

There are certain packages that are used in a bunch of places throughout Windscope, such as the AWS Amplify authentication library, specifically the Auth method. This is a large dependency that contributes heavily to our JS bundle size. Rather than import the module statically at the top of the file, dynamic imports allow us to import the module exactly where we need it in our code.

Instead of:

import { Auth } from '@aws-amplify/auth'

const user = Auth.currentAuthenticatedUser()

We can import the module when we want to use it:

import('@aws-amplify/auth').then(({ Auth }) => {
    const user = Auth.currentAuthenticatedUser()
})

This means that the module will be split out into a separate JS bundle (or “chunk”), which will only be downloaded by the browser if and when it is needed. Additionally, the browser can cache these dependencies, which may change less frequently than the code for the rest of our app.
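
The same pattern can be written with async/await, which often reads more cleanly inside event handlers. A quick sketch (the sign-out button here is hypothetical):

const signOutButton = document.querySelector("#signOut"); // hypothetical element

signOutButton.addEventListener("click", async () => {
  // The chunk is only downloaded the first time this handler runs
  const { Auth } = await import('@aws-amplify/auth');
  await Auth.signOut();
});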

Lazy Loading Routes With Vue Router

Our app uses Vue Router for navigation. Similarly to dynamic imports, we can lazy-load our route components, so they will only be imported (along with their associated dependencies) when a user navigates to that route.

In our index/router.js file:

// Previously:
import Home from "../routes/Home.vue";
import About from "../routes/About.vue";

// Lazy-load the route components instead:
const Home = () => import("../routes/Home.vue");
const About = () => import("../routes/About.vue");

const routes = [
  {
    name: "home",
    path: "/",
    component: Home,
  },
  {
    name: "about",
    path: "/about",
    component: About,
  },
];

The code for the ‘About’ route will only be loaded when the user clicks the ‘About’ link and navigates to the route.

Async Components

In addition to lazy-loading each route, we can also lazy-load individual components using Vue’s defineAsyncComponent method.

const KPIComponent = defineAsyncComponent(() => import('../components/KPI.vue'))

This means the code for the KPI component will be dynamically imported, as we saw in the router example. We can also provide some components to display while it’s in a loading or error state (useful if we’re loading a particularly large file).

const KPIComponent = defineAsyncComponent({
  loader: () => import('../components/KPI.vue'),
  loadingComponent: Loader,
  errorComponent: Error,
  delay: 200,
  timeout: 5000,
});
Splitting API Requests

Our application is primarily concerned with data visualization and relies heavily on fetching large amounts of data from the server. Some of these requests can be quite slow, as the server has to perform a number of computations on the data. In our initial prototype, we made a single request to the REST API per route. Unfortunately, we found this resulted in users having to wait a long time — sometimes up to 10 seconds, watching a loading spinner before the app successfully received the data and could begin rendering the visualizations.

We made the decision to split the API into several endpoints and make a request for each widget. While this could increase the response time overall, it means the app should become usable much quicker, as users will see parts of the page rendered while they’re still waiting for others. Additionally, any error that might occur will be localized while the rest of the page remains usable.
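
In practice, this just means firing the requests independently instead of awaiting one combined response. A rough sketch (the endpoint names and render functions are hypothetical):

const endpoints = ['/api/kpi', '/api/power-curve', '/api/locations'];

endpoints.forEach((url) => {
  fetch(url)
    .then((response) => response.json())
    // Each widget renders as soon as its own data arrives
    .then((data) => renderWidget(url, data))
    // An error only affects its own widget
    .catch((e) => renderWidgetError(url, e));
});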

Conditionally Load Components

Now we can combine this with async components to only load a component when we’ve received a successful response from the server. Here we’re fetching the data, then importing the component when our fetch function returns successfully:

<template>
  <div>
    <component :is="KPIComponent" :data="data"></component>
  </div>
</template>

<script>
import {
  defineComponent,
  ref,
  defineAsyncComponent,
} from "vue";
import Loader from "./Loader";
import Error from "./Error";

export default defineComponent({
    components: { Loader, Error },

    setup() {
        const data = ref(null);

        const loadComponent = () => {
          return fetch('https://api.npoint.io/ec46e59905dc0011b7f4')
            .then((response) => response.json())
            .then((response) => (data.value = response))
            .then(() => import("../components/KPI.vue")) // Import the component
            .catch((e) => console.error(e));
        };

        const KPIComponent = defineAsyncComponent({
          loader: loadComponent,
          loadingComponent: Loader,
          errorComponent: Error,
          delay: 200,
          timeout: 5000,
        });

        return { data, KPIComponent };
    }
});
</script>

To handle this process for every component, we created a higher-order component called WidgetLoader, which you can see in the repository.

This pattern can be extended to any place in the app where a component is rendered upon user interaction. For example, in Windscope, we load a map component (and its dependencies) only when the user clicks on the ‘Map’ tab. This is known as Import on interaction.
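
A minimal sketch of that idea (the component and file names are hypothetical): because async components are only loaded when they’re first rendered, gating the render behind a click defers the import.

import { ref, defineAsyncComponent } from "vue";

// The chunk for MapView.vue is only downloaded when the component
// is first rendered, i.e., after the user clicks the 'Map' tab
const MapComponent = defineAsyncComponent(() => import("../components/MapView.vue"));

const showMap = ref(false);
const onMapTabClick = () => (showMap.value = true);

// In the template: <component v-if="showMap" :is="MapComponent" />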

CSS

If you run the example code, you will see that clicking the ‘Locations’ navigation link loads the map component. As well as dynamically importing the JS module, importing the dependency within the component’s <style> block will lazy-load the CSS too:

// In MapView.vue
<style>
@import "../../node_modules/leaflet/dist/leaflet.css";

.map-wrapper {
  aspect-ratio: 16 / 9;
}
</style>

Refining The Loading State

At this point, we have our API requests running in parallel, with components being rendered at different times. One thing we might notice is the page appears janky, as the layout will be shifting around quite a bit.

A quick way to make things feel a bit smoother for users is to set an aspect ratio on the widget that roughly corresponds to the rendered component so the user doesn’t see quite as big a layout shift. We could pass in a prop for this to account for different components, with a default value to fall back to.

// WidgetLoader.vue
<template>
  <div class="widget" :style="{ 'aspect-ratio': loading ? aspectRatio : '' }">
    <component :is="AsyncComponent" :data="data"></component>
  </div>
</template>

<script>
import { defineComponent, ref, onBeforeMount, onBeforeUnmount } from "vue";
import Loader from "./Loader";
import Error from "./Error";

export default defineComponent({
  components: { Loader, Error },

  props: {
    aspectRatio: {
      type: String,
      default: "5 / 3", // define a default value
    },
    url: String,
    importFunction: Function,
  },

  setup(props) {
    const { aspectRatio, url, importFunction } = props;
    const data = ref(null);
    const loading = ref(true);

    const loadComponent = () => {
      return fetch(url)
        .then((response) => response.json())
        .then((response) => (data.value = response))
        .then(importFunction)
        .catch((e) => console.error(e))
        .finally(() => (loading.value = false)); // Set the loading state to false
    };

    /* ...Rest of the component code */

    return { data, aspectRatio, loading };
  },
});
</script>
Aborting API Requests

On a page with a large number of API requests, what should happen if the user navigates away before all the requests have been completed? We probably don’t want those requests to continue running in the background, slowing down the user experience.

We can use the AbortController interface, which enables us to abort API requests as desired.

In our setup function, we create a new controller and pass its signal into our fetch request parameters:

setup(props) {
    const controller = new AbortController();

    const loadComponent = () => {
      return fetch(url, { signal: controller.signal })
        .then((response) => response.json())
        .then((response) => (data.value = response))
        .then(importFunction)
        .catch((e) => console.error(e))
        .finally(() => (loading.value = false));
    };
}

Then we abort the request before the component is unmounted, using Vue’s onBeforeUnmount function:

onBeforeUnmount(() => controller.abort());

If you run the project and navigate to another page before the requests have been completed, you should see errors logged in the console stating that the requests have been aborted.

Stale While Revalidate

So far, we’ve done a pretty good job of optimizing our app. But when a user navigates to the second view and then back to the previous one, all the components remount and are returned to their loading state, and we have to wait for the request responses all over again.

Stale-while-revalidate is an HTTP caching strategy where the browser serves a response from the cache if that content is still fresh, and otherwise serves the stale response immediately while it “revalidates” it against the network in the background.

In addition to applying cache-control headers to our HTTP response (out of the scope of this article, but read this article from Web.dev for more detail), we can apply a similar strategy to our Vue component state, using the SWRV library.
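
For reference, a stale-while-revalidate response header looks something like this (the values here are illustrative):

Cache-Control: max-age=60, stale-while-revalidate=86400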

First, we must import the composable from the SWRV library:

import useSWRV from "swrv";

Then we can use it in our setup function. We’ll rename our loadComponent function to fetchData, as it will only deal with data fetching. We’ll no longer import our component in this function, as we’ll take care of that separately.

We’ll pass fetchData into the useSWRV function call as the second argument. We only need to do this if we need a custom function for fetching data (maybe we need to update some other pieces of state). As we’re using an AbortController, we do; otherwise, the second argument could be omitted, and SWRV would fall back to the Fetch API:

// In setup()
const { url, importFunction } = props;

const controller = new AbortController();

const fetchData = () => {
  return fetch(url, { signal: controller.signal })
    .then((response) => response.json())
    .then((response) => (data.value = response))
    .catch((e) => (error.value = e));
};

const { data, isValidating, error } = useSWRV(url, fetchData);

Then we’ll remove the loadingComponent and errorComponent options from our async component definition, as we’ll use SWRV to handle the error and loading states.

// In setup()
const AsyncComponent = defineAsyncComponent({
  loader: importFunction,
  delay: 200,
  timeout: 5000,
});

This means we’ll need to include the Loader and Error components in our template and show and hide them depending on the state. The isValidating return value tells us whether there is a request or revalidation happening.

<template>
  <div>
    <Loader v-if="isValidating && !data"></Loader>
    <Error v-else-if="error" :errorMessage="error.message"></Error>
    <component :is="AsyncComponent" :data="data" v-else></component>
  </div>
</template>

<script>
import {
  defineComponent,
  defineAsyncComponent,
  onBeforeUnmount,
} from "vue";
import useSWRV from "swrv";
import Loader from "./Loader";
import Error from "./Error";

export default defineComponent({
  components: {
    Error,
    Loader,
  },

  props: {
    url: String,
    importFunction: Function,
  },

  setup(props) {
    const { url, importFunction } = props;

    const controller = new AbortController();

    const fetchData = () => {
      return fetch(url, { signal: controller.signal })
        .then((response) => response.json())
        .then((response) => (data.value = response))
        .catch((e) => (error.value = e));
    };

    const { data, isValidating, error } = useSWRV(url, fetchData);

    const AsyncComponent = defineAsyncComponent({
      loader: importFunction,
      delay: 200,
      timeout: 5000,
    });

    onBeforeUnmount(() => controller.abort());

    return {
      AsyncComponent,
      isValidating,
      data,
      error,
    };
  },
});
</script>

We could refactor this into its own composable, making our code a bit cleaner and enabling us to use it anywhere.

// composables/lazyFetch.js
import { onBeforeUnmount } from "vue";
import useSWRV from "swrv";

export function useLazyFetch(url) {
  const controller = new AbortController();

  const fetchData = () => {
    return fetch(url, { signal: controller.signal })
      .then((response) => response.json())
      .then((response) => (data.value = response))
      .catch((e) => (error.value = e));
  };

  const { data, isValidating, error } = useSWRV(url, fetchData);

  onBeforeUnmount(() => controller.abort());

  return {
    isValidating,
    data,
    error,
  };
}

// WidgetLoader.vue
<script>
import { defineComponent, defineAsyncComponent } from "vue";
import Loader from "./Loader";
import Error from "./Error";
import { useLazyFetch } from "../composables/lazyFetch";

export default defineComponent({
  components: {
    Error,
    Loader,
  },

  props: {
    aspectRatio: {
      type: String,
      default: "5 / 3",
    },
    url: String,
    importFunction: Function,
  },

  setup(props) {
    const { aspectRatio, url, importFunction } = props;
    const { data, isValidating, error } = useLazyFetch(url);

    const AsyncComponent = defineAsyncComponent({
      loader: importFunction,
      delay: 200,
      timeout: 5000,
    });

    return {
      aspectRatio,
      AsyncComponent,
      isValidating,
      data,
      error,
    };
  },
});
</script>

Updating Indicator

It might be useful if we could show an indicator to the user while our request is revalidating so that they know the app is checking for new data. In the example, I’ve added a small loading indicator in the corner of the component, which will only be shown if there is already data, but the component is checking for updates. I’ve also added a simple fade-in transition on the component (using Vue’s built-in Transition component), so there is not such an abrupt jump when the component is rendered.

<template>
  <div
    class="widget"
    :style="{ 'aspect-ratio': isValidating && !data ? aspectRatio : '' }"
  >
    <Loader v-if="isValidating && !data"></Loader>
    <Error v-else-if="error" :errorMessage="error.message"></Error>
    <Transition v-else>
        <component :is="AsyncComponent" :data="data"></component>
    </Transition>

    <!--Indicator if data is updating-->
    <Loader
      v-if="isValidating && data"
      text=""
    ></Loader>
  </div>
</template>
Conclusion

Prioritizing performance when building our web apps improves the user experience and helps ensure they can be used by as many people as possible. We’ve successfully used the above techniques at Ada Mode to make our applications faster. I hope this article has provided some pointers on how to make your app as efficient as possible — whether you choose to implement them in full or in part.

SPAs can work well, but they can also be a performance bottleneck. So, let’s try to build them better.

By Michelle Barker

A Guide To Keyboard Accessibility: JavaScript (Part 2)

Mon, 21 Nov 2022 · https://smashingmagazine.com/2022/11/guide-keyboard-accessibility-javascript-part2/

In the previous article, we talked about how to improve accessibility for keyboard users using HTML and CSS. Those languages can do the job most of the time, but certain design requirements and the nature of certain components create the need for more complex interactions, and this is where JavaScript comes into play.

For keyboard accessibility purposes, most of the job is done with basic tools that open many possibilities for keyboard interactivity. This article covers a toolset that you can mix into different components to improve accessibility for keyboard users.

The Basics

Most of the time, your job of enhancing components’ keyboard accessibility with JavaScript will be done with just a handful of tools: event listeners and a few methods from a couple of Web APIs that can help us in this task.

One of the most important tools we have for adding interactivity to our projects is events: functions that run when the element you’re watching registers a change.

keydown Event

One example of an event you can listen to with this Web API is the keydown event, which fires when a key is pressed.

Now, this isn’t used to add keyboard accessibility to elements like buttons or links because, by default, when you add a click event listener to them, it will also trigger when you use the Enter key (for buttons and links) or the Space key (buttons only). Instead, the utility of the keydown event comes when you need to add functionality to other keys.

To add an example, let’s come back to the tooltip we created in the first part of this article. I mentioned that this tooltip needs to be closed when you press the Esc key. We’d need a keydown event listener to check if the pressed key is Esc. For that, we need to detect the event’s pressed key. In this case, we’ll check the event’s key property.

We’ll use keycode.info to check the event dump for this key. If you press the Esc key on this page, you’ll notice that e.key is equal to "Escape".

Note: There are two other ways to detect the pressed key, and those are checking e.keyCode and e.which. They will return a number. In the case of the Esc key, it’ll be 27. But, keep in mind those are deprecated alternatives, and while they work, e.key is the preferred option.

With that, we need to select our buttons and add the event listener. My approach to this matter is to use this event listener to add a class to the button and add this class as an exception to show it using the :not() pseudo-class. Let’s start changing our CSS a bit:

button:not(.hide-tooltip):hover + [role="tooltip"],
button:not(.hide-tooltip):focus + [role="tooltip"],
[role="tooltip"]:hover {
  display: block;
}

Now, with this exception added, let’s create our event listener!

const buttons = [...document.querySelectorAll("button")]

buttons.forEach(element => {
  element.addEventListener("keydown", (e) => {
    if (e.key === "Escape") {
      element.classList.add("hide-tooltip")
    }
  })
})

And there you have it! With just a sprinkle of JavaScript, we have added an accessibility function to our tooltip. And that was just the start of what we can do with a keydown event listener. It’ll be a crucial tool to improve keyboard accessibility for multiple components, but there is another event listener we should take into consideration.

blur Event

There is another event we’ll use often. This one detects when the element stops receiving focus. This event listener is important, and most of the time, you’ll use it to reverse the possible changes you have made with the keydown event listener.

Let’s come back to the tooltip. Right now, it has a problem: if you press the Esc key to close the tooltip, and then you focus on the same element again, the tooltip won’t appear. Why? Because we added the hide-tooltip class when you press the Esc key, but we never removed this class. This is where blur comes into play. Let’s add an event listener to revert this functionality.

element.addEventListener("blur", (e) => {
  if (element.classList.contains("hide-tooltip")) {
    element.classList.remove("hide-tooltip");
  }
});

Other Event Listeners (And Why You Might Not Need Them)

I mentioned that we’re going to need two event listeners in our toolkit, but there are other event listeners you could use, like focusout or focus. However, I think use cases for them are quite scarce. focus deserves a special mention because even if you can find good use cases for it, you need to be very careful: if you don’t use it properly, you can cause a change of context.

A change of context is defined by WCAG as “major changes that, if made without user awareness, can disorient users who are not able to view the entire page simultaneously.” Some examples of change of context include:

  • Opening a new window;
  • Changing the layout of your site significantly;
  • Moving the focus to another part of the site.

This is important to keep in mind because creating a change of context at the moment of focusing on an element is a failure of WCAG criterion 3.2.1:

When any user interface component receives focus, it does not initiate a change of context.

— Success Criterion 3.2.1: On Focus

If you’re not careful, bad use of a function that listens to the focus event can create a change of context. Does that mean you shouldn’t use it? Not really, but to be honest, I can hardly find a use for this event. Most of the time, you’ll be using the :focus pseudo-class to create similar functionalities.

With that said, there is at least one component pattern that can benefit from this event listener in some cases, but I’ll cover it later when I start talking about components, so let’s put a pin on that topic for now.

focus() Method

Now, this is something we’ll be using with some frequency! This method from the HTMLElement API allows us to bring the keyboard focus to a particular element. By default, it’ll draw the focus indicator in the element and will scroll the page to the element’s location. This behavior can be changed with a couple of parameters:

  • preventScroll
    When set to true, it prevents the browser from scrolling to the programmatically focused element.
  • focusVisible
    When set to false, it prevents the programmatically focused element from displaying its focus indicator. This property works only in Firefox right now.
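
Both options are passed in an options object. For example, if we’re handling scrolling ourselves:

element.focus({ preventScroll: true }); // focus without scrolling the page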

Keep in mind that to focus the element, it needs to be either focusable or tabbable. If you need to bring the focus to a normally not tabbable element (like a dialog window), you’ll need to add the attribute tabindex with a negative integer to make it focusable. You can check out how tabindex works in the first part of this guide.

<button id="openModal">Bring focus</button>
<div id="modal" role="dialog" tabindex="-1">
  <h2>Modal content</h2>
</div>

Then we’ll add a click event listener to the button to make the dialog window focused:

const button = document.querySelector("#openModal");
const modal = document.querySelector("#modal")

button.addEventListener("click", () => {
  modal.focus()
})

And there you have it! This method will be very handy in a lot of components in tandem with the keydown event listener, so understanding how both of them work is crucial.

Changing HTML Attributes With JavaScript

Certain HTML attributes need to be modified with JavaScript to create accessibility in complex component patterns. Two of the most important ones for keyboard accessibility are tabindex and the more recently added inert. tabindex can be modified using setAttribute. This method requires two parameters:

  • name
    The name of the attribute you want to modify.
  • value
    The string value to set. If the attribute doesn’t require a particular value (for example, the attributes hidden or contenteditable), you’ll need to pass an empty string.

Let’s check a quick example of how to use it:

const button = document.querySelector("button")

button.setAttribute("tabindex", "-1")

setAttribute will help a lot for accessibility in general. (I use it a lot to change ARIA attributes when needed!) But, when we talk about keyboard accessibility, tabindex is almost the only attribute you’ll be modifying with this method.
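
For instance, a quick illustration of that ARIA usage (the disclosure button here is hypothetical and not part of the examples below):

const disclosureButton = document.querySelector("#disclosure"); // hypothetical element

disclosureButton.addEventListener("click", () => {
  // Flip aria-expanded between "true" and "false" on each click
  const isExpanded = disclosureButton.getAttribute("aria-expanded") === "true";
  disclosureButton.setAttribute("aria-expanded", String(!isExpanded));
});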

I mentioned the inert attribute before, and this one works a bit differently because it has its own property in the HTMLElement Web API. HTMLElement.inert is a boolean value that will let us toggle the inert attribute.

Keep in mind a couple of things before thinking about using this attribute:

  • You’ll need a polyfill because it’s not fully implemented in all browsers and is still quite recent. This polyfill created by Chrome engineers works pretty well in the tests I have made, so if you need this property, this is a safe approach, but keep in mind that it might have unexpected behaviors.
  • You can use setAttribute to change this attribute as well! Both work equally well, even with a polyfill. Whichever you decide to use is up to you.

const button = document.querySelector("button")

// Syntax with HTMLElement.inert
button.inert = true

// Syntax with Element.setAttribute()
button.setAttribute("inert", "")

This combination of tools will be handy for keyboard accessibility purposes. Now let’s start to see them in action!

Component Patterns

Toggletips

We learned how to make a tooltip in the previous part, and I mentioned how to enhance it with JavaScript, but there is another pattern for this kind of component called a toggletip, which is a tooltip that opens when you click it instead of when you hover over it.

Let’s check a quick list of what we need to make sure it happens:

  • When you press the button, the information should be announced to screen readers. The same should happen when you press the button again; pressing the button won’t close the toggletip.
  • The toggletip will be closed when you either click outside the toggletip, stop focusing the button, or press the Esc key.

I’ll take Heydon Pickering’s approach that he talks about in his book Inclusive Components. So, let’s start with the markup:

<p>If you need to check more information, check here
  <span class="toggletip-container">
    <button class="toggletip-button">
      <span class="toggletip-icon" aria-hidden="true">?</span>
      <span class="sr-only">More information</span>
    </button>
    <span role="status" class="toggletip-info"></span>
  </span>
</p>

The idea is to inject the necessary HTML inside the element with the role="status". That’ll make screen readers announce the content when you click it. We’re using a button element to make it tabbable. Now, let’s create the script to show the content!

// Select the elements (the content string here is just for illustration)
const toggletipContainer = document.querySelector(".toggletip-container");
const toggletipButton = document.querySelector(".toggletip-button");
const toggletipInfo = document.querySelector(".toggletip-info");
const toggletipContent = "Here is the extra information";

toggletipButton.addEventListener("click", () => {
  toggletipInfo.innerHTML = "";
  setTimeout(() => {
    toggletipInfo.innerHTML = toggletipContent;
  }, 100);
});

As Heydon mentions in his book, we use this approach of first removing the container’s HTML content and then using setTimeout to add it to make sure every time you click it, it’ll be announced for screen reader users. Now we need to check that when you’re clicking elsewhere, the content stops showing.

document.addEventListener("click", (e) => {
  if (toggletipContainer !== e.target) {
    toggletipInfo.innerHTML = ""
  }
})

With that out of the way, it’s time to add keyboard accessibility to this component. We don’t need to make the toggletip’s content show when you press the button because good HTML semantics do that for us already. We need to make the toggletip’s content stop showing when you press the Esc key and when you stop focusing on this button. It works very similarly to what we did for tooltips in the previous section, so let’s start working with that. First, we’ll use the keydown event listener to check when the Esc key is pressed:

toggletipContainer.addEventListener("keydown", (e) => {
  if (e.key === "Escape") {
    toggletipInfo.innerHTML = ""
  }
})

And now, we need to check the blur event to do the same. This one should be on the button element instead of the container itself.


toggletipButton.addEventListener("blur", () => {
  toggletipInfo.innerHTML = "";
});

And this is our result!

Roving tabindex

Tabbed interfaces are patterns that you can still see from time to time. They have a very interesting functionality when we talk about keyboard navigation: when you press the Tab key, it’ll go to the active tab panel. To navigate within the tab list, you’ll need to use the Arrow keys. This is a technique called roving tabindex that consists of removing the ability of the non-active elements to be tabbable by adding the attribute tabindex="-1" and then using other keys to allow the navigation between those items.

With tabs, this is the expected behavior for those:

  • When you press the Left or Up keys, it’ll move the keyboard focus onto the previous tab. If the focus is on the first tab, it’ll move the focus to the last tab.
  • When you press the Right or Down keys, it’ll move the keyboard focus onto the next tab. If the focus is on the last tab, it’ll move the focus to the first tab.

Creating this functionality is a mix of three techniques we saw before: modifying tabindex with setAttribute, the keydown event listener, and the focus() method. Let’s start by checking the markup of this component:

<ul role="tablist">
  <li role="presentation">
    <button id="tab1" role="tab" aria-selected="true">Tomato</button>
  </li>
  <li role="presentation">
    <button id="tab2" role="tab" tabindex="-1">Onion</button>
  </li>
  <li role="presentation">
    <button id="tab3" role="tab" tabindex="-1">Celery</button>
  </li>
  <li role="presentation">
    <button id="tab4" role="tab" tabindex="-1">Carrot</button>
  </li>
</ul>
<div class="tablist-container">
  <section role="tabpanel" aria-labelledby="tab1" tabindex="0">
  </section>
  <section role="tabpanel" aria-labelledby="tab2" tabindex="0" hidden>
  </section>
  <section role="tabpanel" aria-labelledby="tab3" tabindex="0" hidden>
  </section>
  <section role="tabpanel" aria-labelledby="tab4" tabindex="0" hidden>
  </section>
</div>

We are using aria-selected="true" to show which is the active tab, and we’re adding tabindex="-1" to make the non-active tabs unable to be selected with the Tab key. Tab panels should be tabbable if there is no tabbable element inside them, which is why I added the attribute tabindex="0". The non-active tab panels are hidden using the attribute hidden.

Time to add the navigation with the arrow keys. For this, we’ll need to create an array with the tabs and then create a function for it. Our next step is to check which is the first and last tab in the list. This is important because the action that will happen when you press a key will change if the keyboard focus is on one of those elements.

const TABLIST = document.querySelector("[role='tablist']");
const TABS = [...TABLIST.querySelectorAll("[role='tab']")];

const createKeyboardNavigation = () => {
  const firstTab = TABS[0];
  const lastTab = TABS[TABS.length - 1];
}

After that, we’ll add a keydown event listener to each tab. I’ll start by adding the functionality with Left and Up arrows.

// Previous code of the createKeyboardNavigation function
TABS.forEach((element) => {
  element.addEventListener("keydown", function (e) {
    if (e.key === "ArrowUp" || e.key === "ArrowLeft") {
      e.preventDefault();
      if (element === firstTab) {
        lastTab.focus();
      } else {
        const focusableElement = TABS.indexOf(element) - 1;
        TABS[focusableElement].focus();
      }
    }
    // The "else if" branches shown below slot in here
  });
});

This is what’s happening here:

  • First, we check that the pressed key is the Up or Left arrow. For that, we check event.key.
  • If that’s true, we need to prevent those keys from scrolling the page because, remember, by default, they do that. We can use e.preventDefault() for this goal.
  • If the focused element is the first tab, it’ll automatically bring the keyboard focus to the last one. This is done by calling the focus() method on the last tab (which we stored in a variable).
  • If that’s not the case, we need to check the position of the active tab. As we stored the tab elements in an array, we can use the method indexOf() to check the position.
  • As we’re trying to navigate to the previous tab, we can subtract 1 from the result of indexOf(), look up the corresponding element in the TABS array, and programmatically focus it with the focus() method.

Now we need to do a very similar process with the Down and Right keys:

// Previous code of the createKeyboardNavigation function
else if (e.key === "ArrowDown" || e.key === "ArrowRight") {
  e.preventDefault();
  if (element == lastTab) {
    firstTab.focus();
  } else {
    const focusableElement = TABS.indexOf(element) + 1;
    TABS[focusableElement].focus();
  }
}

As I mentioned, it’s a very similar process. Instead of subtracting one from the indexOf() result, we add 1 because we want to bring the keyboard focus to the next element.

Showing The Content And Changing HTML Attributes

We created the navigation, and now we need to show and hide the content as well as manipulate the attributes aria-selected and tabindex. Remember, when the keyboard focus is on the active panel and you press Shift + Tab, the focus should land on the active tab.

First, let’s create the function that shows the panel.

const TABPANELS = [...document.querySelectorAll("[role='tabpanel']")];

const showActivePanel = (element) => {
  const selectedId = element.target.id;
  TABPANELS.forEach((e) => {
    e.hidden = true;
  });
  const activePanel = document.querySelector(
    `[aria-labelledby="${selectedId}"]`
  );
  activePanel.removeAttribute("hidden");
};

What we’re doing here is checking the id of the tab being pressed, then hiding all the tab panels, and then looking for the tab panel we want to activate. We’ll know it’s the right panel because its aria-labelledby attribute uses the same value as the tab’s id. Then we show it by removing the attribute hidden.

Now we need to create a function to change the attributes:

const handleSelectedTab = (element) => {
  const selectedId = element.target.id;
  TABS.forEach((e) => {
    const id = e.getAttribute("id");
    if (id === selectedId) {
      e.removeAttribute("tabindex");
      e.setAttribute("aria-selected", "true");
    } else {
      e.setAttribute("tabindex", "-1");
      e.setAttribute("aria-selected", "false");
    }
  });
};

What we’re doing here is, again, checking the id attribute and then looking at each tab. We’ll check if this tab’s id corresponds with the pressed element’s id.

If it’s the case, we’ll make it keyboard tabbable by either removing the attribute tabindex (because it’s a button, so it’s keyboard tabbable by default) or by adding the attribute tabindex="0". Additionally, we’ll add an indicator to screen reader users that this is the active tab by adding the attribute aria-selected="true".

If it doesn’t correspond, tabindex and aria-selected will be set to -1 and false, respectively.

Now, all we need to do is add a click event listener to each tab to handle both functions.

TABS.forEach((element) => {
  element.addEventListener("click", (element) => {
    showActivePanel(element),
    handleSelectedTab(element);
  });
});

And that’s it! We created the functionality to make tabs work, but we can do a little something else if needed.

Activate Tab On Focus

Do you remember what I mentioned about the focus event listener? You should be careful when you use it because it can create a change of context by accident, but it has some use, and this component is a perfect opportunity to use it!

According to ARIA Authoring Practices Guide (APG), we can make the displayed content show when you focus on the tab. This concept is often referred to as a follow focus and can be helpful for keyboard and screen reader users because it allows navigating more easily through the content.

However, you need to keep a couple of considerations about it:

  • If showing the content means making a lot of requests and, by extension, slowing down the network, making the displayed content follow the focus is not desired.
  • If it changes the layout in a significant way, that can be considered a change of context. That depends on the kind of content you want to show, and doing a change of context on focus is an accessibility issue, as I explained previously.

In this case, the amount of content doesn’t cause a big change to either the network or the layout, so I’ll make the displayed content follow the focus of the tabs. This is a very simple task with the focus event listener: we can literally copy and paste the event listener we created and just change click to focus.

TABS.forEach((element) => {
  element.addEventListener("click", (element) => {
    showActivePanel(element),
    handleSelectedTab(element);
  });

  element.addEventListener("focus", (element) => {
    showActivePanel(element),
    handleSelectedTab(element);
  });
});

And there you have it! Now the displayed content will work without the need to click the tab. Doing that or making it work only with a click is up to you and is, surprisingly, a very nuanced question. Personally, I’d stick with making it show only when you press the tab because I think the experience of changing the attribute aria-selected by just focusing on the element can be slightly confusing. Still, it’s just a hypothesis on my part, so take what I say with a grain of salt and always check it with users.

Additional keydown Event Listeners

Let’s come back to createKeyboardNavigation for a moment. There are a couple of keys we can add. We can make the Home and End keys bring the keyboard focus to the first and last tab, respectively. This is completely optional, so it’s OK if you don’t do it, but just to reiterate how a keydown event listener helps out, I’ll do that.

It’s a very easy task. We can create another couple of if statements to check if the Home and End keys are being pressed, and because we have stored the first and last tabs in variables, we can focus them with the focus() method.

// Previous code of the createKeyboardNavigation function
else if (e.key === "Home") {
  e.preventDefault();
  firstTab.focus()
} else if (e.key === "End") {
  e.preventDefault();
  lastTab.focus()
}

And this is our result!

Opening And Closing The Modal

Modals are quite a complex pattern when we talk about keyboard accessibility, so let’s start with an easy task — opening and closing the modal.

It is indeed easy, but you need to keep something in mind: it’s very likely that the button that opens the modal and the modal itself are far apart in the DOM, so you need to manage the focus programmatically when you manage this component. There is a little catch here: you need to store which element opened the modal so that we can return the keyboard focus to this element at the moment we close it.

Luckily, there is an easy way to do that, but let’s start by creating the markup of our site:

<body>
  <header>
    <!-- Header's content -->
  </header>
  <main>
    <!-- Main's content -->
    <button id="openModal">Open modal</button>
  </main>
  <footer>
    <!-- Footer's content -->
  </footer>
  <div role="dialog"
    aria-modal="true"
    aria-labelledby="modal-title"
    hidden
    tabindex="-1">
    <div class="dialog__overlay"></div>
    <div class="dialog__content">
      <h2 id="modal-title">Modal content</h2>
      <ul>
        <li><a href="#">Modal link 1</a></li>
        <li><a href="#">Modal link 2</a></li>
        <li><a href="#">Modal link 3</a></li>
      </ul>
      <button id="closeModal">Close modal</button>
    </div>
  </div>
</body>

As I mentioned, the modal and the button are far away from each other in the DOM. This will make it easier to create a focus trap later, but for now, let’s check the modal’s semantics:

  • role="dialog" will give the element the required semantics for screen readers. It needs to have a label to be recognized as a dialog window, so we’ll use the modal’s title as the label using the attribute aria-labelledby.
  • aria-modal="true" helps ensure screen reader users can only read the content of the element’s children, blocking screen reader access to the rest of the page. However, as you can see on the aria-modal page of a11ysupport.io, it’s not fully supported, so you can’t rely just on that for this task. It’ll be useful for screen readers that support it, but you’ll see there is another way to ensure screen reader users don’t interact with anything besides the modal once it’s opened.
  • As I mentioned, we need to bring the keyboard focus to our modal, so this is why we added the attribute tabindex="-1".

With that in mind, we need to create the function to open our modal. We need to check which was the element that opened it, and for that, we can use the property document.activeElement to check which element is being keyboard-focused right now and store it in a variable. This is my approach for this task:

let focusedElementBeforeModal

const modal = document.querySelector("[role='dialog']");
const modalOpenButton = document.querySelector("#openModal")
const modalCloseButton = document.querySelector("#closeModal")

const openModal = () => {
  focusedElementBeforeModal = document.activeElement

  modal.hidden = false;
  modal.focus();
};

It’s very simple:

  1. We store the button that opened the modal;
  2. Then we show it by removing the attribute hidden;
  3. Then we bring the focus to the modal with the focus() method.

It’s essential that you store the button before bringing the focus to the modal. Otherwise, the element that would be stored in this case would be the modal itself, and you don’t want that.

Now, we need to create the function to close the modal:

const closeModal = () => {
  modal.hidden = true;
  focusedElementBeforeModal.focus()
}

This is why it’s important to store the proper element. When we close the modal, we’ll bring back the keyboard focus to the element that opened it. With those functions created, all we have to do is add the event listeners for those functions! Remember that we also need to make the modal close when you press the Esc key.

modalOpenButton.addEventListener("click", () => openModal())
modalCloseButton.addEventListener("click", () => closeModal())
modal.addEventListener("keydown", (e) => {
  if (e.key === "Escape") {
    closeModal()
  }
})

Right now, it looks very simple. But if that were all, modals wouldn’t be considered a complex pattern for accessibility, would they? This is where we come to a key task for this component, and we have two ways to approach it.

Creating A Focus Trap

A focus trap ensures the keyboard focus can’t escape from the component. This is crucial because if a keyboard user can interact with anything outside a modal once it’s opened, it can create a very confusing experience. We have two ways to do that right now.

One of them is checking each element inside the modal that can be tabbed to with a keyboard, then storing which are the first and the last, and doing this:

  • When the user presses Shift + Tab and the keyboard focus is on the first tabbable element (remember, you can check that with document.activeElement), the focus will go to the last tabbable element.
  • When the user presses Tab, and the keyboard focus is on the last tabbable element, the keyboard focus should go to the first tabbable element.

Normally, I’d show you how to write this code from scratch, but A11y Solutions made a very good script to create a focus trap. It works much like the arrow-key navigation we created for tab elements (as I mentioned before, patterns repeat themselves!), so I invite you to check this page.
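
For reference, here is a minimal sketch of that wrap-around idea, assuming a modal element like the one we’ll build below (the selector list is illustrative, not exhaustive):

modal.addEventListener("keydown", (e) => {
  if (e.key !== "Tab") return;

  // Collect the tabbable elements currently inside the modal
  const tabbables = modal.querySelectorAll(
    "button, [href], input, select, textarea, [tabindex]:not([tabindex='-1'])"
  );
  const first = tabbables[0];
  const last = tabbables[tabbables.length - 1];

  if (e.shiftKey && document.activeElement === first) {
    e.preventDefault();
    last.focus(); // wrap backwards: from the first element to the last
  } else if (!e.shiftKey && document.activeElement === last) {
    e.preventDefault();
    first.focus(); // wrap forwards: from the last element to the first
  }
});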

I don’t want to use this approach as the main solution because it’s not exactly flawless. There are some situations this approach doesn’t cover.

The first one is that it doesn’t take into account screen readers, especially mobile screen readers. As Rahul Kumar mentions in his article “Focus Trapping for Accessibility (A11Y)”, TalkBack and VoiceOver allow the use of gestures and double taps to navigate to the next or previous focusable element, and those gestures can’t be detected with an event listener because, technically speaking, they don’t happen in the browser. There is a solution for that, but I’ll put a pin in that topic for a moment.

The other concern is that this focus trap approach can lead to weird behaviors if you use certain combinations of tabbable elements. Take, for example, this modal:

Technically speaking, the first tabbable element is the first input. However, all the inputs in this example should move the focus to the last tabbable element (in this case, the button element) when the user presses Shift + Tab. Otherwise, it could cause weird behavior if the user presses those keys when the keyboard focus is on the second or third input.

If we want to create a more reliable solution, the best approach is using the inert attribute to make outer content inaccessible for screen readers and keyboard users, ensuring they can interact only with the modal’s content. Remember, this will require the inert polyfill to add more robustness to this technique.

Note: It’s important to note that despite the fact that a focus trap and using inert both help ensure keyboard accessibility for modals in practice, they don’t work exactly the same. The main difference is that setting everything but the modal as inert will still let you move outside of the website and interact with the browser’s own controls. This is arguably better for security concerns, but deciding whether you want to create a focus trap manually or use the inert attribute is up to you.

What we’ll do first is select all areas that don’t have the role dialog. As inert will remove all keyboard and screen reader interaction with the elements and their children, we’ll need to select only the direct children of body. This is why we let the modal container exist at the same level as tags like main, header, or footer.

// This selector works well for this specific HTML structure. Adapt according to your project.
const nonModalAreas = document.querySelectorAll("body > *:not([role='dialog'])")

Now we need to come back to the openModal function. After opening the modal, we need to add the attribute inert to those elements. This should be the last step in the function:

const openModal = () => {
  // Previously added code
  nonModalAreas.forEach((element) => {
    element.inert = true
  })
};

What about when you close the modal? You need to go to the closeModal function and remove this attribute. This needs to go before everything else in the code runs. Otherwise, the browser won’t be able to focus on the button that opened this modal.

const closeModal = () => {
  nonModalAreas.forEach((element) => {
    element.inert = false;
  });
// Previously added code
};

And this is our result!

See the Pen Modal test [forked] by Cristian Diaz.

Let’s suppose you don’t feel comfortable using the inert attribute right now and want to create a focus trap manually, like the one A11y Solutions shows. What can you do to ensure screen reader users can’t get out of the modal? aria-modal can help with that, but remember, the support for this property is quite shaky, especially for TalkBack and VoiceOver for iOS. So the next best thing we can do is add the attribute aria-hidden="true" to all elements that are not the modal. It’s a very similar process to the one we used for the inert attribute, and you can use the same element list we used for that topic as well!

const openModal = () => {
  //Previously added code
  nonModalAreas.forEach((element) => {
    element.setAttribute("aria-hidden", "true")
  });
};

const closeModal = () => {
  nonModalAreas.forEach((element) => {
    element.removeAttribute("aria-hidden")
  });
  // Previously added code
};

So, whether you decide to use the inert attribute or create a focus trap manually, you can ensure user experience for keyboard and screen reader users works at its best.

<dialog> Element

You might notice the markup I used and that I didn’t use the relatively new <dialog> element, and there is a reason for that. Yes, this element helps a lot by managing focus to the modal and back to the button that opened it. But, as Scott O’Hara points out in his article “Having an open dialog”, it still has some accessibility issues that, even with a polyfill, are not fully solved yet. So I decided to use a more robust approach for the markup here.

If you haven’t heard about this element, it has a couple of functions to open and close the dialog, as well as some new functionalities that will be handy when we create modals. If you want to check how it works, you can check Kevin Powell’s video about this element.

That doesn’t mean you shouldn’t use it at all. The accessibility situation of this element is improving, but keep in mind you still need to take certain details into consideration to make sure it works properly.

Other Component Patterns

I could go on with many component patterns, but to be honest, I think it’ll start getting redundant because, as a matter of fact, those patterns are quite similar between the different kinds of components you can make. Unless you have to make something very unconventional, those patterns we have seen here should be enough!

With that said, how can you know what requirements you will need for a component? This is an answer with many nuances that this article cannot cover. There are some resources like Scott O’Hara’s accessible components repository or the UK government’s design system, but this is a question that does not have a simple answer. The most important thing about this topic is to always test components with disabled users to know what flaws they can have in terms of accessibility.

Wrapping Up

Keyboard accessibility can be quite hard, but it’s something you can achieve once you understand how keyboard users interact with a site and what principles you should keep in mind. Most of the time, HTML and CSS will do a great job of ensuring keyboard accessibility, but sometimes you’ll need JavaScript for more complex patterns.

It’s quite impressive what you can do for keyboard accessibility once you notice that, most of the time, the job is done with the same basic tools. Once you understand what you need to do, you can mix those tools to create a great user experience for keyboard users!

By Cristian Díaz

Smashing Podcast Episode 54 With Stéphanie Walter: What Is User Journey Mapping?

Fri, 18 Nov 2022 · https://smashingmagazine.com/2022/11/smashing-podcast-episode-54/

This article is sponsored by Deque

In this episode we’re talking about User Journey Mapping. What is it, and how does it help us build better digital products? Vitaly talks to expert Stéphanie Walter to find out.

Transcript

Vitaly Friedman: She’s a UX expert, researcher, and product designer with expertise in design and strategy. She lives in Luxembourg, teaches at schools and universities, and facilitates workshops in small and large teams. Not a day passes by without Stéphanie sharing what she has learned on Twitter, on LinkedIn, and in her wonderful weekly roundup called the Pixels of the Week. She recently wrote a book on user journey maps and currently works for MAL consulting and the European Investment Bank, all around enterprise UX challenges. So we know she’s a fantastic designer and problem solver, but did you know that she once gave a full 45-minute-long conference talk on pizza recipe examples? Her favorite pizza, surprise, surprise, is a vegetarian one, but with added bacon or ham. My smashing friends, please welcome Stéphanie Walter. Hello Stéphanie. How are you doing today?

Stéphanie Walter: Yay, I’m smashing. And it’s Friday. So yeah.

Vitaly: Yay. It’s Friday. Does Friday usually mean pizza day for you?

Stéphanie: Yeah, pizza or Indian food as well.

Vitaly: Okay, that sounds wonderful. Well, Stephanie, it’s always such a pleasure to see you. I know that you spoke at the Smashing Cup Barcelona, I think a while back. It feels like it was, yeah, I don’t know, 150 years ago. So I always learned so much from you. So maybe it’s a good idea to start by just asking you to share a little bit of your story. So how did you even end up getting into this? I know that much of your time is spent around Enterprise UX, but eventually you had to go through a lot of different things and I know you did a lot of different things throughout your career to get there. So maybe share a little bit about your background and your story.

Stéphanie: So I have a master’s degree in design and languages. It’s a little bit strange. It’s both. It’s a degree where you learn how to build websites and how to translate them, basically. And after that I decided to do an internship in Germany. So I was working for a company and I think I finished what I was supposed to do in three months instead of six. So they said, hey, do you want to do mobile apps? I was like, yeah, I’ve never done that, but sure. So I got interested, and at the time there was not a lot of documentation on mobile and native design, but there was something in Apple’s guidelines and it was called Human Computer Interaction, something like that. So it kind of drove me into HCI and UX design. So we had a usability class at the university. We had kind of a few hours of how do you do usability tests, but that was basically it.

Stéphanie: And then during my internship I discovered UX design. I thought, oh, this is actually what I want to do. It’s quite interesting, understanding user needs and really building products and services that try to fit and match those needs. So I worked in Germany, then I went back to France to work for a web agency, and I said, yeah, if I’m going to leave the agency, I’m going to leave France. So this is basically what I did. And I got hired at the University of Luxembourg as a research assistant in the Human Computer Interaction department. So it was very interesting to work in an academic place. And after that I decided to go back to the private sector, and I was lucky: I worked with a company that had a lot of different contracts in a lot of different areas, and this is really when I started specializing in enterprise UX, because they were doing a lot of things that were either B2B or B2B2C, but it was always ugly, complex dashboards and a lot of data.

Vitaly: This sounds exciting stuff, isn’t it?

Stéphanie: Yeah, I remember I had to help with the design of a form that was for Luxembourgish customs, and the form was so complicated in terms of levels that I printed it on a piece of paper and I just drew lines to understand the hierarchy and information architecture of it. And it’s a little bit complicated because you have to have a number and all the numbers stack up. So if you have the tax number, every single digit means something. So it’s like, apples that were harvested between October and November in specific countries in Europe that are destined to become cider, all of that, it’s a number.

Vitaly: Well that does sound like a very exciting exercise in design.

Stéphanie: Crazily complex. But I had eight levels, and we have six levels in HTML: H1, 2, 3, 4, 5, 6. What do we do?

Vitaly: But then, look at that, you had established yourself as an expert in helping other people harvest apples, but instead decided to jump more into design and UX work.

Stéphanie: So it was a lot of different, really, really cool stuff around that. And I was like, yeah, you know what? I’m never going to be a fancy designer, like those designers who do amazing websites for marketing campaigns, or, I don’t know, I know a lot of people who do really cool stuff around museums and very immersive things. I was like, yay, I like complex, challenging, super heavy information architecture and solving problems for people who have to work with a tool on a daily basis. And I was like, okay, I think this is the kind of challenge that I like and I want to keep on working on that.

Vitaly: That’s interesting. It’s always a fascinating story to me because I think that we have a lot of articles about designing a perfect button and picking the right icons and making responsive tables and navigation, things like that. But when I dive deep into this really, really complicated world of enterprise applications, or multi-level, like six, seven levels of navigation, and, I don’t know, 20, 25 multi-page forms with PDFs integrated and all of that, I’m wondering, I mean, I know that this is your life, most part of it. I’m wondering at this point, do you think that the world in which we’re living, the enterprise world, is undiscovered? Are there a lot of books, articles, resources on that? How do you feel in that world?

Stéphanie: Honestly, there’s not a lot of content that specifically talks to that. And I don’t know why. Maybe because of NDAs and things like that, there’s a lot of stuff that you can’t show in those areas. Also, let’s face it, it’s not fancy. No one wants to see an interface that is supposed to help you optimize truck driving through an area or something super complicated. It’s not self-explanatory. So a lot of people, they don’t put those in their portfolios, because today there’s still this idea that you need some wow stuff in the portfolio. So I think there are a lot of people around here who actually work in enterprise UX with complex software like that, but there’s not a lot of content about it. But why is a mystery to me.

Vitaly: Well, you are changing that. In many ways, what really surprises me is that we see a lot of case studies about portfolio designs, about immersive campaigns, like you mentioned, things related to branding, I don’t know, big redesigns that happen in big companies and so on. But not necessarily about those things which are, I don’t know, insurance companies and truck configurators and whatnot. So that’s always challenging for me. But I also want to ask you, maybe on another side of that, when you think about enterprise UX, I think that many of us listening to this later, or in years from now, maybe will still be thinking about long meetings, long deadlines, complex workflows, a lot of legacy. Is that enterprise UX, or how would you describe it? How would you define it?

Stéphanie: It depends, necessarily. You can arrive on a project where they have nothing, and then there’s no legacy. You build from the ground up, but you still have a lot of meetings because the business is complex. So you need time to understand the business. You also need help; you can’t really go around those meetings because they are usually kind of useful to help you understand exactly what is going on. But then, yeah, it depends. Legacy is one problem. Another problem that I see and foresee in the future is, depending on when we are, those Gartner, Bloomberg, and all of those big companies, they either tell people that they need to internalize the team, and then you need to do in-house development. So you have a bunch of developers who will develop the enterprise product, often without designers.

Stéphanie: And then a few years later, Gartner goes like, no, you know what, no. Packages are the new thing. So stop having an internal IT team, buy packages, and then everyone decides to buy packages. And then there’s a new wave from, I don’t know, Gartner, Bloomberg, whatever, Harvard Business Review, those people that big companies listen to. And they say, yeah, no, let’s go hybrid, let’s do something like a package, but then the package is the business web services, and then you can still do the UI, the front end. So this is cycling through, and it’s really, really funny, because if you’ve worked in such an industry for a few years, you’ve seen the waves of: oh, let’s build everything internally. Oh, let’s buy a package. But look, the business is so complicated. We bought a package and now we discover it doesn’t fit our needs. I think we need to rebuild something internally. But then building things internally costs a lot of money, so let’s buy a package. It kind of comes around every few years.

Vitaly: I think it’s also related to the fact that there is just a lot of layers and with every layer comes a bit of politics involved and everybody has their own interests and KPIs and goals. And I’m wondering how do you even operate in this kind of environment? I mean, you must have very strong governance, very strong guidelines, and very strong buy-in from the top. The reason why I bring this up is because your work has been known for being you focus very much on accessibility, inclusive design, user-centric design. But then at the same time, if you have all the different layers of politics and all these different layers of business decisions, which in some situations might be more important even than the user research part, how do you even navigate that space? Do you find this or is it maybe the case that now in 2023 or '22 still when we’re recording this, that UX is kind of a part of what we do, that it’s understood by stakeholders?

Stéphanie: I think it’s that bad. The way we do it is we navigate around the mess, basically. We try to stay away. And I am lucky, I work with amazing people who actually shield the team from all the political stuff. So I have people working with us who try to deal with that so that we on the team can do our daily job. And also, I think I’m lucky because my manager, the person I’m referring to, understands what UX design is and why it’s useful. So they will basically fight for me to have some time to talk to the users. But I’m super lucky; in the place where I work, I think we are the only project that’s actually able to have a very user-centered approach. And in a lot of areas in enterprise UX, not everyone is that lucky.

Stéphanie: In a lot of places you have analysts who will ask the users what they want, and then the users are expected to provide a solution, and then the person will just write a technical ticket saying the user wants an export to Excel. Well, if you go there and you talk to the user: yeah, but today you don’t have an export to Excel button, so what do you do? And the user shows you the table; they will copy-paste the whole table, paste it in Excel, and then you’re like, okay, so it’s in Excel, what do you do now? And then the person goes into one of the columns that is the status of something. So it’s either active or inactive, and she just removes all the inactive rows from the table. So we have an analyst who is writing a story saying the user needs an export to Excel button, but the user doesn’t need an export to Excel button in this very specific situation. She needs to remove all the inactive stuff from the screen.

Stéphanie: And yet the export to Excel is the solution she came up with, because this is what she does today. But we could also maybe have filters in the browser, directly on that table, modern tables and things. So the user need here is not to export to Excel, it’s to clean up some stuff on the screen. And then you come here and you’re like, yeah, but actually no, we are not going to do the export. Well, we will do the export to Excel for other reasons, because it’s needed, but for this specific user need, it’s not an export to Excel that we will provide as a solution, it’s a filter on the table.

Stéphanie: And unfortunately, in a lot of places you have this kind of old-school analysis where they will go to people, ask them what they want, and then IT will implement it and hopefully find a place somewhere on the screen, in a corner, to put that button or that feature. So yeah, it’s really, really complicated. But I think at the same time, a lot of people like me are starting to push this kind of change forward. But then you don’t make friends all the time; the old-school people are not super happy about you coming and saying, wait a minute, that’s a weird requirement. Can we talk about that and really try to understand what’s going on here?

Vitaly: Yeah.

Stéphanie: Yeah, definitely.

Vitaly: For me, it’s also really this interesting part, because I feel like in a way everything is a little bit of a fight. Sometimes it’s a bigger fight, sometimes it’s a smaller fight. But one thing is, even those little things like discovery, I can imagine that it might take, I don’t know, literally months to just discover what the user needs are, how do we make it work? And then apply the good old UX process to it. Maybe you could describe your UX process in general for those kinds of projects. Is it just the regular way we do UX, or do you have to adjust, do something else? Maybe some methodologies work better than others? What has your experience been so far?

Stéphanie: So for me, it’s actually faster, I think, because here my users are the people who work for the bank I work for. We don’t work in the same department, but recruiting users for tests and things like that is actually easier. I can have a list of the people who use the tool. So in this specific case, discovery actually goes a little bit faster because we lose less time in recruitment. Also, when you go to people and you say, we are going to talk about the tool that you use in order to improve it, most of the time people are super happy to talk to you. Even if they have a lot of things to do, they’re happy that you take time to talk to them, to get invested in the project. And it’s a tool they use on a daily basis.

Stéphanie: So I think in my specific case, but here you have to understand the context: I work in the IT department internally, and we provide tools for the users. So I’m not working for a SaaS company that provides B2B employee tools that they resell. So it’s a very specific context, and for me it’s actually easier in this case. But for the process, what we have is, so we are redesigning a tool. So we have some basic data, which is server logs that say, okay, how many people visited this page? That’s kind of the baseline to say, if we migrate some pages, we should first migrate the ones that were visited the most. And we have kind of two streams: we have the pages visited the most, and also we have things around user tasks.

Stéphanie: So the users, they need to do some things in the whole loan process at the bank. To give you some context, the bank is lending money; it’s just that it’s the European Investment Bank. So they’re lending money to other countries, to other banks. So it’s not a loan for your car, but it kind of has the same principle. You need to build a project, you need to explain what you are going to do with the money, how it’s going to be used, and stuff like that. So there are a lot of different steps and there are a lot of tasks and activities around that. So a lot of what we do is we start with the user tasks. Sometimes people ask me about personas and I’m like, yeah, if I do personas, I’ll have 300 of those. And for me, it doesn’t matter if the person is an assistant, a lawyer, an engineer; we don’t discriminate based on personas, we do the user research on specific tasks, and then we check what type of user needs to do this task at which step of the process.

Stéphanie: And in the discovery phase, we will involve the different users from the different departments who will have to perform this task. We do a lot of interviews. So usually we have kind of an interview script: I prepare my research plan with the objective of the research, then I write my questions to kind of understand what people are doing. Often it’s kind of an open interview, where you will ask a few questions and then you will connect topics as they come around. Sometimes we go way beyond the research that we’re currently doing, but you’re like, yeah, we’re going to write this down, because eventually we will tackle the other topic that the person is currently talking about. So I’m just going to write it down and then take a note that whenever we tackle that specific topic, oh, that’s a user I can also talk to about that.

Stéphanie: So we do a lot of interviews. We do some kind of light shadowing, where we ask people to share their screen while they’re working on a specific feature or page. We would be like, okay, show us, where do you go? How does it work? It’s kind of observational studies, but with screen sharing. So we are not observing them work as they do on a daily basis, standing behind them; we ask them to show how they perform a task or an activity so that we can get a better understanding of it. And I’m also working a lot with business analysts to understand the business processes, because this is super complicated and I can’t know it all by heart to start with. So yeah, mostly discovery interviews. Then we will do some prototypes, when it’s a big feature, and some usability testing on those prototypes.

Stéphanie: What we do is, if it’s not such a big feature, we would sometimes just do a design, implement it, and then ask for feedback on the implemented version. If it’s something we are pretty confident about, and we know we may not have too many user issues or too many questions about it, then we will implement it and ask questions or do some light testing once it’s implemented.

Stéphanie: And then, what we did is, when we onboarded new users, we gave them a user diary, which is an Excel sheet, because I work for a bank. So the idea was they use the new interface for a month to see if anything is missing, if there are some things they don’t understand. And for a month they have this diary where they can log, every time there’s something that prevents them from doing their job, whether it’s a bug, missing content, a feature or something, they put it in the diary log, and then we check those diaries. We usually come back to them with specific questions about certain areas. And then we keep on improving the product. So we are not just doing kind of discovery before launching a feature. We also do a lot of back and forth once something is launched, and then, yeah [inaudible 00:21:05].

Vitaly: That sounds fantastic.

Stéphanie: Yeah, we support people. So we don’t do the training unless someone asks us to. But every department basically has some support people who are helping the user with different tools, including our tools. So what I usually do is I attend those smaller training sessions because it’s quite interesting also to see how people react the first time they see the interface, what are their questions, stuff like that. So we collaborate a lot. It takes a tremendous amount of time because then it’s one hour meetings where you just sit and listen and watch what the people are doing. So in terms of time, it takes a lot of time, but it also helps gather interesting data.

Vitaly: Do you also use a speak-aloud protocol when people are going through tasks, or do you mostly just observe how people deal with, I don’t know, with an interface while completing the tasks?

Stéphanie: No, we ask them to speak aloud, so we explain what speaking aloud means. Because if you’re not in UX, you might not know what it means.

Vitaly: Yes.

Stéphanie: So we try to make people feel comfortable. Some people are amazing at that. They will just tell you everything that is going on in their brain, where they click, what’s weird. And some people, even after you told them, please feel free to explain to us what you see on the screen, what’s happening in your mind, why do you want to click somewhere, all of that, they will still just click and say nothing. So we try to nudge them, like, oh yeah, when they stop, you just say something like, “Oh, you stopped. What is happening? Can you explain to us why?” So we try to nudge them without kind of helping them, but yeah, it’s not academic research.

Vitaly: Yeah, yeah, I understand. But do you feel, Stephanie, at this point, after all these interviews, that you can actually read people’s minds when they start clicking around or tap on buttons and so on? Can you just predict what people are doing or do you feel like it’s always almost a miracle? Surprises are always in there.

Stéphanie: Depends on the people. Some stuff you can kind of predict, especially when we test some of the older things that were developed years ago; we kind of anticipate the issues. But no, sometimes on the new things, we have interesting results and you’re like, yeah, actually that makes sense. We should have thought about this. That’s a really good idea. We will do that. We had a column with the name of the person, and we have a place where you have the team members for a specific project. And in the team place, what I did, I put a mailto on everyone’s name. So you click on it, and it auto-fills an email with the name of the project, nicely, the introduction. And people are super happy about that, because then they don’t need to copy-paste the email of the person anymore. They do all of that.

Stéphanie: And then I have another page where I have the name of the person, and I didn’t even think about putting the link there, and the user was like, “Yeah, but we have the link on the team page here. We also have the name. Why is it not there?” Ah, actually, yeah.

Vitaly: That makes sense.

Stéphanie: That makes a lot of sense. And it’s easy to develop. So yeah, quick win. Definitely. Yeah.

Vitaly: Yeah. Excellent. Well, one thing that surprised me is that you wrote this entire book about customer journey maps, and you published customer journey maps, but you did not mention customer journey maps as a part of your workflow. Does it not quite fit, or is it just something that you do for other projects?

Stéphanie: Because a customer journey map, for me, isn’t a research method; it’s a tool that you build based on the research. So basically, in some of the interviews, we worked on a project where people have to validate tasks, and we actually built a customer journey map for that. But basically we did some interviews, and the customer journey map was kind of an artifact, kind of a result, of the user interviews. So no, I use customer journey maps a lot, but it’s as if, say, I didn’t mention that I do wireframes. To me, it’s kind of the same thing.

Stéphanie: It’s not building a customer journey map just to build a customer journey map. You are basically doing some research, and sometimes you present it as a customer journey map, sometimes as a report, sometimes as an empathy map. But yeah, definitely, we have this amazing customer journey where one of the triggers is a human notification. And it always makes me laugh so much: they have a lot of emails and all of the stuff for notifications, but kind of the biggest notification, at some point, is an assistant picking up the phone and saying, “Hey look, you need to validate this before six tonight. Could you please do it?” So we have this whole journey with a human notification in the middle, which is quite funny.

Vitaly: Well, that’s the enterprise world for you, I guess, in some way or the other. I’m also wondering, I can only imagine that it takes quite a bit of time to even work in this space, but then you always find time to, I don’t know, read a lot, apparently, because every time I jump onto LinkedIn or on a blog, it’s just an incredible wealth of resources all around things from CSS to UX to freebies, goodies, whatever, everything. So how does that work? Where do you find all of the stuff? Do you just spend time, I don’t know, during your pizza experiences, reading articles all around design, front-end, and UX?

Stéphanie: Okay, so the big secret is most of the articles, I don’t read them, I listen to them.

Vitaly: No, come on Stephanie. You can’t say that.

Stéphanie: No. I listen to them.

Vitaly: Oh, so you listen to them.

Stéphanie: Yeah, I listen to them.

Vitaly: Please share details.

Stéphanie: Which means in Firefox you have, I think it’s called Reading Mode, but you can ask Firefox to read the article to you. So usually a lot of the super long in-depth articles, I don’t have the patience to read them on a screen, so I will just put the headset on my ears and then I will listen to the article while cooking, cleaning the dishes, doing [inaudible 00:27:34] for the moving of my flats and stuff like that. So yeah, that’s the secret. It’s like I’m multitasking and often I’m listening to the articles while doing manual labor that doesn’t need my brain.

Vitaly: Right. But I assume that compiling the list of links and writing on LinkedIn is done manually.

Stéphanie: Yeah, yeah. I actually have a tool where, basically, I can schedule things on LinkedIn and Twitter at the same time, so it makes it a little bit easier. It just allows me to post, so I enter it once. Sometimes I need to check for the handles, because the tool is able to get the Twitter handles but not the LinkedIn handles. So if I post something on LinkedIn and I need to tag someone, I need to go back to the post and edit it, which is a little bit annoying. And also, sometimes I will not read anything for a whole day and just read, I don’t know, 10 articles during the weekend. And I don’t want to annoy people with an article during the weekend, so I will just schedule the posts so that it’s not kind of overwhelming, posting everything at the same time. So yeah, organization and having an AI.

Vitaly: So I think that-

Stéphanie: A screen reader read the articles to me.

Vitaly: I think that enterprise world taught you how to be very well organized, but I’m sure that you’ve been organized even before that as well. I can almost hear some people in the back asking, "But I’m interested in getting into enterprise UX." So maybe kind of jumping back on quickly to the topic, I’m wondering are there particular roles, skills that you think are absolutely important to be able to comfortably navigate that enterprise UX space? Or is it just the regular UX work, just more challenging?

Stéphanie: I think definitely information architecture and the ability to make sense of a lot of data, and kind of organizational skills, at the information and UI level, because you will get a lot of information thrown at you in enterprise. The business is so complicated that you need to make sense of all the mess. And there’s an amazing book, I think it’s called How to Make Sense of Any Mess, by Abby Covert. She wrote a book on information architecture, and she wrote a second book on diagrams, which I really like as well. So yeah, I would say if you want to work in enterprise UX, it’s definitely being able to not be scared of the complexity, because you will get a lot of complexity to deal with on a daily basis. And then, yeah, information architecture is one of the biggest skills at some point, to make sure that you arrange the content in a way that makes sense to the user.

Stéphanie: You cannot comprehend all the complexity of the business behind that, yes, but it’s a bit tricky. Also, I think you need to understand that you might need to let go of all the UI principles that are taught in mainstream articles, like make the font bigger and put in some more white space. And no. I have places that want a small font; they want as much data as possible on the screen, they don’t want to scroll, so if you could condense everything on one single screen. So all these fancy articles that say, yeah, big font sizes are trendy, it’s like, yeah, sure, on blogs and marketing websites, but in my world, nah.

Vitaly: Yeah. So I can only agree with you on that, because I think in many ways what my job has been is really trying to, quite literally, show as much information as possible in a given place. And then of course you have a table with filters, with sorting, with multi-sorting, with all those things, and they all have to be visible, and then you need to add some batch actions on top of that and export features and whatnot. And then it has to, in some way or the other, work on mobile as well.

Vitaly: So this is a very different world for sure. So I think it would definitely be a good idea to see, just to be able to explore or see more case studies and work done in that world as well. But I heard that enterprise UX actually is just one part of your story, because you are also interested in other things, like, for example, illustrations and graphic design. And on your beautiful blog, you also of course have your beautiful illustrations, and every now and again one can see your illustrations. But do you even have time for it now that you are so, I don’t know, so deep into this messy world of tables, filters, forms, and all of that? Do you have time for your beautiful graphic design and illustration work?

Stéphanie: Yeah, usually in the evenings or weekends, when I have a topic that I’m interested in, too. This is also why I could not be a professional illustrator, because, I don’t know, how do you illustrate something someone else asks you to do? So all the illustration I’m doing is just like, yeah, I have this really fun idea and I’m going to draw it, and that’s basically it. So I would not be able to have someone tell me, oh, could you do an illustration on that? So I admire illustrators who are able to do that work for other people and stuff. For me, it’s kind of just a hobby and just having fun illustrating kind of funny things.

Stéphanie: And also, I blame Instagram; they have these Domestika advertisements. So Domestika is a website where you can learn a lot of art, craft, and stuff. I really like the illustration ones, I think the pottery, how to build furniture out of wood. I’ve done some courses on that. So it’s really all the creativity stuff, and sometimes they’re pushing advertisements at me on Instagram, like, “Hey, do you want to do a new class on [inaudible 00:33:37] illustration?” I was like, “Damn it.”

Vitaly: Right?

Stéphanie: Another class on [inaudible 00:33:44] illustration just for fun.

Vitaly: But it’s unlikely that you’re going to give up your wonderful world of enterprise UX for that. Will you?

Stéphanie: No, no. Yeah, I prefer, I think it’s kind of tough to be in enterprise UX, because there’s a lot of politics, and so it’s very, very demanding. But the artist world, the illustration world, that sounds even worse, with everyone thinking they can just do whatever they want. Copyright issues, content theft, AIs, as you know, that are fed the styles of a specific artist so you can create-

Vitaly: Well, who knows, Stephanie, maybe at some point we’re just waiting for a startup to be building an enterprise AI constructor bot something, using Midjourney and whatnot.

Stéphanie: I don’t see that. But that’s the same as a package. And everywhere they bought a package, I saw it fail. Either it didn’t work, or you end up with some users super frustrated. In one company, they bought a package and they could not have it evolve anymore because the company went bankrupt, and they basically repurposed some of the labels. So it’s like, okay, this label says something, but it does something completely else. And everyone knows that if you want to do that, you need to click on this label that has nothing to do with it. But they can’t change the label.

Vitaly: Oh well, I think that’s sad.

Stéphanie: Yeah, I’m like, yeah, I’m curious to see what AI and stuff can do for enterprise UX. But honestly I don’t know.

Vitaly: A little bit skeptical, I can tell from your voice and from the way you answer that question. Well, but I’m wondering if your students challenge you, because of course you also teach at the University of Strasbourg and also online, and you also provide mentorship. And not only do I wonder just how you find time for it all, but I understand that, I mean, for me it’s kind of the same story, I always kind of make time for it. It’s not about finding time, it’s about making time for it. But I do want to ask at this point, what is, for you, the most rewarding part about this? I can tell that, of course, you’re very passionate about accessibility and design, interface design, and the world of enterprise UX, one can tell, of course, as well. I think it might be a little bit difficult to convey to students all the difficult parts about enterprise UX and how to apply UX work in an enterprise setting. Or are you teaching something that’s maybe a little bit more just general UXy? And again, just the experience. What would be the most rewarding part for you of taking time to do this?

Stéphanie: So I am teaching mobile usability and UX design applied to mobile and responsive design. So it’s not specifically enterprise UX, but the cool thing is I’m teaching a framework which helps people build products and services with reusable components. And I think that’s the interesting part, because then the students are super happy that I’m providing a framework to help them deal with the complexity. And sometimes they will be like, yeah, I’m not sure where the teacher is going with this framework. But then, after they started working, they’re like, oh, I remembered your course, and then I used that framework and it totally helped me kind of make sense of the mess and stuff. So I have a very small part of my course that is dedicated to information architecture and how to build reusable components for responsive web design. So, components that can adapt to different screen sizes or that you can reuse in a big area or in a smaller area.

Stéphanie: So I’m not going into all the media query and container query details; that would be the technical part. But basically I’m preparing designers to be ready for that. And I had a lot of feedback that was like, “Oh, I went back to work on Monday and I reused what you taught us.” And I think this is what drives me. The best feedback you can give someone who is teaching a workshop is: on Monday morning, I was able to apply something I learned from you last week. Which is amazing, because then you really made a difference in that person’s work. So I think it’s the same for students. I’m pretty sure that they are not super convinced that everything I’m teaching them today is going to be useful, but at some point later in their career, they will remember: oh yeah, we didn’t know how to decide if we needed to build a mobile or native app.

Stéphanie: But then we remembered what Stephanie said about starting with the user need and checking what makes sense based on the user needs. So, user need first, and then decide on the technology, instead of deciding on the technology and trying to fit the user need into that technology, which makes very little sense. And it’s the same for some of my classes. While you are in the class, you’re like, yeah, okay, it’s interesting, but I’m not sure if I’m ever going to reuse that. And then a few years later, you’re working and you’re like, huh, yeah, actually, that was very useful.

Vitaly: That is indeed I’m sure, a very rewarding experience. I think it’s always just getting some sort of a feedback from people who, I don’t know, read something that you posted or found something useful. And all this is in many ways kind of the fuel of motivation to keep going and explore and keep exploring and keep growing. But also, actually one thing that I ask myself a lot based on that, every time it comes to a point where I realize, okay, well these are some bits of knowledge that I’ve gathered and I presented maybe, and then somebody learned from that, I always try to look back and see when did I learn that actually? Or how did I learn that? And how did it evolve over time maybe. So the question that I’m thinking of at this point is when you look back at your career, what do you wish you would’ve told yourself 10, 15 years ago? Or what do you wish you, I don’t know, how would you wish you would have structured your career? Or do you feel comfortable where you are? Do you feel like you would’ve done something a little bit differently, looking back?

Stéphanie: I would’ve loved to have more psychology. I have this whole thing on, we created some [inaudible 00:40:25] on cognitive biases a few years ago with a friend, and it’s kind of blown up. I have people in different institutions and in some companies using it to help their colleagues understand cognitive biases. And definitely, I think I would’ve liked to have a little bit more background in psychology, cognitive psychology, behavioral psychology, also as a UX designer. But at the same time, I think when I was a student, this kind of UX career path didn’t really exist per se. So in France you had something called ergonomie, which is an issue with ergonomics. That’s my problem: ergonomics. Ergonomics is chairs and posture and how you make sure, physically; but ergonomie in French is both: it’s either the chair, but it can also be the usability part.

Stéphanie: So it’s a tricky word to translate. So there are some master’s degrees in psychology that prepare you to become an ergonomist, in the English sense of the word, which is: you go to the people, you observe how they work, and then you try to help them with posture or moving things around, but also adapting their workspace and adapting the processes and stuff like that. And it’s kind of linked to, is it a master’s? No, it’s a licence, so it’s a bachelor’s in psychology in France. But this is not UX design again, it’s something else. So I wish I had kind of more of a background in that. So now I’m trying to compensate with some online learning, some books, and a lot of that. But yeah, definitely, I would say if you want to become a UX designer and you’re really interested in that, having a little bit of background in how the human brain works when it comes to memory, how we learn, how we perceive information, all of that can be very, very helpful.

Vitaly: The last one, that’s always something I ask, because who knows who is going to listen to this podcast at some point, well, this year or in a few years. Is there a particular dream project that you ever wished or always wished you would be working on? I mean, you are working in some pretty complex environments and on complex projects already, but if you had to pick your battle, what would be one of the really interesting products, companies, challenges, dream projects that you ever wished you could work on?

Stéphanie: I don’t know, honestly, but I think something around maybe service design, or more having stuff built into not only the UI but also the whole service around it. So maybe connected houses, or kind of helping in different areas, maybe working on some tools in a factory, for instance. I would love to do that: go there, observe how people work, and then optimize the tool to help them in their daily job. So kind of a mix between a little bit of interface, but also a lot of work around service design, process design, things like that. I think this would be cool. I’ve seen that Airbus, the plane company, was looking for an intern, and I was like, oh gosh, I would’ve loved to be a UX intern for Airbus when I started. Because I think it was working on the cockpits and the UI interface of a plane. That must be something quite challenging and quite fun.

Vitaly: You do like a good challenge, one can tell. Wait and see, wait and see, Stephanie, who knows. Well, so we’ve been learning today what enterprise UX is, but maybe as a final word from you, Stephanie, what have you been learning recently? You’ve been publishing, linking to all the articles and mentioning all the tools. What were some of the really interesting things that you learned recently?

Stéphanie: I think I shared it last week. Gary Reid had a super interesting take on [inaudible 00:44:44] 3.0 and the need, or not, for interfaces. A lot of interesting thoughts on how Web3 is not accessible and not open at all, even if people are trying to tell you that it’s open and easy. So yeah, I really liked her take, mixing what’s coming in the new web and kind of accessibility in the future, and how we will include human beings in different experiences. I really liked that talk that she gave, because it’s really cool to try to imagine and foresee the future in a no-bullshit way. Because it’s the end of the year, we will get the trends for next year, and it’ll be all bullshit. But her talk actually sounded grounded in reality. So that was really, really cool.

Vitaly: Yeah, that’s interesting. I’m actually very excited about this plethora of articles around all the cool and important and less important digital trends in 2023. I always look at them and then think, huh, let’s see how much better we have become at predicting the future. It didn’t look very good over the last decade or so at all. Well, if you, dear listener, would like to hear more from Stephanie, you can find her on Twitter, where she’s @WalterStephanie, on LinkedIn, where she’s Stephanie Walter Pro, and on her website, StephanieWalter.design. Stephanie will also be running a workshop on designing better products for Smashing Workshops, so please drop in if you have time. I totally forgot to ask about that, Stephanie, but is it true that you are running that workshop?

Stéphanie: Yay. I hope so. It should be a lot of fun. It should be about dealing with the complexity of products, giving people, again, a framework to help them. And I hope they will be happy and find something that will help them deal with complexity at work on the Monday morning also-

Vitaly: Oh, you do like complexity?

Stéphanie: Yeah.

Vitaly: Excellent, excellent. So that sounds very, very good. So please do join us on November 28th and December 12th, when we’re going to dive into designing better products with Stephanie. I’m very excited about that. Well, thanks for joining us today, Stephanie. Do you have any parting words or wisdom that you would like to send into the universe for people who actually managed to listen to the very last sentence of this podcast?

Stéphanie: No, I don’t know. I’m really bad at this.

Vitaly: We all are.

Stéphanie: Stay safe maybe. And yeah, I think stay safe is still something we need to make sure, even if the pandemic seems to be a little bit over. Yeah, stay safe.

]]>
hello@smashingmagazine.com (Drew McLellan)
<![CDATA[A Guide To Image Optimization On Jamstack Sites]]> https://smashingmagazine.com/2022/11/guide-image-optimization-jamstack-sites/ https://smashingmagazine.com/2022/11/guide-image-optimization-jamstack-sites/ Thu, 17 Nov 2022 10:00:00 GMT This article is sponsored by Storyblok

Today, creating content on the Internet is the norm, not the exception. It has never been easier to build a personalized website, digitalize a product and start seeing results. But what happens when we all start creating content on a massive scale, filling the web with more and more data, and storing hundreds of zettabytes of content?

Well, it is right at that moment that big brands and hosting platforms, such as Google and Netlify, seek solutions to optimize the data we generate and make the web lighter, and therefore faster: promoting measures and techniques to improve website performance and rewarding those who adopt them with better positions in their search engine rankings. That is why, today, web performance is as important and trendy as having an online presence.

What Is Web Performance?

Web performance refers to the speed at which a website loads, how fast it’s downloaded, and how an app is displayed on the user’s browser. It is both the objective measurement and the perceived user experience (UX) of an application.

If you minimize load times, improve UX, and make your website faster, more users will be able to access your site regardless of device or Internet connection; you will increase visitor retention, loyalty, and user satisfaction; and this will ultimately help you achieve your business goals and rank better in search engines.

The Relation Between Images And Web Performance

It is clear that when we think of content, the first thing that comes to mind is text. But if we leave text aside, what other options are left? Video? Images? Yes, images play a very important role on the web today, not only on platforms that are 100% focused on this asset, such as Pinterest or Unsplash, but on most of the web pages we browse on a daily basis.

According to the Web Almanac in late 2021, 95.9 percent of pages contain at least one <img> tag, and 99.9 percent have generated at least one request for an image resource.

(Source: the “Media” chapter of the Web Almanac 2021.)

And, just as the use of images is so present in content creation, optimizing them is key to improving our page load speed and rendering in the shortest possible time, as images are responsible for more bytes than any other resource. Although the image transfer size per page has been reduced in recent years, thanks to new image optimization techniques, there is still a lot of work to be done.

Images are crucial elements for performance and UX, and data extracted from Core Web Vitals metrics such as Largest Contentful Paint, which attempts to identify the most important piece of the above-the-fold content on a given page, proves this.

According to the analysis carried out in the performance section of the Web Almanac, the img tag represents 42% of the LCP elements of websites, while 71–79% of pages have an image as their LCP element, since images can also be applied as backgrounds using CSS. This data makes it clear that there will be no good performance without well-optimized images.

Key user-centric metrics often depend on the size, number, layout, and loading priority of images on the page. This is why a lot of our guidance on performance talks about image optimization.

Addy Osmani

Why Is Image Optimization So Important For A Jamstack Site?

As you may already know, image optimization is the process that a high-quality image has to go through to be delivered in ideal conditions, sometimes with the help of an Image Transformation API and a global Content Delivery Network (CDN) to make the process simpler and scalable.

And while optimizing images is a must in any application, in the Jamstack ecosystem it is even more critical, considering that one of the main goals of the Jamstack architecture is to improve web performance.

Jamstack is an architectural approach that decouples the web experience layer from data and business logic, improving flexibility, scalability, performance, and maintainability.

Jamstack.org

A Jamstack site is decoupled: the front end is separated from the backend and pre-built into highly optimized static pages before being deployed. But it’s not all static. It also allows dynamic content by using JS and APIs to talk to backend services.

And you might ask, what do images have to do with this static site architecture? As Web Almanac addresses in the section on the impact of images on Jamstack sites, images are the main bottleneck for a good UX. Most of the blame lies with using older formats, such as PNG and JPEG, instead of using the next generation ones, such as WebP or AVIF, making the user wait too long and producing poor scores in Core Web Vitals metrics.

But if you’re worried that you’re not getting the performance you expected because of large, poorly optimized images, don’t worry because that’s what you’re reading this article for!

Fixes To Common Problems

In most web performance measurement tools, such as WebPageTest or PageSpeed Insights, when we generate a report on the status of our website, we can find parameters related to images. These parameters talk about the size, format, encoding, and so on, namely how optimized our images are.

In this section, we will enumerate the problems that usually appear due to the use of images and what would be the theoretical optimization technique for each of them.

1. Use Compressed Files

Imagine working on a project like DEV.to, where hundreds of people can upload content to your platform without being reviewed. In such a case, it would be expected for your project to have large, high-resolution images, as not everyone is aware of the bandwidth consumption and the slowdown in loading times that this entails.

Solution

Clearly, we want to give freedom to our content creators, but we can’t leave the resolution or the delivery and download speed of the images displayed on our website to chance.

The solution is to optimize our images, compressing them and reducing their size with almost no loss of quality. There are two well-known compression techniques:

  1. Lossy compression
    This compression type uses algorithms that eliminate the less critical data to reduce the file size.
    When considering the use of this lossy technique, we must keep two things in mind: by discarding part of the image information, the image quality will be negatively impacted, and if someone were to compress a picture with this technique and we wanted to compress it again, it would lose even more quality.
  2. Lossless compression
    On the other hand, lossless compression compresses the data without interfering with the image quality.
    This technique allows the images not to lose quality in subsequent compressions. Still, it leads to a larger file size, which we try to avoid in cases where quality is not a game changer for the project’s value proposition.

When deciding on one of these techniques, the most important thing is to know our users and what they are looking for from our website. If we think about social networks, we can see two clear trends, those focusing on text and those focusing on multimedia content.

It is clear that for text-focused social networks, losing a little bit of image quality is not a big problem for them and can reduce a fifth of the image file size, which would mean a big increase in performance. So it is clear that lossy compression would be the ideal technique for that case. However, for social networks focused on image content, the most important thing is delivering images with exceptional quality, so lossless compression would play a better role here.
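
To make that concrete, here is a minimal Node.js sketch, assuming the sharp library as the compressor (the file names and quality values are illustrative), that produces a lossy and a lossless version of the same master image:

const sharp = require('sharp');

async function compress(input) {
  // Lossy WebP: discards some image data for a much smaller file.
  await sharp(input).webp({ quality: 75 }).toFile('photo-lossy.webp');

  // Lossless WebP: keeps every pixel intact, at the cost of a larger file.
  await sharp(input).webp({ lossless: true }).toFile('photo-lossless.webp');
}

compress('master-photo.png');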

Tip: While using an Image Service CDN, compression is usually included, but it is always good to know more tools that can help us compress our images. For that, I bring you open-source tools that you can use to add image compression to your development workflow:

  • Calibre Image Actions is a GitHub Action built by performance experts at Calibre that automatically compresses JPEGs, PNGs, and WebPs in Pull Requests;
  • Imgbot, which will crawl your image files in GitHub and submit pull requests after applying a lossless compression.

2. Serve In Next-generation (Next-gen) Formats, Encode Efficiently

Part of the problem above may be due to the use of older image formats such as JPG and PNG, which provide worse compression and larger file sizes. But compression is not the only essential factor in deciding whether to adopt a next-gen image format; the speed of its encoding/decoding and the quality improvement matter, too.

While it is true that in recent years we have heard a lot about next-gen formats such as WebP, AVIF, or JPEG XL, it is still surprising how many websites have not migrated to these formats and continue providing bad UX and bad performance results.

Solution

It is time for us to move to a better world, where the compression of our images and their quality have no direct relationship, where we can make them take up as little space as possible without changing their visual appearance, and where next-gen formats are used.

By using next-gen formats, we will be able to reduce the size of our images considerably, making them download faster and consume less bandwidth, improving the UX and performance of our website.

“Modern image formats (AVIF or WebP) can improve compression by up to 50% and deliver better quality per byte while still looking visually appealing.”

— Addy Osmani (Image optimization expert)

Let’s look at the two most promising formats and how they differ from each other.

  • WebP

It is an image format that supports lossy and lossless compression, reducing file size by 25-34% compared to JPEG, as well as animation and alpha transparency, offering 26% less file size than PNG. It was a clear substitute for these formats until AVIF and JPEG XL came out.

Its advantages are its uniform support across most modern browsers, its lossless 8-bit transparency channel and lossy RGB transparency, and support for metadata of various types and animations. On the other hand, it does not support HDR or wide-gamut images, nor does it support progressive decoding.

  • AVIF

It is an open-source AV1 image file format for storing still and animated images with better lossy and lossless compression than most popular formats on the web today, offering a 50% saving in file size compared to JPEG. It is in direct competition with JPEG XL, which has similar compression quality but more features.

The advantages of the AVIF format are that it supports animations and graphic elements where JPEG has limitations, improves JPEG and WebP compression, supports 12-bit color depth enabling HDR and wide color gamut, monochrome and multichannel images, and transparencies with alpha channel. However, the major drawback of AVIF is that it is not compatible with all browsers and its encoding/decoding is more expensive in terms of time and CPU, causing some Image CDNs to still not apply AVIF as an automatic format.

Note: If you want to know the differences between each format in detail, I recommend you read the article “Using Modern Image Formats: AVIF And WebP” by Addy Osmani, and trying out the AVIF and WebP quality settings picker tool.

And remember, regardless of which format you choose, if you want an effective result, you must generate the compressed files from a master image of the best possible quality.
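
As a rough sketch of that idea, assuming sharp again as the encoder (the quality values are illustrative), every delivery format can be derived from the same high-quality master:

const sharp = require('sharp');

async function buildFormats(master) {
  // Each output is encoded from the same master file,
  // never from an already-compressed image.
  await sharp(master).avif({ quality: 50 }).toFile('img/image.avif');
  await sharp(master).webp({ quality: 75 }).toFile('img/image.webp');
  await sharp(master).jpeg({ quality: 80, mozjpeg: true }).toFile('img/image.jpg');
}

buildFormats('master.png');

The output file names here intentionally match the ones used in the <picture> example below.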

Extra tip: Suppose you want to take advantage of the features of an image format with limited browser support. In that case, you can always use the <picture> HTML tag, as shown in the code below, so that the browser picks the first format it supports, in the order provided.

<picture>
    <!-- If AVIF is not supported, WebP will be rendered. -->
    <source srcset="img/image.avif" type="image/avif">
    <!-- If WebP is not supported, JPG will be rendered -->
    <source srcset="img/image.webp" type="image/webp">
    <img src="img/image.jpg" width="360" height="240" alt="The last format we want">
</picture>

3. Specify The Dimensions

When the width and height attributes have not been added to the <img> tag, the browser cannot calculate the aspect ratio of the image and therefore does not reserve a correctly sized placeholder box. This leads to a layout shift when the image loads, causing performance and usability issues.

Solution

As developers, it is in our hands to improve the UX and make layout shifts less likely to happen. We’re already part of the way there once we add width and height to our images.

At first glance, it seems like a simple task, but in the background, browsers do a tedious job of calculating the size of these images in different scenarios:

  • For images that are resized in responsive design.

If we have a responsive design, we will want the image to stay within the margins of the container, using the CSS below for that:

img {
  max-width: 100%;
  height: auto;
}

For browsers to calculate the aspect ratio, and from it the correct size of our images before they load, our <img> tag must contain defined height and width attributes when we specify the width (or height) in the CSS and set the opposite property, height (or width), to auto.

If there is no height attribute in the <img>, the CSS above sets the height to 0 initially, and therefore there will be a content shift when the image loads.

<img src="image.webp" width="700" height="500" alt="The perfect scenario">

<style>
img {
  max-width: 100%;
  height: auto;
}
</style>
  • For responsive images that can change their aspect ratio.

In the latest versions of Chromium, you can set width and height attributes on <source> elements inside <picture>. This allows the parent container to have the right height before the image is loaded and to avoid layout shifts for different images.

<picture>
  <source media="(max-width: 420px)" srcset="small-image.webp" width="200" height="200">
  <img src="image.webp" width="700" height="500" alt="Responsive images with different aspect ratios.">
</picture>

Note: To know more about this topic, I recommend you to look at the article “Setting Height And Width On Images Is Important Again” by Barry Pollard.

4. Optimize Images For All Devices, And Resize Them Appropriately

Usually, with CSS, we have the superpower to make our images occupy whatever space we want; the problem is that every superpower comes with great responsibility. If we scale an image without having optimized it for that use case first, we make the browser load an image of an inadequate size, worsening the loading time.

When we talk about images that are not optimized for the device and/or viewport on which they are displayed, there are three different cases:

  • The change of resolution
    Large images intended for desktop are displayed on smaller screens, consuming up to 4 times more data than necessary, or, vice versa, images sized for mobile are enlarged on desktop, losing quality.
  • The change of pixel density
    Images sized in pixels are displayed on screens with a higher pixel density, so they do not provide the best image quality.
  • The change of design
    An image with important details loses its purpose at other screen sizes because no cropped version highlighting those details is served.

Solution

Fortunately, today we have responsive image technologies to solve the three problems listed above by offering browsers different versions of each image (in size, resolution, and/or design) so that they can determine which one to load based on the user’s screen size and/or device features.

Now let’s see how these solutions are implemented in HTML for each case:

1. Resolution change fix: Responsive images with different sizes

The solution is to properly resize the original image according to the viewport size.

To do this, using the <img> tag with only the src attribute won’t be enough since it just lets us specify a single image file for the browser. But by adding the srcset and sizes attributes, we can offer several versions of the same image along with media conditions, so the browser can choose which one to display.

Let’s see a simple example of a responsive image and understand each attribute:

<img
    src="image-desktop.webp"
    srcset="image-mobile.webp 360w, image-tablet.webp 760w, image-desktop.webp 1024w"
    sizes="(max-width: 1024px) calc(100vw - 4rem), 1024px"
    alt="Image providing 3 different sizes for 3 viewports">
  • src
    We must always add the src attribute to our images just in case the browser does not support srcset and sizes attributes. The src will serve as a fallback, so adding an image large enough to work on most devices is crucial.
  • srcset
    The srcset attribute is used to define a set of images with their corresponding width descriptors (image widths represented in the unit w), separated by commas, from which the browser can choose.
    In the above example, we can see that 360w is a width descriptor that tells the browser that image-mobile.webp is 360px wide.
  • sizes [Optional]
    The sizes attribute ensures that responsive images are loaded based on the width they occupy in the viewport and not the screen size.
    It consists of a comma-separated list of media queries that indicate how wide the image will be when displayed under specific conditions, ending with a fixed width value as a default value.

Note: Units such as vw, em, rem, calc(), and px can be used in this attribute. The only unit that cannot be used is the percentage (%).

Once we have our responsive image ready, it is up to the browser to choose the correct version using the parameters specified in the srcset and sizes attributes and what it knows about the user’s device.

The browser’s process consists of determining the image’s display width from the sizes attribute and then choosing from the srcset the image that has that width. If there is no image with exactly that width, the browser will choose the first one larger than the size obtained from sizes (as long as the screen is not high-density).

2. Device’s pixel density change fix: Responsive images with different resolutions

The solution is to allow the browser to choose an appropriate resolution image for each display density.

Device vs CSS pixels             360px-wide image by screen resolution
1 device pixel  = 1 CSS pixel    360px
2 device pixels = 1 CSS pixel    720px
3 device pixels = 1 CSS pixel    1440px

To achieve this, we will use srcset again, but this time, with density descriptors, used to serve different images based on the device pixel density, not the image size, and without the need to specify the sizes attribute:

<img
    src="image-1440.webp"
    srcset="image-360.webp 1x, image-720.webp 2x, image-1440.webp 3x"
    alt="Image providing 3 different resolutions for 3 device densities">
  • src
    Having image-1440.webp as a fallback version.
  • srcset
    In this case, the srcset attribute is used to specify an image for each density descriptor, 1x, 2x, and 3x, telling the browser which image is associated with each pixel density.
    For this case, if the device’s pixel density is 2.0, the browser will choose the image version image-720.webp.

3. Design change fix: Different images for different displays

The solution is to provide a specially designed image with different ratios or focus points for each screen size, a technique known as art direction.

Art direction is the practice of serving completely different-looking images to different viewport sizes to improve visual presentation, rather than serving different-sized versions of the same image.

The art direction technique makes this possible through the <picture> tag, which contains several <source> tags providing the different images from which the browser will choose, and adding <img> as a fallback:

<picture>
  <source media="(max-width: 420px)" srcset="image-mobile.webp" width="360" height="280">
  <source media="(max-width: 960px)" srcset="image-tablet.webp" width="760" height="600">
  <img src="image-desktop.webp" width="1024" height="820" alt="Image providing 3 different images for 3 displays">
</picture>
  • picture
    The wrapper of the different images provided by zero or more <source> tags and one <img> tag.
  • source
    Each <source> tag specifies a media resource, in this case an image, with its srcset attribute holding the file path to that resource.
    The order of these tags matters. The browser reads the conditions defined in the media attribute of each <source> from top to bottom and displays the first image whose condition is true; the subsequent ones are not read.
    An example would be the media="(max-width: 960px)" of the second <source>. If the viewport’s width is 960px or less but more than 420px, image-tablet.webp will be displayed, but if it is 420px or less, image-mobile.webp will be displayed.
  • img
    When a browser does not support the <picture> or <source> tags or none of the media queries are met, the <img> tag will act as a fallback or default value and will be loaded. Therefore, it is crucial to add an appropriate size that will work in most cases.

Extra tip: You can combine the art direction technique with different resolutions.

<picture>
  <source media="(max-width: 420px)" srcset="image-mobile.webp 1x, image-mobile-2x.webp 2x" width="360" height="280">
  <source media="(max-width: 960px)" srcset="image-tablet.webp 1x, image-tablet-2x.webp 2x" width="760" height="600">
  <img src="image-desktop.webp" srcset="image-desktop.webp 1x, image-desktop-2x.webp 2x" width="1024" height="820" alt="Image providing 6 different images for 3 displays and 6 pixels density">
</picture>

By making use of width and pixel density at the same time, you can amplify the criteria for which an image source is displayed.

Note: If you want to learn about tools that can help you crop and resize your images efficiently, you can take a look at Serve Responsive Images by web.dev.

5. Load your images after critical resources

By default, if we do not specify the priority of our images, the browser may load them alongside, or even ahead of, the critical resources of our site, causing poor performance and increasing the Time To Interactive (TTI).

Solution

Fortunately, native solutions such as lazy loading allow us to defer off-screen images, the ones the user does not see initially, and focus on the most important ones, the images above the fold.

To make use of this native solution, we must add the loading attribute to our images with the lazy value:

<!-- Native lazy loading -->
<img src="image.webp" loading="lazy" width="700" height="500" alt="Loaded by appearance">

The loading attribute can have two values:

  • lazy: Postpones the loading of the resource until it reaches the viewport.
  • eager: Loads the resource immediately, regardless of where it is.
    Although this is the browser’s default behavior, it can be helpful in cases where you set loading="lazy" automatically on all your images and want to manually specify which ones should load right away.

Since our goal is to defer images that do not appear above the fold, we mustn’t add the loading attribute to those displayed first. Alternatively, we can set loading="eager" on them and add fetchpriority="high" so they load even quicker.

Extra tip: Responsive images using the <picture> element can also be lazy-loaded, simply by adding the loading attribute to the fallback <img> element.

<picture>
  <source media="(max-width: 420px)" srcset="image-mobile.webp">
  <img src="image-desktop.webp" loading="lazy">
</picture>

6. Cache Your Images

A website’s performance can suffer if frequently accessed images are not cached, as many requests will be made to images that have already been loaded in the user’s system.

Users should be able to view the images directly from their system and not wait again for them to download.

Solution

The solution is to store the heavily accessed images in the user’s browser cache and use a CDN service to cache them on the server for you.
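For the browser-cache part, this usually means serving images with a long-lived caching header. A minimal sketch (the exact policy is an assumption and depends on how often your images change):

Cache-Control: public, max-age=31536000, immutable

With a header like this, returning visitors get the image straight from their local cache for up to a year without re-requesting it.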

Note: To understand how the cache works for a user and the different strategies we can follow, I recommend the talk and article “Love your cache” by Sam Thorogood.

Once we have an optimization technique for each of the problems that images bring us, it is worth remembering that there are more things to consider for the accessibility and SEO of our images, such as the alt attribute, the file name, and its metadata.

That said, it is time to see how an image service will save us hundreds of headaches. Let’s go there! 🚀

The Benefits Of Using An Image Service CDN

All the solutions to the problems we have seen in the previous section could be solved with external tools. But why complicate things if we can just use an Image Service CDN, saving us time, reducing infrastructure costs, and automating and scaling the image optimization?

An Image Service CDN is a combination of an Image Transformation API and a CDN network. It allows you to transform images on the fly by adding a few extra parameters in the URL and delivering them to users through a fast CDN with optimized caching.

The image transformations provided by this kind of service include modifying their format, focal point, and size by cropping or resizing them, as well as applying effects and other visual enhancements. In addition, it also allows you to optimize images so that they have the smallest possible size without losing quality, thus improving the UX and using the minimum bandwidth.

Note: You can always learn more about the transformations that some services offer by reading their documentation, as in Cloudinary or Imagekit.

Thanks to the combination of the image service with the CDN network, we can speed up the delivery of our images: after the first request, an image is cached and served from the CDN on future requests. Not only does it cache the original image, but it also stores all the transformations and combinations we make from it. And if that were not enough, it even creates new transformations from the cached version of the original image. Can it be more optimal?

In the Jamstack ecosystem, it couldn’t be easier to access these services. Most headless CMSs already have their Image Service CDN, so you don’t have to leave their premises to perform your image transformations, optimizations, or cache and deliver them quickly. This article will use Storyblok Image Service CDN as an example.

So now, let’s see how the Storyblok Image Service CDN can resolve the problems we listed before:

Compressing Images

The problem of using large image files can be resolved by adding /m/ at the end of the image URL.

But of course, if you want to change the compression rate of your images, you can use the quality filter with a value between 0 and 100 by adding /filters:quality(0-100) to the URL.
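For instance, assuming a hypothetical Storyblok asset URL, the optimized and the quality-adjusted variants would look like this:

https://a.storyblok.com/f/0000/2000x1500/abc123/demo-image.jpeg/m/
https://a.storyblok.com/f/0000/2000x1500/abc123/demo-image.jpeg/m/filters:quality(60)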

Serving The Right Format And Encoding Effectively

If we want to serve our images in a next-gen format, Storyblok’s Image Service CDN makes it easy by:

  • Automatic conversion to WebP if the browser supports it.
    Storyblok chooses the WebP format as the default format due to its capabilities. By adding /m/ to the image URL, it will be automatically served in WebP if the browser supports it.
  • The format filter
    If we want to set a specific format, we can do it by using the format filter, which supports webp, jpeg, and png.
    demo-image.jpeg/m/200x0/filters:format(jpeg)

Note: If anything, I miss integration with newer formats, such as AVIF, but I understand that they are waiting for it to consolidate and become supported by more browsers.

Defining Width And Height From Storyblok

Although the Image Service CDN cannot help us define the image sizes, the Headless CMS, on the other hand, can streamline this process.

By simply adding a field for each attribute in our image component (Block), we can automate our front-end image component to suit the requirements of each use case.

Tip: By creating presets for the most commonly used images, we can have these fields filled in by default and thus improve the content editor experience.

Cropping Or Resizing Images

If your website has or expects to have a large number of images, maintaining each version generated for each resolution, density, or focal point can be time-consuming.

An Image Service CDN saves you from manually creating cropped or resized versions from the master image through two methods:

Resizing

It is perfect for responsive images using width or density descriptors.

By adding width x height in the URL of the original image, right after /m/, you will get a new version of your image. By setting one of the two parameters to 0, you will get a proportionally resized image that keeps the same aspect ratio.

Cropping

It is perfect for art direction, different aspect ratios, and focal points.

By using the same technique as in resizing, but always providing both width and height, you will be able to crop the image.
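Using the URL pattern described above on a hypothetical asset, the two methods would look like this:

demo-image.jpeg/m/800x0     Resized: 800px wide, aspect ratio preserved
demo-image.jpeg/m/500x500   Cropped: a 500x500 version of the image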

Smart Cropping Of Images

To put the subject of the image in the center automatically, the Image Service CDN allows you to make use of its smart feature by simply adding /smart to the transformation path. For example (with a hypothetical asset name and dimensions):
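
demo-image.jpeg/m/600x400/smart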

Custom Focal Point Filter

In case the subject is not a person and the previous technique does not work for us, the Image Service allows us to specify in our images the point that we consider to be the center of a crop, also known as the focal point.

This can be implemented by adding the focal filter to our image path. For example (with hypothetical coordinates describing the top-left and bottom-right corners of the focal region):
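
demo-image.jpeg/m/600x130/filters:focal(450x500:550x600)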

Note: This can be further simplified if we are using Storyblok as a headless CMS, as it returns a focus variable on each of our images via the delivery API.

Specifying The Loading Option Of The Images

As with image width and height attributes, lazy loading is not something we do through the Image Service CDN; instead, we implement it in the front-end code.

To automate this process, create a single-option field in the Storyblok headless CMS offering the eager and lazy options, so content editors can choose the option that best suits each case.

Note: This field can be ignored if the website only has images above the fold.

In addition, another thing that can improve the loading of our images is to use the hint preconnect by adding the Image Service CDN domain, in this case, https://a.storyblok.com/.

The preconnect keyword is a hint to browsers that the user is likely to need resources from the target resource’s origin, and therefore the browser can likely improve the UX by preemptively initiating a connection to that origin.

MDN docs
<link rel="preconnect" href="https://a.storyblok.com/">

Caching Your Images

In this case, we don’t have to do anything on our side. By adding /m to our URL, we are already using the Image Service CDN, which by default caches our images the first time they are loaded and serves them from the cache on subsequent requests.

We now know the parameters we have to add to our image URLs to make use of the image service and optimize them. By combining it with an image component in the associated headless CMS, Storyblok (which is responsible for collecting the initial data, such as the width and height attributes or the responsive sizes), we can standardize the use of optimized images and create presets to automate their definition in our project.

Case Study: Image Component In A Jamstack Site

For this demo, we will use Nuxt 3 to build our static site, Vue 3 with script setup to define our image component and Storyblok as a headless CMS and Image Service CDN provider. But everything we will see can be extrapolated to any other technology.

Step 1: Create The Nuxt Project And The Storyblok Space

Let’s start by creating an account on Storyblok and a new space from scratch.

Now, following the steps in the article Add a headless CMS to Nuxt 3 in 5 min, we are going to create our Nuxt 3 application and connect it to our space. Go to the command line and run:

npx nuxi init <project-name>

Install the dependencies with yarn and launch your project with yarn dev to ensure everything goes well.

To enable the Storyblok Visual Editor, we must define a default HTTPS preview URL. First, set up SSL in Nuxt 3, and then go to your space’s Settings > Visual Editor and add https://localhost:3000/.

Now go to the Content section in the left menu, and open the Home story. In order to see your Nuxt project, open the Entry configuration, set the real path to /, and save. Voilà, you should be able to see the Nuxt landing page in the Visual Editor.

Step 2: Connect The Nuxt Project To The Space’s Content

Once the Visual Editor is set up, the next step is connecting Nuxt 3 with Storyblok. To do that, we need to install the Storyblok SDK:

yarn add @storyblok/nuxt axios # npm install @storyblok/nuxt axios

And then, include the SDK as a module inside nuxt.config.js, providing the Preview API token that we can grab at Settings > Access Tokens from our space:

export default defineNuxtConfig({
    modules: [
      [
        '@storyblok/nuxt',
        { accessToken: '<your-preview-token>' } // placeholder: use your space’s Preview token
      ]
    ]
})

The new space, by default, already contains some blocks (components), such as page, grid, and so on. Instead of using those, we are going to define our own components, so you can remove all nestable components from this space and leave only the content type Page.

Note: Check the Structures of Content tutorial by Storyblok to understand the difference between Nestable and Content Type blocks.

Step 3: Create The Blocks (Components) In The Storyblok Space

Now, let’s create the blocks needed for this demo project in the space Block Library, where (*) means required:

Design Image (design_image) is the component we will use to define different images on different devices when using the art direction technique.

A nestable component with the required fields:

  • image (*) (Asset > Images)
  • width (*) (Number)
  • height (*) (Number)
  • media_condition (*) (Single-Option > Source: Self) with the key-value pair options: mobile → (max-width: 640px) & tablet → (max-width: 1024px), with (max-width: 640px) as the default value.

Image, the component responsible for collecting all the information necessary to optimize the image.

A nestable component with the tabs:

  • General, the tab containing the fields:

  • original_image (*) (Asset > Images)
  • Image size (Group)
    • width (*) (Number): Maximum width the image will have on your website.
    • height (*) (Number): Maximum height the image will have on your website.
  • Responsive image (Group)
    • responsive_widths (Text > Regex validation: (^$|^\d+(,\d+)*$))
      Comma-separated list of widths that will be included on srcset.
      Example: 400,760,960,1024.
    • responsive_conditions (Text)
      Comma-separated list of media queries, with their image slots sizes that will be included on the attribute sizes.
  • Supported densities (Group)
    • density_2x (Boolean)
    • density_3x (Boolean)
  • Art Direction (Group)
    • art_direction (Blocks > Allow only design_image components to be inserted)

  • Style, the tab containing the fields:

  • loading (Single-Option > Source: Self) with the key-value pair options: lazy → lazy and eager → eager.
  • rounded (Boolean).

Card

A nestable component with the fields:

  • image (Blocks > Allowed maximum 1 > Allow only image components to be inserted)
  • title (Text)
  • subtitle (Text)
  • color (Plugin > Custom type: native-color-picker)

Note: To be able to see the custom type native-color-picker available in that list, you need to install the Colorpicker app in the space App Directory.

  • button_text (Text)

Album

A universal (mix between nestable & content type) component with the field:

  • cards (Blocks > Allow only card components to be inserted)

Step 4: Create The Main View, Layout, And Install Tailwind CSS In The Nuxt Project

Once we have defined the schema of our blocks in the Storyblok space, let’s go back to the code of our Nuxt 3 project and start creating the pages and components needed.

The first step will be to delete the app.vue view from the root of the project and create a pages folder with the [...slug].vue view in it to render the pages dynamically by slug and fetch the data from the Storyblok space.

  • […slug].vue (pages/[…slug].vue)
<script setup>
const { slug } = useRoute().params;
const url = slug || 'home';

const story = await useAsyncStoryblok(url, { version: 'draft' });
</script>

<template>
  <div class="container">
    <StoryblokComponent v-if="story" :blok="story.content" />
  </div>
</template>

In the template, we use the StoryblokComponent component that the SDK provides us to represent the specific blocks coming from the Content Delivery API, in this case, the page.

And since our goal is to generate a static page, we’re using the useAsyncStoryblok composable provided by the SDK too, which uses useAsyncData under the hood.

Next, let’s create a default layout, so our page has some basic styles and metadata.

  • default.vue (layouts/default.vue)
<template>
  <main class="min-h-screen bg-[#1A0F25] text-white">
    <slot />
  </main>
</template>

<script setup>
useHead({
  title: 'Pokemon cards album',
  meta: [
    { name: 'description', content: 'The Pokemon album you were looking for with optimized images.' }
  ],
  htmlAttrs: {
    lang: 'en'
  }
})
</script>

As Tailwind CSS is used for styling this demo example, let’s install and configure it in the Nuxt 3 project using the Nuxt Tailwind module. For that, run:

yarn add -D @nuxtjs/tailwindcss # npm install -D @nuxtjs/tailwindcss

Then add the code below to the modules in nuxt.config.ts:

export default defineNuxtConfig({
  modules: [
    // ...
    '@nuxtjs/tailwindcss'
  ]
})

Create tailwind.config.js by running npx tailwindcss init and copy/paste this code:

module.exports = {
  content: [
    'storyblok/**/*.{vue,js}',
    'components/**/*.{vue,js}',
    'pages/**/*.vue'
  ],
  theme: {
    container: {
      center: true,
      padding: '1rem',
    },
  },
  plugins: [],
}

Finally, create an assets folder in the root of the project, and inside, include a css folder with a file named tailwind.css that the Nuxt Tailwind module will use to get the Tailwind styles:

@tailwind base;
@tailwind components;
@tailwind utilities;

Now the project is ready to represent all the defined styles!

Step 5: Define The Components Related To The Blocks In The Nuxt Project

Let’s create a new folder called storyblok under the project’s root. The Storyblok SDK will use this folder to auto-import the components only if used on our pages.

Start by adding the components:

  • Page.vue (storyblok/Page.vue)
<template>
  <StoryblokComponent v-for="item in blok.body" :key="item._uid" :blok="item" />
</template>

<script setup>
defineProps({ blok: Object })
</script>

All components will expect the blok prop, which contains an object with the fields’ data of that specific block. In this case, the content type page will have only the body field, an array of objects/components.

Using the v-for, we iterate the body field and represent the items dynamically using StoryblokComponent.

  • Album.vue (storyblok/Album.vue)
<template>
  <div
    v-editable="blok"
    class="container grid grid-cols-[repeat(auto-fit,332px)] justify-center gap-10 py-12"
  >
    <StoryblokComponent v-for="card in blok.cards" :key="card._uid" :blok="card" />
  </div>
</template>

<script setup>
defineProps({ blok: Object })
</script>

The same will happen in this component, but instead of being the blok.body field, it will be the blok.cards field.

  • Card.vue (storyblok/Card.vue)
<template>
  <article v-editable="blok" class="bg-[#271B46] rounded-xl p-4 pb-6">
    <StoryblokComponent v-if="blok.image[0]" :blok="blok.image[0]" />
    <header class="pt-4 flex gap-4 items-center">
      <div class="rounded-full w-8 h-8" :style="`background-color: ${blok.color.color}`"></div>
      <h3 class="flex flex-col">
        {{ blok.title }}
        <span class="font-sans font-thin text-xs">{{ blok.subtitle }}</span>
      </h3>
      </h3>
      <button class="ml-auto bg-purple-900 rounded-full px-4 py-1">{{ blok.button_text }}</button>
    </header>
  </article>
</template>

<script setup>
defineProps({ blok: Object })
</script>

As card is one of the last levels of nested blocks, we won’t iterate in this component, but we will directly represent the fields in the HTML.

Step 6: Create The Image Component Property By Property

Let’s build a generic image component in Vue, using the parameters coming from the Storyblok image block and taking advantage of the Image Service CDN to render an optimized image.

The Foundation Of The Image Component

Let’s define the core functionality of the image component with the original_image, width, and height properties that come from the image block in our space and create a custom method called createImage that returns the URL of the optimized image using the Image Service CDN:

<template>
  <picture v-editable="blok">
    <img
      :src="createImage(filename, width, height)"
      :width="width"
      :height="height"
      :alt="alt"
      class="shadow-lg w-full"
    />
  </picture>
</template>

<script setup>
const props = defineProps({ blok: Object })

const { width, height } = props.blok
const { filename, alt, focus } = props.blok.original_image

const createImage = (original, width, height, focal = focus) => {
  return `${original}/m/${width}x${height}/filters:focal(${focal})`
};
</script>

Adding Lazy Or Eager Loading

Once we have the image’s base, we can start adding new properties, such as loading, and specifying it as an attribute in the img tag:

<template>
  <picture v-editable="blok">
    <img
      // all other attributes
      :loading="loading"
    />
  </picture>
</template>

<script setup>
const props = defineProps({ blok: Object })

const { /* all other properties */ loading } = props.blok
// ...
</script>

Adding Responsive Images Using Width Descriptors

If we need to represent different sizes of the same image on our website, using the responsive image technique, we can specify the widths and conditions using the responsive_widths and responsive_conditions properties.

<template>
  <picture v-editable="blok">
    <img
      // all other attributes
      :srcset="srcset"
      :sizes="blok.responsive_conditions"
    />
  </picture>
</template>

<script setup>
const props = defineProps({ blok: Object })

// all other properties
let srcset = ref('')

if (props.blok.responsive_widths) {
  const aspectRatio = width / height
  const responsiveImages = props.blok.responsive_widths.split(',')

  let widthsSrcset = ''
  responsiveImages.forEach(imageWidth => {
    widthsSrcset += `${createImage(filename, imageWidth, Math.round(imageWidth / aspectRatio))} ${imageWidth}w,`
  })

  srcset.value = widthsSrcset
}
</script>

Adding Responsive Images Using Density Descriptors

When our site is viewed on devices with different pixel densities, we must display our images at the appropriate resolution. By setting the density_2x and density_3x fields to true and creating an image for each density with the following code, we can solve this problem.

Note: The original image must be large enough to work with a size three times larger than the image used in the viewport.

<template>
  <picture v-editable="blok">
    <img
      // all other attributes
      :srcset="srcset"
    />
  </picture>
</template>

<script setup>
const props = defineProps({ blok: Object })

// all other properties
let srcset = ref('')

if (props.blok.density_2x || props.blok.density_3x) {
  let densitiesSrcset = `${createImage(filename, width, height)} 1x`
  densitiesSrcset += props.blok.density_2x ? `, ${createImage(filename, width * 2, height * 2)} 2x` : ''
  densitiesSrcset += props.blok.density_3x ? `, ${createImage(filename, width * 3, height * 3)} 3x` : ''

  srcset.value = densitiesSrcset
}
</script>

Adding Different Images For Different Devices

When using the art direction technique, we will define one source tag per element in the art_direction array field. We will use that data to render a different image according to the specified media_condition.

<template>
  <picture v-editable="blok">
    <template v-if="art_direction">
      <source
        v-for="{ image, media_condition, width, height } in art_direction"
        :media="media_condition"
        :srcset="createImage(image.filename, width, height, image.focus)"
        :width="width"
        :height="height"
      >
    </template>
    <!-- Base Image -->
  </picture>
</template>

<script setup>
const props = defineProps({ blok: Object })

// all other properties
const { art_direction } = props.blok
</script>

In the example repository for this demo, you can find Image.vue (storyblok/Image.vue), the resulting image component, combining all the cases above.

Note: These implementations are not the only possible ways to solve the problems we have seen during this article.

Measuring Performance To Test The Image Component

It’s time to measure the performance results with and without the custom image component to demonstrate how the above optimizations improve our site.

If we generate a Lighthouse report for our website while serving the images as they originally come from the headless CMS, without going through the Image Service CDN or applying any optimization technique other than defining the width and height attributes on the img tag, the result is far from ideal.

The performance is already negatively affected with only five unoptimized images in place. But the report doesn’t only give us bad news; it also provides a list of opportunities to improve the results and solve the problems.

Once we apply the improvements mentioned above, using the image component we have developed and providing the necessary values in the headless CMS, the result is impeccable.

The next step will be to educate our content editors, designers, and developers to synchronize between them what values are required for each case and prepare self-defined presets in our Storyblok space to make their work a lot easier.

Simplifying Image Optimization With Next-generation Frameworks

What if I told you that if you use a framework like Nuxt, Next, or Astro to build your applications, you don’t need to develop a custom image component? They have already created one for you: Nuxt Image, Next Image, and Astro Image, among others.

These components are extensions of the <img> tag that include a number of built-in performance optimizations to help us achieve a better web experience.

By simply installing or using the component provided, we achieve the same result. To test the Nuxt Image in our project, let’s install it by running yarn add -D @nuxt/image-edge and adding the module to nuxt.config.ts with Storyblok as Image CDN provider:

export default defineNuxtConfig({
  modules: [
    // ...
    '@nuxt/image-edge',
  ],
  image: {
    storyblok: {
      baseURL: 'https://a.storyblok.com'
    }
  }
})

By replacing the Image.vue component with the code below, we will get similar behavior to our custom component but using the Nuxt Image enhancements:

Note: To render different images per device, we will have to add the source as in the custom component. This is not something that Nuxt Image supports yet.

<template>
  <picture v-editable="blok">
    <NuxtImg
      provider="storyblok"
      :src="filename"
      :width="width"
      :height="height"
      :[srcset]="densitiesSrcset"
      :sizes="widthsPerSize"
      :modifiers="{ filters: { focal: focus } }"
      :loading="loading"
      :alt="alt"
    />
  </picture>
</template>

<script setup>
const props = defineProps({ blok: Object })

const { width, height, loading, responsive_widths, density_2x, density_3x } = props.blok
const { filename, alt, focus } = props.blok.original_image

let srcset = responsive_widths ? null : 'srcset' // null removes the dynamic srcset binding when sizes is used
let densitiesSrcset = ''
if (density_2x || density_3x) {
  densitiesSrcset = `${filename}/m/${width}x${height}/filters:focal(${focus}) 1x`
  densitiesSrcset += density_2x ? `, ${filename}/m/${width * 2}x${height * 2}/filters:focal(${focus}) 2x` : ''
  densitiesSrcset += density_3x ? `, ${filename}/m/${width * 3}x${height * 3}/filters:focal(${focus}) 3x` : ''
}

let widthsPerSize = ''
if (responsive_widths) {
  const sizes = ['sm', 'md', 'lg', 'xl']
  widthsPerSize = responsive_widths.split(',').map((w, i) => `${sizes[i]}:${w}px`).join(' ')
}
</script>

Looking at the code, you might think it’s not much different from the one we created before, but the truth is that if tomorrow you decide to change the Image Service, or you don’t define the width and height of the image, Nuxt Image will do the dirty work for you.

Conclusion

Image optimization, like web performance, is not a short-term task but constant work to progressively improve the website. That is why there are three things we must always keep in mind:

1. Stay Up To Date

The most important thing to keep your images in the best possible condition is to keep up with the latest trends in image optimization and web performance.

Following the work of expert authors in the field, such as Addy Osmani and Barry Pollard, can help you learn about new improvements in image optimization ahead of time. Likewise, renowned resources such as Smashing Magazine, Google’s web.dev, the Web Almanac, and the Mozilla docs, among others, will keep you informed about the state of the web and the latest developments.

2. Constantly Monitor The Status Of Your Images

Another crucial point to keep our website in good shape is to measure web performance continuously, in this case, emphasizing metrics related to image loading. You can start now by visiting Lighthouse and PageSpeed Insights.

Web performance involves measuring the speeds of an app and then monitoring its performance, ensuring that what you’ve optimized stays optimized. This involves a number of metrics and tools to measure those metrics.

MDN

Some tools like WebPerformance Report send you a weekly report by email on the performance status of your website. This can allow you to be aware of any changes in browsers or web performance techniques, as you have a report that corroborates the good status of your website over time.

Moreover, there are always tools out there to ensure that the optimization quality of our images is the best we can offer. For example, RGBA Structural Similarity, a tool maintained by @kornelski that calculates the (dis)similarity between two or more PNG and/or JPEG images using an algorithm that approximates human vision, can help us check that we aren’t losing too much quality when compressing, and thus choose our compression parameters better.
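As a sketch, a comparison with the tool’s command-line interface, commonly invoked as dssim (assuming it is installed; the file names are placeholders), looks like this:

dssim original.png compressed.png
# Prints a dissimilarity score next to each compared file, where 0 means
# visually identical, so you can tighten compression until the score climbs.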

3. Align With Your Team, Create Standards

Most of the implemented solutions in this article are just possible proposals to optimize the images of our websites. Still, it is expected that you come up with new unique solutions agreed upon with your team of content creators, designers, and developers.

We must all be on the same page when creating a quality project; communication will allow us to solve problems more quickly when they occur. By creating standards or presets when uploading images and defining their size and different resolutions, we will simplify the work of our colleagues and ensure that it is a joint effort.

I hope the techniques presented will help or inspire you when dealing with images in current or future projects. Here are the main links to the demo project:

Many thanks to Joan León (@nucliweb) and Vitaly Friedman (@vitalyf), for reviewing the article and giving me powerful feedback.

]]>
hello@smashingmagazine.com (Alba Silvente)
<![CDATA[Using Automated Test Results To Improve Accessibility]]> https://smashingmagazine.com/2022/11/automated-test-results-improve-accessibility/ https://smashingmagazine.com/2022/11/automated-test-results-improve-accessibility/ Wed, 16 Nov 2022 14:00:00 GMT A cursory google search will return a treasure trove of blog posts and articles espousing the value of adding accessibility checks to the testing automation pipeline. These articles are rife with tutorials and code snippets demonstrating just how simple it can be to grab one’s favorite open-source accessibility testing library, jam it into a cypress project, and presto changeo, shifting left, and accessibility has been achieved… right?

Unfortunately, no, because actioning results in a consistent, repeatable process is the actual goal of shift-left, not just injecting more testing. Unlike the aforementioned treasure trove of blog posts about how to add accessibility checks to testing automation, there is a noticeable dearth of content focused on how to leverage the results from those accessibility checks to drive change and improve accessibility.

With that in mind, the following article aims to fill that dearth by walking through a variety of ways to answer the question of “what’s next?” after the testing integration has been completed.

Status Quo

The confluence of maximum scalability and accessibility as requirements has brought most modern-day digital teams to the conclusion that the path to sustainable accessibility improvements requires a shift left with accessibility. Not surprisingly, the general agreement on the merits of shifting left has led to a tidal wave of content focused on how important it is to include accessibility checks in DevOps processes, like frontend testing automation, as a means to address accessibility earlier on in the product life cycle.

Unfortunately, there has yet to be a similar tidal wave of content addressing the important next steps of how to effectively use test results to fix problems and how to create processes and policies to reduce repeat issues and regression. This gap in enablement creates the problem that exists today:

The dramatic increase in the amount of accessibility testing performed in automation is not correlating to a proportional increase in the accessibility of the digital world.

Problem

The problem with the status quo is that without guidance on what to do with the results, increased testing does not correlate with increased accessibility (or a decrease in accessibility bugs).

Solutions

In order to properly tackle this problem, development teams need to be enabled and empowered to make the most of the output from automated accessibility testing. Only then can they effectively use the results to translate the increase in accessibility testing in their development lifecycle to a proportional decrease in accessibility issues that exist in the application.

How can we achieve this? With a combination of strategically positioned, mindfully structured quality gates within the CI/CD pipeline, plus freely available tools and technologies to efficiently remediate bugs when they are uncovered, your development team can be well on its way to effectively using automated accessibility results. Let’s dive into each of these ideas!

Quality Gates

Making a quality gate is an easy and effective way to automate an action on your project when committing your code. Most development teams now create gates to check if there are no linting errors, if all test cases have passed, or if the project has no errors. Automated accessibility results can fit right into this same model with ease!

Where Do The Gates Exist?

For the most part, the two primary locations for quality gates within the software development lifecycle (SDLC) are during pull requests (PRs) and build jobs (in CI).

With pull requests, one of the most commonly used tools is GitHub Actions, which allows development teams to automate a set of tasks that should be completed or checked when code is committed or deployed. In CI jobs, the tools’ built-in functionality (Azure, Jenkins) is used to create a script that checks whether test cases or scenarios have passed. So, where does it make sense to have one for your team?

It all depends on what level development teams want to put a gate in place for accessibility testing results. If the team is doing more linting and component-level testing, the accessibility gate makes the most sense at the pull request level. If the automated tests run at an integration level, meaning a fully built site ready for deployment, then the gate can be placed in a CI job.
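As a rough sketch, a pull-request-level gate with GitHub Actions could look like the workflow below, where test:a11y is an assumed npm script that runs whatever accessibility suite the project uses:

name: accessibility-gate
on: pull_request
jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      # If this script exits with a non-zero code, the check fails
      # and the pull request is blocked until results are addressed.
      - run: npm run test:a11y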

Types Of Gates

There are two different ways that quality gates can operate: a soft check and a hard assertion.

A soft check is relatively simple by definition: it looks at whether or not the accessibility tests were executed. That is it! If the accessibility checks were run, the gate passes. In contrast, assertions are more specific and stringent about what is allowed to pass. For example, if my accessibility test case runs and finds even ONE issue, the assertion fails, and the gate will not pass.

So which one is most effective for your team? If you are looking to get more teams to buy into accessibility testing as a whole, a best practice is to not throw a hard assertion at them right away. Teams initially struggle with added tasks or requirements, and accessibility is no different. Starting with a soft gate allows teams to see what the requirement is going to be and what they will be required to do.

Once a certain amount of time has passed, then that soft gate can switch to a hard assertion that will not allow a single automated issue out the door. However, if your team is mature enough and has been using accessibility automation for a while, a hard assertion may be used initially, as they already have experience with it.
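To make the distinction concrete, here is a minimal sketch of a hard assertion written with the cypress-axe plugin (assuming the plugin is installed and the app is served locally):

describe('Home page accessibility gate', () => {
  it('has no detectable accessibility violations', () => {
    cy.visit('/')
    // Inject the axe-core runtime into the page under test
    cy.injectAxe()
    // Hard assertion: a single violation fails the test, and the gate
    cy.checkA11y()
  })
})

A soft check, by contrast, would only verify that this spec ran at all, without failing the build on the violations it reports.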

Creating Effective Gates

Whether you are using a soft or hard gate, you have to create requirements that govern what the quality gate does with the accessibility automation results. Simply stating, “The accessibility test case failed,” is not the most effective way to make use of them. Creating data-driven gates, based on specific pieces of data from the results, makes for more effective gates that match your development team’s or organization’s accessibility goals.

Here are three of the methods of applying assertions to govern accessibility quality:

  • Issue severity
    Pass or fail based on the existence or count of issues of specific severities (Critical, Serious, and so on); see the sketch after this list.
  • Most common issues
    Pass or fail based on the existence or count of specific issue types which are known to be most common (either global or organization specific).
  • Critical or Targeted UI /UX
    Do these bugs exist in high-traffic areas of the application, or do these bugs directly impede a user along a critical path through the UX?
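With cypress-axe, for example, the first of these methods can be expressed through the plugin’s includedImpacts option; a sketch:

// Only fail the gate on critical and serious issues
cy.checkA11y(null, { includedImpacts: ['critical', 'serious'] })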

Fixing Bugs

The creation and implementation of quality gates is an essential first step, but unfortunately, this is only half the battle. Ultimately, a development organization needs to be able to fix the bugs found at the various quality gate inspection points. Otherwise, the application’s quality will never improve, and nothing will clear the gates that were just put in place. What a terrifying thought that is.

In order to translate the adoption of the quality gates into improved accessibility, it is vital to be able to make effective use of the accessibility test results and leverage tools and technologies whenever possible to help drive remediation, which eliminates accessibility blockers and ultimately creates more inclusive experiences for users.

Accessibility Test Results

There is a common adage that “there is no such thing as bug-free software,” and given that accessibility conformance issues are bugs, this axiom applies to accessibility as well. As such, it is absolutely necessary to be able to clearly prioritize and triage accessibility test results in order to apply limited resources to seemingly unlimited bugs to fix them in as efficient and effective a way as possible.

It is helpful to have a few prioritization metrics to assist in the filtration and triage work when working with test results. Typically, context is an effective top-level filter, which is to say, attacking bugs and blockers that exist in high-traffic pages or screens or critical user flows is a useful way to drive maximal impact on the user experience and the application at large.

Another common filter, and one that is often secondary to the “context” filter mentioned above, is to prioritize bugs by their severity, which is to say, the impact on the user caused by the bug’s existence. Most free or open-source automated accessibility tools and libraries apply some form of issue severity or criticality label to their test results to help with this kind of prioritization.

Lastly, as a tertiary filter, some development teams are able to organize these bugs or tasks by thinking about the level of effort to implement a fix. This last filter isn’t something that will commonly be found in the test results themselves. Still, developers or product managers may be able to infer a level of effort estimation based on their own internal understanding of the application infrastructure and underlying source code.

Thankfully, accessibility test results, for the most part, share a level of consistency, regardless of which library is being used to generate them. They generally provide details about what specific checks failed; where the failures occurred in terms of page URL, sometimes CSS or XPath, and even the specific component HTML; and, finally, actionable recommendations on how to fix the components that failed the checks. That way, a developer always has a result that clearly states what’s wrong, where it’s wrong, and how to fix it.

In the above ways, developers can clearly stack, rank, and prioritize tasks that result from automated accessibility test results. The test results themselves are typically designed to be clear and actionable so that each task can be remediated in a timely fashion. Again, the focus here is to be able to effectively deliver maximal impact with limited resources.

Helpful Tools

The above strategies are well and good in terms of having a clear direction for attacking known bugs within a project. Still, it can be daunting to figure out whether one’s remediation solution actually worked or further to figure out a path forward to prevent similar issues from recurring. This is where a number of free tools that exist in the community can come into play and support and empower development organizations to expedite remediation and enable validation of fixes, which ultimately improves downstream accessibility while maintaining development velocity.

One such family of free tools is the accessibility browser extension. These are free tools that can help teams locate, fix, and validate the remediation of accessibility bugs. It is likely that whatever accessibility library is being used in the CI/CD pipeline has an accompanying (and free) browser extension that can be used in local development environments. A couple of examples of browser extensions include axe DevTools, WAVE, Accessibility Insights for Web, and IBM Equal Access Accessibility Checker.

The browser extensions allow a developer to quickly and easily scan a page in the browser and identify issues on the page, or, as in the case described above, validate that an issue detected during the testing automation process, which they have since remediated, no longer exists (validation!). Browser extensions are also a fantastic tool that can be leveraged during active development to find and fix bugs before code gets committed. Often, they are used as a quality check during a pull request approval process, which can help prevent bugs from making their way downstream.

Another group of free tools that can help developers fix accessibility bugs is linters, which integrate with the developer’s integrated development environment (IDE) and automatically identify (and sometimes automatically remediate) accessibility bugs detected in the actual source code before it compiles and renders into HTML in a browser.

Linters are fantastic because they function similarly to a spell checker in a document editor tool like Microsoft Word. They are largely automated and require little to no effort from the developer. The downside is that linters typically have a limited number of reliable accessibility checks that can be executed at the point of source code editing. Some of the better-known accessibility linters include eslint-plugin-jsx-a11y (for React and JSX), eslint-plugin-vuejs-accessibility (for Vue), and Deque’s axe Linter.

Equipping a development team with browser extensions and linters is a free and fast way to empower them to find and fix accessibility bugs immediately. The tools are simple to use, and no special accessibility training is required to execute the tests or consume and action the results. If the goal is to get farther faster with regard to actioning automated accessibility test results and improving accessibility, the adoption of these tools is a great first step.

The Next Level

Now that we have strategies for how to use results to improve accessibility at an operational level, what’s next? How can we ensure that all of our organization knows that accessibility is a practical piece of our development lifecycle? How can we build out our regression testing to include accessibility so that issues may not be reintroduced?

Codify it!

One way we can truly ensure that what we have created above will be done on a daily basis is to bring accessibility into your organization’s policy (also known as code policy or policy of code) — establishing such means that accessibility will be included throughout the SDLC as a foundational requirement and not an optional feature.

Although putting accessibility into the policy can take a while to achieve, the benefits of it are immeasurable. It creates a set of accessible coding practices that are clearly defined and established for how accessibility becomes part of the acceptance criteria or definition of “done” at the company level. We can use the automated accessibility results to drive this policy of code and ensure that the teams are doing full testing, using gates, and fixing the issues set by the policy!

Automate it!

Most automated accessibility testing libraries are standard out-of-the-box libraries that test generically for accessibility issues on the page. The typical amount of issues caught is around 40%, which is a good amount. However, there is a way we can write automated accessibility tests that go above and beyond!

Accessibility regression scripts let you check accessibility functionality and markup to ensure that the contents of your page behave the way they should. Will this guarantee it works with a screen reader? Nope. But it will ensure that its accessible functionality is working properly.

For example, let’s say you have an expand/collapse section that shows extra details when you click the button. Automated accessibility libraries would be able to check that the button has accessible text and maybe that it has a focus indicator. Writing a regression script (see the sketch after this list), you could check to ensure the following:

  • It works with a keyboard (Enter and Space);
  • aria-expanded="true/false" is properly set on the button;
  • The content in the expanded section is properly hidden from screen readers.
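A minimal sketch of such a regression script in Cypress (the data-test selectors are hypothetical; fully simulating key presses may additionally require a plugin such as cypress-real-events):

describe('Expand/collapse section', () => {
  it('exposes the correct accessibility state when toggled', () => {
    cy.visit('/')

    // A real <button> is inherently operable with Enter and Space
    cy.get('[data-test="details-toggle"]')
      .should('match', 'button')
      .and('have.attr', 'aria-expanded', 'false')

    cy.get('[data-test="details-toggle"]').click()

    // After activation, the state and the content visibility must update
    cy.get('[data-test="details-toggle"]').should('have.attr', 'aria-expanded', 'true')
    cy.get('[data-test="details-panel"]')
      .should('be.visible')
      .and('not.have.attr', 'aria-hidden', 'true')
  })
})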

Doing this on key components can help ensure that the markup is properly set for assistive technology, and if there is an issue, it makes it easier to determine whether the problem is in the code or potentially a bug in the assistive technology itself.

Conclusion

The “shift left” movement within the accessibility industry over the last few years has done a lot of good in terms of generating awareness and momentum. It has helped engage and activate companies and teams to actually take action to impact accessibility and inclusion within their digital properties, which in and of itself is a victory.

Even so, the actual impact on the overall accessibility of the digital world will continue to be somewhat limited until teams are not only empowered to execute tests in efficient ways but also that they are enabled to effectively use the test results to govern the overall quality, drive rapid remediation, and ultimately put process and structure in place to prevent regression.

In the end, the goal is really more than simply shifting left with accessibility, which often ends up taking what was a bottleneck of testing in the QA stage of the SDLC and simply dragging it left, upstream, into the CI/CD pipeline. What is really desired, if sustainable digital accessibility transformation is the goal, is to decentralize the accessibility work and democratize it across the entire development team so that everyone participates in the process (and hopefully in the design as well!).

The huge increase in automated accessibility testing adoption is a wonderful first step, but ultimately its impact is limited if we don’t know what to do with the results. If teams better understand how they can use these test results, then the increase in testing will, by default, increase accessibility in the end product. Simple gatekeeping, effective tool use, and a mindful approach can have a major impact and lead to a more accessible digital world for all.

Related Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Noah Mashni and Mark Steadman)
<![CDATA[A Guide To Keyboard Accessibility: HTML And CSS (Part 1)]]> https://smashingmagazine.com/2022/11/guide-keyboard-accessibility-html-css-part1/ https://smashingmagazine.com/2022/11/guide-keyboard-accessibility-html-css-part1/ Mon, 14 Nov 2022 14:00:00 GMT Keyboard accessibility is an important part of the user experience. There are multiple criteria in Web Content Accessibility Guidelines (WCAG) about this topic. Still, it’s somehow overlooked, affecting the experience of many users, mainly people with motor disabilities — any condition that limits movement or coordination.

Certain conditions like having a broken arm, the loss or damage of a limb, muscular dystrophy, arthritis, and some others can make it impossible for a person to use a mouse to navigate a site. So, making a site navigable via keyboard is a very important part of ensuring the accessibility and usability of our websites.

The importance of making a site accessible for users with motor disabilities becomes even more evident when you learn that they have access to more assistive technology options. Keyboards are not even the main focus of motor disability assistance! There are tools like switches that you can use with your hand (or even with your head) to operate any device, which help a lot of people with more severe motor disabilities. You can see how those technologies work in this demonstration made by Rob Dodson or in this video of Christopher Hills.

In this article, I’ll cover how to use HTML and CSS to create an accessible experience for keyboard users while mentioning what WCAG criteria we should keep into consideration.

HTML

One of the basics of creating a website accessible for keyboard users is knowing which elements should be navigable via keyboard. For this, good HTML semantics are crucial because they indicate which kinds of elements we want keyboard navigation to focus on.

The Basics

When a user presses the Tab key, it’ll move them to the next focusable element in the HTML, and when they press Shift + Tab, it’ll take them to the previous focusable element. With that said, what elements need to be focusable? Anything that requires user interaction. Among them, you’ll find the elements button, a, input, summary, textarea, select, and the controls of the audio and video elements (when you add the controls attribute to them). Additionally, certain attributes can make an element keyboard navigable, such as contenteditable or tabindex. In the case of Firefox, any area with a scroll will also be keyboard focusable.
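As a quick illustration, here’s a minimal sketch of elements that are reachable with the Tab key, either natively or thanks to those attributes:

<!-- Focusable natively -->
<a href="/docs">Link</a>
<button>Button</button>
<input type="text" aria-label="Search">

<!-- Focusable thanks to an attribute -->
<div tabindex="0">Reachable because of tabindex="0"</div>
<p contenteditable="true">Editable text is focusable too</p>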

In addition to that, you can:

  • Activate the button, select, summary, and a elements using the Enter key. Keep in mind that except for the a element, you can activate them with the Space key as well.
  • Use the arrow keys to navigate between different inputs of type radio if they share the same name attribute.
  • Check those inputs using the Space key (keep in mind that when you navigate radio inputs with the arrow keys, they’ll be checked as soon as they receive keyboard focus, but that doesn’t happen with checkbox inputs).
  • Use the up and down arrow keys to navigate between the different options of a select element.
  • Close a select element’s displayed list, as well as various input popups (usually with the Esc key).
  • Use the arrow keys to scroll a document vertically or horizontally.

There are probably more interactions, some of which depend on differences between operating systems and browsers, but that mostly covers what you can do with the keyboard.

Does that mean those elements are automatically keyboard-accessible by default? A good HTML structure is very helpful, and it makes content mostly accessible by default, but you still need to cover some issues.

For example, certain input types like date, datetime-local, week, time, and month have popups that work differently between browsers. Chrome, for example, allows you to open the date picker popup by pressing the Enter or Space key on a designated button inside the input. With Firefox, however, you need to press Enter (or Space) on either the day, month, or year field to open the popup and modify each field from there.

This lack of consistency can be a bit off-putting, and maybe it’s just a matter of personal preference. Still, I feel that the Firefox experience is not very intuitive, which suggests that, arguably, one of those experiences is more keyboard-accessible than the other. So if you want to create a good, accessible, and consistent keyboard experience across browsers, you’ll need more than HTML for that. If you want to try it yourself, check this compilation of input types by MDN and navigate them on your own.

In addition to the previous point, certain components require elements that aren’t natively focusable to be keyboard focusable. In other cases, we need to manage keyboard focus manually, and our markup needs to help us with that. For both cases, we’ll need an HTML attribute that will help us with this task.

tabindex Attribute

This attribute will greatly help us bring keyboard accessibility to more complex component patterns. It receives an integer, and using it properly will help us make a DOM element keyboard focusable. With tabindex, we can find three different cases:

tabindex="0"

It causes the element to be keyboard focusable. You usually don’t want to make an element keyboard focusable if it is not interactive, but some scenarios will require it.

One of them is when you have a scrollable component other than the body element. Otherwise, keyboard users won’t be able to see the full extent of the content. Some components that could have this trouble are scroll-based carousels, tables, and code snippets. Just to give an example, any code snippet created with the help of prism.js has the attribute tabindex="0". Open prism.js’ site and navigate it using the Tab key. You’ll be able to focus the snippets and control the vertical and horizontal scroll using the arrow keys.
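A minimal sketch of that pattern could look like this (the role and aria-label are optional additions that give the focusable area an accessible name):

<!-- A scrollable snippet made reachable with the Tab key -->
<pre tabindex="0" role="region" aria-label="Code example">
  <code>const aVeryLongLineThatForcesHorizontalScrolling = true;</code>
</pre>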

Some people who start with web accessibility think it is a good idea to add the attribute tabindex="0" to every element because they think it’ll help screen reader users navigate easily through a site. This is a terrible practice because of two reasons:

  1. Screen reader users have multiple ways to navigate a site. They can jump between headings, landmarks, or form elements, and they don’t need that extra help to navigate an accessible site as long as the markup is appropriate.
  2. It can make keyboard navigation difficult because a user will have to press the Tab key many times to arrive at the desired content, and for certain motor disabilities, having too many focusable elements can create a physically painful experience.

So, to summarize: it’s a useful technique for some components, but most of the time, you’ll be alright if you don’t use it, and certainly, you must not use it in every single element of your site.

Negative tabindex

Before we start this section, we need to distinguish two concepts: a DOM element can be focusable (that means you can programmatically focus it with JavaScript) and tabbable (that means it can be reached with the Tab key).

With that in mind, here is where negative tabindex comes into play because it’ll make an element unable to be tabbed (but you can still focus on it with JavaScript). This is important for specific components because, in some cases, we’ll need to make a normally tabbable element unable to be tabbed, or we’ll need an element to be focusable but not tabbable.

One example of that is tabs. A recommended pattern for this component ensures that pressing the Tab key while you’re on the active tab moves focus to the active tabpanel instead of the next tab. We can achieve that by adding a negative tabindex to all non-active tabs like this:

<ul role="tablist">
  <li role="presentation">
    <button role="tab" aria-controls="panel1" id="tab1" aria-selected="true">Tab one</button>
  </li>
  <li role="presentation">
    <button role="tab" aria-controls="panel2" id="tab2" aria-selected="false" tabindex="-1">Tab two</button>
  </li>
  <li role="presentation">
    <button role="tab" aria-controls="panel3" id="tab3" aria-selected="false" tabindex="-1">Tab three</button>
  </li>
  <li role="presentation">
    <button role="tab" aria-controls="panel4" id="tab4" aria-selected="false" tabindex="-1">Tab four</button>
  </li>
</ul>

We’ll see more examples later of how a negative tabindex helps us gain more control over focus management in different components, so keep in mind that it will be important in those cases.

Finally, you can put any negative integer in the tabindex attribute, and it’ll work the same: tabindex="-1" and tabindex="-1000" make no difference. But by mere convention, we tend to use -1 when we use this attribute.

Positive tabindex

A positive tabindex will make the element keyboard focusable, but the order will be defined according to the used integer. That means that first, the keyboard will navigate all elements with the attribute tabindex="1", then the ones with tabindex="2", and after all elements with a positive tabindex have been navigated, it’ll take into account all interactive elements by default and those with the attribute tabindex="0". This is known as the tabindex-ordered focus navigation scope.
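To make that order concrete, here’s a minimal sketch (the labels are just illustrative):

<!-- The Tab order here is: First, Second, Third, and only then Fourth -->
<button tabindex="2">Second</button>
<button tabindex="1">First</button>
<button tabindex="3">Third</button>
<button>Fourth (default order comes after all positive values)</button>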

Now, this is a pattern that you shouldn’t use. You’ll be better off if you put the focusable elements on your site in the order you need. Otherwise, you could create a very confusing experience for keyboard users, which would constitute a failure of WCAG criterion 2.4.3: Focus Order.

“If a Web page can be navigated sequentially and the navigation sequences affect meaning or operation, focusable components receive focus in an order that preserves meaning and operability.”

Success Criterion 2.4.3: Focus order

It might be useful if you want keyboard users to focus on widgets before they reach the page content, but that’d be confusing for assistive technology users (like screen reader users). So again, you’d be better off creating a logical order in the DOM.

inert Attribute

I have to quickly mention an upcoming attribute that will help us a lot with keyboard accessibility: inert. This attribute makes the browser ignore the content entirely, so it can’t be focused or reached by assistive technologies.

Now you might be asking yourself how removing keyboard accessibility can be useful, but in some cases, that’s exactly what we want! One component that will benefit from it is modals. Adding this attribute to all elements on the site except the modal will make it easy to create a focus trap, ensuring the user can’t accidentally navigate to other parts of the site with the Tab key unless they close that modal. Right now, creating a focus trap requires quite some thinking with JavaScript (I’ll explain how in the second part of this guide). So, having a way to make it easier with this attribute will be handy.
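Here’s a minimal sketch of the idea (the dialog markup is heavily simplified):

<!-- Everything except the dialog is inert while it's open,
     so the Tab key can't move focus out of the modal -->
<header inert>...</header>
<main inert>...</main>
<footer inert>...</footer>

<div role="dialog" aria-modal="true" aria-labelledby="dialog-title">
  <h2 id="dialog-title">Confirm your purchase</h2>
  <button>Close</button>
</div>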

Sounds pretty cool, right? Well, unfortunately, this attribute is not recommended for use yet. If you check the caniuse.com entry for it, you’ll notice support is very recent; Opera doesn’t support it at all, and the most recent implementation, in Firefox 105, was still in beta at the time of writing. So, it’s still too early to rely on it. There is a polyfill for the inert attribute, but right now, it’s a bit costly in terms of performance, so I seriously suggest not using it in production for now. But once we have adequate support for this attribute, some component patterns will be easier to create.

CSS

CSS is an essential tool for keyboard accessibility because it allows us to customize the experience, which is important for compliance with WCAG 2.2 criteria. Additionally, CSS has multiple selectors with different uses that help create a good keyboard experience, but be careful, because misusing certain properties can be counterproductive. Let’s dive into how this language helps create an accessible experience for keyboard users.

Focus Indicator

When you use a mouse, you can see which element you can interact with thanks to the cursor, and you wouldn’t remove the cursor from your user, right? That’d make it impossible for them to know which element they’re about to use!

We have a similar concept for keyboard navigation: the focus indicator, which by default is an outline that surrounds a keyboard-focusable element when it’s selected. Making sure all your keyboard-focusable elements have a focus indicator is essential to making a website keyboard accessible, according to WCAG criteria:

“Any keyboard operable user interface has a mode of operation where the keyboard focus indicator is visible.”

Success Criterion 2.4.7: Focus Visible

This style is different depending on the browser you’re using. You can compare how the default focus indicator looks across browsers, both with the default styles and with the CSS property color-scheme set to dark, to check how it behaves in dark mode.

As you can notice, Chromium-based browsers like Chrome or Edge have a black and white outline, which works in both light and dark mode. Firefox opted for a blue outline that is noticeable in both modes. And Safari (along with other WebKit-based browsers; right now, all iOS browsers use WebKit as their engine) looks almost the same as Firefox but with an even subtler outline in a dark color scheme.

WCAG Criterion 2.4.11

Now, default focus indicators are visible, but are they enough? The answer is “no”. While they can work in some cases, people with visual impairments will have problems noticing them, so WCAG created Success Criterion 2.4.11: Focus Appearance to define an accessible focus indicator. Right now, this criterion is part of WCAG 2.2, which is a Candidate Recommendation, so it’s quite unlikely to change before the final release, but keep in mind that it’s still subject to change.

When the keyboard focus indicator is visible, one or both of the following is true:
  1. The focus indicator:
    • encloses the visual presentation of the user interface component, and
    • has a contrast ratio of at least 3:1 between the same pixels in the focused and unfocused states, and
    • has a contrast ratio of at least 3:1 against adjacent colors.
  2. An area of the focus indicator meets all the following:
    • is at least as large as the area of a 1 CSS pixel thick perimeter of the unfocused component, or is at least as large as a 4 CSS pixel thick line along the shortest side of the minimum bounding box of the unfocused component, and
    • has a contrast ratio of at least 3:1 between the same pixels in the focused and unfocused states, and
    • has a contrast ratio of at least 3:1 against adjacent non-focus-indicator colors, or is no thinner than 2 CSS pixels.

Where a user interface component has active sub-components, if a sub-component receives a focus indicator, these requirements may be applied to the sub-component instead.

Success Criterion 2.4.11 Focus Appearance

There is something important to consider here, and that’s the area of the focus indicator. This area needs to meet the contrast requirements of this criterion. To illustrate that, I’ll use an example Sara Soueidan made for her article “A guide to designing accessible, WCAG-compliant focus indicators.”

This example uses an outline, but remember that you can use other properties to indicate the focus state, like background-color, or some creative uses of box-shadow, as long as they comply with the requirements. However, don’t use outline: none to eliminate the element’s outline.

That’s important for Windows High Contrast Mode because, when it’s active, your website’s colors are replaced with ones chosen by the user, so relying on properties like background-color will have no effect there. Instead, use the CSS declaration outline-color: transparent with an appropriate thickness to comply with WCAG criteria. You can see examples of how it works in my article about Windows High Contrast Mode.
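A minimal sketch of that technique (the selector and colors are just for illustration):

button:focus {
  /* Ignored in Windows High Contrast Mode */
  background-color: #ffdd00;
  /* Invisible normally, but repainted with the user's system
     colors when Windows High Contrast Mode is active */
  outline: 2px solid transparent;
  outline-offset: 2px;
}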

The Optimal Outline Size

An easy way to create a compliant focus indicator is using this method Stephanie Eckles suggested in her talk Modern CSS Upgrades To Improve Accessibility. First, we set custom properties in the interactive elements. Remember you can add more elements to the rule depending on the complexity of your project:

/* Add more selectors inside the :is rule if needed */

:is(a, button, input, textarea, summary) {
    --outline-size: max(2px, 0.08em);
    --outline-style: solid;
    --outline-color: currentColor;
}

And then, we use those custom properties to add a global focus rule:

:is(a, button, input, textarea, summary):focus {
    outline:
      var(--outline-size)
      var(--outline-style)
      var(--outline-color);
    outline-offset: var(--outline-offset, var(--outline-size));
}

The use of 0.08em here gives the outline a bigger size when the font is bigger, helping the element’s contrasting area scale with its font size.

Keep in mind that even though WCAG mentions that the focus area “is at least as large as the area of a 1 CSS pixel thick perimeter of the unfocused component,” it also mentions that it needs to have “a contrast ratio of at least 3:1 against adjacent non-focus-indicator colors, or is no thinner than 2 CSS pixels.” So, a minimum thickness of 2px is necessary to comply with WCAG.

Remember that you might need a thicker line if you use a negative outline-offset because it’ll reduce the perimeter of the outline. Also, using a dashed or dotted outline will reduce the focused area roughly by half, so you’ll need a thicker line to compensate for it.

The outline’s ideal area is related to the perimeter of the element. Sara Soueidan once again did a great job explaining how this formula works in her article about focus indicators, so check it out if you want to better understand the math behind this matter and how to apply it.

CSS Focus-related Selectors

With CSS, you normally use the :focus pseudo-class to style an element when it receives keyboard focus, and it does its job well. But modern CSS has given us two new pseudo-classes: one that helps us with a certain use case, and another that solves an issue with the :focus pseudo-class itself. Those pseudo-classes are :focus-within and :focus-visible. Let’s dive into what they do and how they can help us with keyboard accessibility:

:focus-within

This pseudo-class applies a style whenever the element itself or any of its children receives focus. Let’s make a quick example to show how it looks:

<form>
  <label for="name">
    Name: 
    <input id="name" type="text">
  </label>
  <label for="email">
    Email:
    <input id="email" type="email">
  </label>
  <button>Submit</button>
</form>

Very quick tangent note: Consider not using label to wrap the input element. While it works in all browsers, it doesn’t work well with Dragon speech recognition software because it won’t be recognized appropriately.

form {
  display: grid;
  gap: 1em;
}

label {
  display: grid;
  gap: 1em;
  padding: 1em;
}

label:focus-within {
  background-color: rebeccapurple;
  color: white;
}

This pseudo-class is interesting for enriching the styles of certain components, as previously shown, but in others, it helps a lot to make content accessible for keyboard users. For example, I created a card for my article about the media queries hover, pointer, any-hover, and any-pointer. This card shows its content when the user puts the cursor on it, but it also shows the content when you focus the button inside of it, thanks to the :focus-within pseudo-class triggering the same rules as hover. You can check out the code in the mentioned article as well as in this CodePen.
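:focus-visible

The other pseudo-class mentioned above, :focus-visible, matches only when the browser determines that the focus indicator should be shown, which in practice usually means keyboard navigation. It solves a common complaint about :focus: mouse users see the outline when they click an element, which tempts teams to remove outlines altogether. A minimal sketch of how it can be used:

button:focus-visible {
  outline: 2px solid currentColor;
  outline-offset: 2px;
}

This way, keyboard users keep a clearly visible focus indicator, while most browsers won’t show the outline on a simple mouse click.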

Now, let’s look at how CSS layout can affect keyboard navigation. Imagine a grid of numbered buttons marked up as a simple unordered list. If you use keyboard navigation, you’ll notice the order is pretty straightforward: the content reads from left to right and from top to bottom, and the navigation follows the same order. Now let’s use grid properties to make some changes:

ul li:where(:nth-child(1), :nth-child(5), :nth-child(7), :nth-child(9)) {
  grid-row: span 2;
  grid-column: span 2;
}

ul li:where(:nth-child(1), :nth-child(5)) {
  order: 2;
}

ul li:where(:nth-child(7), :nth-child(8)) {
  order: -1;
}

ul li:nth-child(4) {
  grid-row: 3;
  grid-column: 2 / span 2;
}

ul li:nth-child(3) {
  grid-row: 5 / span 3;
  grid-column: 3;
}

Now it looks completely disarrayed. Sure, the layout looks funny, but when you start navigating it with the Tab key, the focus moves in a very random order. There is some degree of predictability here because I used numbers as the buttons’ labels, but what happens if they have different content? It’d be impossible to predict which button will be focused next with the keyboard.

This is the kind of scenario that needs to be avoided. It doesn’t mean you can’t explicitly place an element within a grid or use the order property. It means you need to manage your layouts carefully and make sure the visual order and the DOM order match as much as possible.

By the way, if you want to try it yourself, you can see the demo of this code here and experience this chaotic keyboard navigation on your own.

Another component worth examining is the native disclosure widget made with the details and summary elements. Now let’s start styling it! By default, the summary element shows a triangle marker to indicate whether the details element is open or closed. We can remove that by adding this rule to the summary element:

summary {
  list-style: none;
}

But we’ll still need a visual indicator to show if it’s opened or closed. My solution is to add a second element as a child of summary. The important part is that this element will have the attribute aria-hidden="true":

<summary>
  <p>
    How much does shipping cost?
  </p>
  <span aria-hidden="true"></span>
</summary>

The reason why I hid this span element is that we’ll be modifying its content with CSS via the ::before pseudo-element, and the content we add would be read by a screen reader unless, of course, we hide it from them.

With that said, we can change that content because the browser manages the open state of the details element by adding the open attribute to the container. So we can add and change the content using those CSS rules:

summary span[aria-hidden="true"]::before {
  content: "+";
}

details[open] summary span[aria-hidden="true"]::before {
  content: "-";
}

Now, you can add the styling you need to adapt it (remember to use adequate focus states!). You can check this demo I made to see how it works. Test it with a keyboard, and you’ll notice you can interact with it without a problem.

Skip links are another important pattern for keyboard users: they allow someone to jump straight to a page’s main content instead of tabbing through everything that comes before it. There can be multiple skip links on a site, each leading to a different part of the page, as Smashing Magazine does. When you use the Tab key to navigate this website, you’ll notice there are three skip links, all of them taking you to important points of the page.

They’re usually located in the site’s header, but that’s not always the case. You can add them wherever needed, as Manuel Matuzović shows in this tweet. He added an inline skip link to a project because the interactive map has a lot of keyboard-focusable elements.

Working on a feature that allows users to skip areas with many tab stops (inline skip link). 🔥

Video alt: A page with a bunch of links followed by an embedded map. Pressing the Tab key reveals a link that, when activated, moves focus to the next tabbable element after the map. pic.twitter.com/utSPgzs2Kh

— Manuel Matuzović (@mmatuzo) April 6, 2022

Now, as the usefulness of skip links is clear, let’s create one. It’s very simple; we just need to create an a element that takes you to the desired element:

<header>
  <a class="skip-link" href="#main-content">Go to main content</a>
</header>
<main id="main-content"></main>

Next, we need to visually hide the a element. What I do here is use the transform CSS property to move it out of the visible area:

.skip-link {
    display: block;
    transform: translate(-9999px);
}

Then, we move it to the needed position when the element is being focused:

.skip-link:focus {
  transform: translate(0);
}

And that’s it! Creating a skip link is easy and offers a lot of help for keyboard accessibility.

Tooltips

Those little text bubbles that show extra information about an element can be created with pure CSS as well, but a little disclaimer here: it is suggested that a tooltip can be closed by pressing the Escape key, which is only possible with JavaScript. I’ll explain how to add this feature in the second part of this article, but everything else can be done in a very simple way using HTML and CSS only.

A common problem with tooltips is that keyboard users cannot see them, so we need to ensure the component that triggers one is a keyboard-focusable element. Our best bet here is the button element. The semantics are really simple, as Heydon Pickering shows in his book Inclusive Components.

<div class="tooltip-container">
  <button>
  </button>
  <div role="tooltip"></div>
</div>

The container with the class tooltip-container is there just to allow us to position the element with the attribute role="tooltip" later using CSS. Speaking of that element, you would think this role adds enough semantics to make it work, but as a matter of fact, it doesn’t, so we’ll have to rely on a couple of ARIA attributes to link it to our button.

Which attribute to use depends on the intention of the tooltip. If you are planning to use it to name an element, you need to use the attribute aria-labelledby:

<div class="tooltip-container">
  <button aria-labelledby="tooltip1">
    <svg aria-hidden="true">
      <!--  SVG Content  -->
    </svg>
  </button>
  <div id="tooltip1" role="tooltip">Shopping cart</div>
</div>

However, if you want to use the tooltip to describe what an element does, you’ll need to link it using the attribute aria-describedby:

<div class="tooltip-container">
  <button aria-label="Shopping cart" aria-describedby="tooltip2">
    <svg aria-hidden="true">
      <!--  SVG Content  -->
    </svg>
  </button>
  <div id="tooltip2" role="tooltip">Check, modify and finish your purchase</div>
</div>

Be careful with this approach: use it only to give auxiliary descriptions, not information that is absolutely necessary to understand what the element does. That’s because when a screen reader user generates a list of the form elements (including buttons) on the site, the description won’t be announced unless the user focuses on the element, as Adrian Roselli shows in his tests of the aria-description attribute.

Now, it’s time to talk about what concerns us in this article — keyboard accessibility! For this, we need to hide the tooltip and show it only when the user either hovers over the element with the pointer or focuses it with the keyboard. For this, we’ll use the :hover and :focus pseudo-classes in tandem with the adjacent sibling combinator.

Additionally, it’s important you can see the tooltip when you hover over it to comply with WCAG Criterion 1.4.13: Content on Hover or Focus. With those considerations in mind, this is how our code should look:

[role="tooltip"] {
  position: absolute;
  bottom: 0;
  left: 50%;
  display: none;
  transform: translate(-50%, 100%);
}

button:hover + [role="tooltip"],
button:focus + [role="tooltip"],
[role="tooltip"]:hover {
  display: block;
}

And this is how you create a keyboard-accessible tooltip using HTML and CSS. You can check how both tooltip examples behave in this demo. Remember, this is not fully ready for production: you need JavaScript to close the tooltip when you press the Esc key. We’ll cover that in the next part of this article, so keep it in mind.

See the Pen Tooltip demo - CSS only [forked] by Cristian Diaz.

As Heydon mentions in his book, tooltips are problematic on devices that don’t have a pointer, like cellphones or tablets, so a different approach is required for them. You can handle that with CSS using the media queries hover and pointer, as I explain in my article.

Wrapping Up

Keyboard accessibility is an essential part of accessibility. I hope this article has helped you understand how vital HTML and CSS are to making keyboard navigation a good, accessible user experience. That’s not the end of keyboard accessibility, though! In the next part, I’ll cover how we can use JavaScript to manage keyboard navigation and how to apply it in more complex component patterns.

]]>
hello@smashingmagazine.com (Cristian Díaz)
<![CDATA[How To Search For A Developer Job Abroad]]> https://smashingmagazine.com/2022/11/search-developer-job-abroad/ https://smashingmagazine.com/2022/11/search-developer-job-abroad/ Fri, 11 Nov 2022 09:00:00 GMT Many millions of people dream of flying the coop and spending time working abroad.

The opportunity to work abroad is a popular prospect, one undimmed by the years of restriction due to the pandemic and made only more accessible thanks to hybrid working and the rise of the digital native.

However, despite the still-growing desire to work abroad, many people — including professionals in the IT sphere — don’t know where to start. With that in mind, I wanted to write the ultimate guide for finding international employment opportunities.

The article primarily aims at seasoned developers, focusing on where to look for jobs, how to apply, and how to prepare a resume to get called for interviews. I will explore the dos and don’ts of international job interviews and hopefully provide advice that can help any IT professional, at any stage of their career, seek out career options abroad.

So, let’s dive in!

How To Prepare Your Resume For Getting A Developer Job Abroad

Let’s start with the basics — your resume.

The critical thing to remember about creating a resume for an international employer is the relevance and flexibility of skills to match your target company’s needs and their specific market.

While there are some hard and fast rules to resume writing that apply no matter where you’re sending an application, your resume needs to be tailored to your new market. This is where a little research goes a long way.

I’ll give you an example: In Malaysia, it’s considered good practice to include your personal details like marital status or date of birth on your resume. However, in other markets, these sorts of details (especially around age, sex, or marital status) are unnecessary or, in some cases, considered inappropriate.

So choose the information you share wisely! Your resume has to reflect your desire to relocate to your chosen market/region, it has to be hyper-personalized in approach, and it needs to sound like you’re passionate about your work.

Resume Length, Format, And Size

  • Depending on your skill set and experience, the details in a developer resume will vary, but I stand by my rule of not making a resume more than 2 pages.
  • Your resume should be formatted in a simple, easy-to-read font (Lato, Merriweather, or Helvetica, for example).
  • You should also include succinct summaries in sections like About Me or Key Achievements. Keep it short, keep it direct, and don’t repeat information.

Achievements

  • Instead of giving generic lists of tasks/duties/responsibilities, I advise you to clearly communicate your achievements and accomplishments, with statistics to back them up. This will help you stand out from other applicants.

For example, if you helped develop an app, make sure you include a variety of proven KPI deliverables, such as engagement KPI metrics, UX KPI metrics, and revenue metrics, rather than just a final product showcase:

Developed a social sharing feature using Android Studio, which increased downloads by 150% in the first three months.

Language

  • Use strong action verbs, such as built, led, deployed, reduced, developed, automated, managed, re-architected, implemented, designed, overhauled, and so on to describe your experience/accomplishments. They will bring a confident tone to your resume.
  • Use industry-specific adjectives like scalable, fault-tolerant, multi-threaded, and robust (to name a few) to highlight your expertise.

Tailoring

  • Tailoring doesn’t mean changing every line of your resume. It means adapting the direction and desire of your resume for a specific employer and their market.
  • Tailoring your application can take many forms: you can write a personalized cover letter, adapt your introductory paragraph to reflect your desire to work at a specific company, add specific terminology used in the job listing you’re applying for, or angle your achievements to the market and needs of a particular employer. It shows you’ve done your research and are willing and able to adapt your skill set to the needs of an employer abroad.

For some great advice on writing an effective developer resume, head to Stack Overflow and FreeCodeCamp for a further deep-dive.

Where Do You Find An International Developer Job?

My advice is to build a strategy based around four key international job-seeking means:

  • Job boards and aggregators;
  • Networking and network news;
  • International recruitment agencies;
  • Boolean search logic on Google.

Jumping onto Google and leaving your international career in the hands of algorithmic fate is not the way to approach getting a developer job abroad.

Job Boards And Aggregators

Job boards and job aggregators (the differences between the two are sometimes vague but transformative to the scale of your international job search) are a popular and effective first port of call for job hunters.

I suggest using job boards for specificity (specific markets and employers in certain countries) and aggregators as overview searches (a generalist overview of where employers are hiring the most and in what sectors).

It’s also important to utilize international job boards that have filters for “relocation” and “visa sponsorship.” In other words, fish in the right pond. Here are some sites I recommend:

  • AngelList Talent is now one of the go-to websites for finding a tech job with a startup.

You need to sign up and complete the mandatory profile information in order to filter for positions that offer visa sponsorship. Once you’re all set up on the site, you can enter your search parameters at https://angel.co/jobs.

If you open the Filters tab, you’ll find a section called “Immigration.” Choose “Only show companies that can sponsor a visa” to narrow your search appropriately.

If you don’t turn on this filter, you’ll find all jobs that meet your other criteria, regardless of whether they offer visa sponsorship.

  • Relocate.me is a job board for IT professionals (mainly software engineers) that is designed with relocation in mind.

You can see job opportunities in Europe, Asia, and North America from verified employers who offer relocation benefits. The listings include specific details about the relocation packages, making it easy to compare your options.

  • Japan Dev is a job board for finding a variety of tech jobs in Japan.

This site features hand-curated jobs from companies that have immediate openings. You can search for positions that offer relocation benefits by clicking the “Jobs with Relocation” button on the home page. You’ll be taken to the Jobs page, where you can further refine your search.

Most of the listings are for software developers and programmers, but other positions for those who work directly with developers are listed as well.

  • TokyoDev is another site that helps foreign developers find positions in Japan.

You’ll be able to filter your search with labels such as “No Japanese Required,” “Apply From Abroad,” “Residents Only,” and so on.

  • Landing.Jobs is specifically for tech jobs in Europe, with a focus on Portugal.

When you’re looking for jobs through this site, be sure to find the “Visa & work permit” filter section and select the options you need.

  • SwissDevJobs is, as the name indicates, specifically for IT jobs in Switzerland.

The site is well-designed, with a modern and easy-to-navigate UI. In the advanced filters, you can narrow your search down to only show jobs that provide visa sponsorship with a simple checkbox.

  • Arbeitnow is based in Berlin and features positions in Germany. It makes it simple to filter for jobs that provide visa sponsorship and many other options.

Indeed, LinkedIn Jobs, SimplyHired, and Monster are major job aggregators that can be very effective when searching for developer jobs abroad if you use the appropriate keywords.

When searching on LinkedIn, for example, you should add “relocation,” “visa support,” or “visa sponsorship” into the keywords tab, and select the city/country/region that’s your choice for relocation. Of course, some searches can come back with opportunities not quite suited to your situation, but using relevant keywords does a good job of filtering them out.

The same method works for Indeed, Monster, and other similar aggregators. Include “relocation,” “visa sponsorship,” or “visa support” along with your job title (or other keywords) in your search.

Networking And Network News

Networking takes time but is a highly effective source of referral recommendations. Utilizing social networks (LinkedIn, GitHub, Twitter, and even Instagram) is a highly personal and effective way of making connections with hiring teams and business leaders worldwide.

But I also urge the eagle-eyed developer to look at the market and network leaders like Hacker News’ Ask HN: Who is hiring? and TechCrunch to see where the movers and shakers are in the tech world and where upticks in hiring are occurring. You can then use these media-led referrals to directly approach companies and hiring managers online, via social media channels, and through their own websites.

International Recruitment Agencies

For a complete end-to-end application handling service, specialist developer support is available for those looking for a little more hands-on guidance via international recruitment agencies.

My suggested best first ports of call are international talent acquisition agencies like Global Skills Hub, Zero to One Search, Toughbyte, Orange Quarter, and TechBrainJobs, amongst others.

Tech companies often outsource hiring international talent to recruitment agencies, so going through a recruitment agency can be very beneficial. In addition to helping you find the right position, a good recruiter can fill you in on all of the relevant information on a company, including company relocation policy, benefits, and more.

Boolean Search

The trick to finding unadvertised yet very alive jobs is by using a rarely-utilized search tool strategy called Boolean logic.

Boolean logic refers to a system of algebra in which a query resolves to a clear “true” or “false” value through the use of “operator” terms. For job seekers, the query is a search for job vacancies, and the “operator” terms are special words combined with the keywords used to search for those jobs!

So, applying Boolean logic to a job search very quickly gives you a highly relevant shortlist of live jobs from your chosen country, region, or industry and even from targeted companies’ applicant tracking systems like Lever, Workable, and Greenhouse!

It sounds complex (and the theory behind it is), but the search terms are super effective and simple to deploy. As Reed highlights in their piece on Boolean job searches, “You can use keyword searching almost everywhere, ranging from big search engines through to search functions within smaller sites.”

So, how does it work?

You add the relevant “operator” terms into your search platform or site that refer to specific jobs, skills, or criteria you’re looking for. It’s not as complex as it sounds!

These operator terms are the following:

  • AND: for job searches containing multiple keywords, for example, developer AND javascript AND python will guarantee search results with only those primary keywords as priority indexed terms.
  • OR: for job searches where any of several keywords should match, but not all of them need to. For example, web developer OR software developer will bring back jobs with either web developer or software developer in the title or text, but no other jobs.
  • "" marks: used in searches for a particular phrase or term. For example, putting "mobile developer" into a job search will only bring back mobile rather than other developer roles.
  • *: for searches where you want to match every term starting with a certain stem. For example, Engineer* will return all jobs starting with the term Engineer, such as Engineering Manager.
  • ( ): when you want to group certain job criteria together. For example, software developer (startup and python) will only bring back specific jobs that fit the startup and tech stack mold you’re looking for.

Example: (site:https://jobs.lever.co OR site:https://apply.workable.com) (engineer OR developer) AND android AND ("relocation assistance" OR "relocation support" OR "relocation package" OR "visa sponsorship" OR "visa support")

Ex-Amazonian Kip Brookbank has a great article on LinkedIn about using Boolean searches to source a job. Make sure to check it out!

Put It All Together

The end result of using all four strategies above is a highly targeted, specific, niche, and personal job search that utilizes the best of digital job searching tools and the international recruitment consultant market.

But above all else, the above four points should drive home the feeling that you can get the perfect international tech job with a bit of patience and consideration using these very effective, free tools at your disposal.

By using each specific search platform’s own location tools, you can narrow down the right sort of opportunities for you. Recruiters offer advice, targeted recruitment support, and hands-on help finding a role. Through networking, you can get job referrals. Finally, using Boolean search removes a lot of the stress of sifting through hundreds of jobs.

Your Primary Considerations When Looking For A Developer Job Abroad

Now you’ve got your resume sorted, and you’re utilizing a raft of different strategies to source your new developer role abroad. So what are your primary considerations beyond clicking the “apply” button?

My advice is to start by building an application strategy (or multiple strategies) that will handle the complexity of relocation and which will keep you focused on building a foundation of credible, flexible professionalism in the eyes of your new employer.

These strategies include some of the points raised above and further details on referral systems, social media approaches, and watching out for red flags!

Application Strategies

My top 5 applications strategies are:

  1. Apply through specific job boards or aggregators.
    As mentioned above, utilize all digital options at your disposal.
  2. Apply with information from company websites.
    Go directly to your chosen company or use Boolean search terms to shortlist your chosen jobs better.
  3. Seek referrals.
    One of the most versatile and personal job search channels, referrals are found via your network and network news.
  4. Contact your target companies’ HR departments through LinkedIn or email.
    Utilizing social media is no bad thing, and LinkedIn is your primary weapon.
  5. Watch out for red flags.
    Get yourself savvy about what constitutes a poor job advert (Workable provides some eye-opening advice on bad job ads), and use Glassdoor and Blind to sift through companies with bad reviews. You might dodge a career bullet by doing so.

Interview Preparation

In my experience working with incredibly exacting tech talent, sometimes the basics of interview prep can get lost in the melting pot of assessment testing and high-value candidate handling.

There are some absolutely crucial dos and don’ts for getting a developer job abroad you must adhere to:

Do:

  • Learn about the company.
    Do your research, understand your employer’s journey, and get under the skin of their company’s purpose and mission.
  • Look up interview questions that the company may ask and consider your answers.
    Prepare for any and every question, from coding to critical thinking, from teamwork to timekeeping.
  • Prepare your own questions to ask to show your interest.
    Your research should guide the types of questions you want to ask your prospective new employer. This is your one chance to mine them for information!
  • Be on time. Not being late is a basic interview must.
  • Reiterate your desire to relocate and your plans to do so.
    Although I do go into more detail on this below, international employers will want to know how you plan to factor in a relocation into your application. Better to be prepared than caught out.

Don’t:

  • Criticize or otherwise speak poorly about former or current employers.
    It’s not a good look, shows a lack of professionalism, and will reflect poorly on your exit strategies.
  • Imply that your main interest in the job is relocation.
    Although relocation is important, it shouldn’t be the main reason for moving as it makes the job sound like a means to an end.
  • Say “I don’t know” during the interview.
    Contrary to popular belief, you are allowed to make mistakes in an interview. But rather than saying you don’t know, say you’d be happy to provide an answer in a follow-up or after the interview, as you don’t have the right information at hand, or something to that effect. In short, indicate that you may not know now, but you can easily find out.
  • Ask about salary, bonuses, and benefits during the interview.
    Interviews are designed to determine whether you have the character, skills, drive, and determination for a role. The salary and bonus conversation will come later. It’s not a conversation for now unless an interviewer asks you directly. My advice is to be prepared and know your worth!
  • Forget details on your resume.
    You will be asked about certain points on your resume, and your interview will ask you to elaborate on key points. You must know your resume back to front. If you don’t, you run the risk of looking half-prepared and out of your depth.

How To Negotiate A Job Offer And Navigate Discussions About Your Salary

Contract, salary, and compensation negotiation is a vital moment in your international developer job search. This discussion is not just about money alone; this is your opportunity to talk about everything from relocation packages to IP rights, expat taxes, and more.

  • First things first, do your research.
    Understand the expectations for tech talent in the market (and for the size of the company) you wish to relocate to, and formulate an idea of what you’d want from your pay packet commensurate with the situation of your ideal employer.
  • Be careful about offering salary expectations.
    The best way to approach a discussion around salary is to ask for an offer first rather than put in your expectations based on your research. See if the offer meets your idea of fair pay and relocation package (if offered).
  • Relocation packages
    Does your new employer help with relocation, and if so, by how much and at which stage?
  • Negotiate perks and benefits
    From subsidized travel to commute costs, your employer needs to put a whole package in writing.

Other negotiation considerations should be:

  • Factor in expat taxes: do you pay more tax while working abroad?
  • Discuss intellectual property rights: do you retain any IP in the course of product creation, or are there other options available?
  • Ensure the agreement is enforceable: international employment contracts can be a foggy minefield to navigate, so do your research regarding everything from employer’s rights to e-signing capability.

Relocation, Preparation, And Moving To Your New Developer Job

Relocation is a complex, emotional, and risky endeavor, and never one a developer should take lightly. I advise relocating developer talent to run through a pre-travel hitlist to guarantee smooth sailing:

  • Learn everything you can about your destination.
    The first thing you should do is deep-dive into the place you’re moving to, from intercity travel to the nuances of the local language. It’ll help reduce the culture shock of the first few weeks and months in a new place.
  • Visit your new location before your move.
    I appreciate international travel isn’t cheap, but if it’s possible to visit pre-move, you’ll benefit from a bit of a headstart with getting around and making some local connections. It’s also beneficial when it comes to meeting potential landlords, work colleagues, and so on.
  • Determine the cost of living in your new location.
    Cost of living fact-finding will be done around the negotiation stage. Still, it’s worthwhile understanding how far your salary will stretch and any nuances around pay and tax bandings, cost of living, rent, food, travel, and the like. That’s where websites like Numbeo come in handy.
  • Understand what’s included in your relocation package.
    This is vital. How much of your travel, accommodation, or relocation will be subsidized, if at all, by a new employer? This is a cornerstone of your financial planning arrangements.
  • Consider your family’s requirements in your planning.
    Finally, although it should never be far from your mind and it no doubt won’t be, your family is an important factor in your relocation. I urge you to include them as much as possible in the process and remember the emotional toll of moving away from home to a new country, as much as the physical and financial.

Finding a developer job abroad is a labor of love — one that has to take stock of everything from financial planning to tweaking and perfecting your resume for an international audience.

It takes a lot of preparation, but the results of a well-planned international job search are incredibly rewarding.

Moving abroad for work is one of the most rewarding and life-changing things you can do, especially if you’re a talented worker with an in-demand skill set like software development.

The world is your oyster!

]]>
hello@smashingmagazine.com (Andrew Stetsenko)
<![CDATA[Design Systems: Useful Examples and Resources]]> https://smashingmagazine.com/2022/11/design-systems-inspiration-resources-case-studies/ https://smashingmagazine.com/2022/11/design-systems-inspiration-resources-case-studies/ Wed, 09 Nov 2022 07:00:00 GMT Design systems ensure alignment, reusability, and consistency across a project or brand. And while we have gotten very good at breaking down UIs into reusable components, a lot of design systems aren’t as useful and practical as they could be, or they aren’t even used at all. So how can you make sure that the work you and your team put into a design system really pays off? How can you create a design system that everyone loves to use?

In this post, we’ll take a closer look at interesting design systems that have mastered the challenge and at resources that help you do the same. We’ll explore how to deal with naming conventions, how motion and accessibility fit into a design system, and dive deep into case studies, Figma kits, and more. We hope that some of these pointers will help you create a design system that works well for you and your team.


Inspiring Real-World Design Systems

Nord: Accessibility And Naming Conventions

Bringing together everything that’s required to manage a healthcare business digitally, Nordhealth creates software that aims to redefine healthcare. As such, their design system Nord is heavily focused on accessibility.

Nord offers plenty of customization options, themes, and a fully-fledged CSS framework, plus dedicated guides to naming conventions and localization, for example. Unfortunately, the Nord Figma Toolkit isn’t open-sourced yet.

Workbench: Comprehensive Live Examples

Gusto serves more than 200,000 businesses worldwide, automating payroll, employee benefits, and HR. To enable their team to develop cohesive and accessible experiences for Gusto’s platform, the Workbench design system encompasses Gusto’s design philosophy, design tokens, creative assets, React components, and utilities — and documentation to tie it all together.

What really stands out in the Workbench system are the comprehensive live examples that explain exactly how components should be used in different contexts. Do’s and don’ts, visual explanations, and implementation details ensure that both designers and developers working with Workbench can use the design system effectively. For even more convenience, there’s also a Gusto Workbench VS Code Extension with common snippets for UI components.

Olympic Brand: Branding And Multi-Lingual Design

The Olympic Games are probably one of the most widely recognized brands in the world. Since the birth of the modern Games more than 125 years ago, hundreds of people have grown and enhanced the Olympic brand. To increase consistency, efficiency and impact across all that they do, the IOC hired a Canadian agency to create a comprehensive design system that conveys the timeless values of the Olympic Games and propels the brand into the future.

The Olympic design system is focused on branding and identity design, but also provides examples of illustrations and graphic elements. It shows how to manage multi-lingual challenges and how to use typography, with plenty of good and not-so-good examples and guidance notes along the way.

Brand Estonia: Custom Design Attributes

Pure and contrasting nature, digital society, and smart, independent-minded people are the core values behind the brand Estonia. The Brand Estonia design system maps the country’s strengths and shows how to express them through writing, designs, presentations, and videos.

Stories, core messages, facts, and plenty of examples and templates provide a solid foundation for creating texts and designs across the brand — be it on the web, in social media, or print. A special highlight of Estonia’s design system lies on authentic photos and custom design attributes such as wordmarks and boulders to underline the message.

Audi: Visual Examples Of Do’s And Don’ts

Audi UIs range from websites to applications for a particular service. The Audi design system provides a joint set of components, modules, and animations to create a well-balanced, system-wide user experience — from the app to the vehicle.

Along with brand appearance guidelines and UI components, a handy feature of the design system is its comprehensive set of visual examples of how a component should (and shouldn’t) be used in Audi’s interfaces. There are also an Audi UI Kit for Figma and a Sketch UI library that ensure designers use the most up-to-date components and icons in their products.

Deutsche Bahn: Content Guidelines And UX Writing

Deutsche Bahn, the national railway company of Germany, is one of the most recognized brands in Germany. With the help of their DB Digital Product Platform, the company enables developers, designers, and copywriters to build flexible digital experiences with an emphasis on mobility.

The design system features content guidelines, accessibility considerations, code examples, components, and contextual examples of how to use them. It also provides guidelines around UX writing and helpful visual guides to accessibility and logo. Everything is open source on GitHub and NPM.

Shopify, If, And More: Data Visualization

Data is pretty much useless if we can’t make sense of it. Luckily, data visualization helps us tell the full story. But how to include data visualization in a design system? Here are some examples.

Shopify’s design system Polaris maps out guidelines for how to approach data visualization and defines five core traits for successful data visualizations. Do’s and don’ts for different data visualizations deliver practical examples. Culture Amp features helpful further reading resources for each type of data visualization they define in their design system. The If Design System shines a light on color in data visualizations, and the Carbon Design System comes with demos and ready-to-use code snippets for React, Angular, Vue, and Vanilla.

Design Systems For Figma

Atlassian, Uber, Shopify, Slack — these are just a few of the design systems you’ll find on Design Systems For Figma. Curated by Josh Cusick, the site is a growing repository of freely available Figma kits of design systems — grouped, organized, and searchable.

Not featured in the collection, but worth looking into as well, is the GOV.UK design system Figma kit. It focuses specifically on complex user journeys and web forms. Lots of valuable insights and inspiration are guaranteed.

Design System Resources

Design System Naming Conventions

Let’s face it, naming things can be hard. Particularly in a design system, where you need to find names for your components, styles, and states that can be easily understood by everyone who works with it. But how to best tackle the task? Ardena Gonzalez Flahavin explores not only why we should care about naming in our design systems but also what we should keep in mind when doing so.

Shayna Hodkin also summarized best practices for solid naming conventions for the different categories in a design system — from colors and text styles to layer styles and components.

Another great read on the topic comes from Jules Mahe. To help you find the right balance between clarity, searchability, and consistency, Jules summarized tips for naming your design files, understanding what you need to name in a design system, and structuring it. Three useful resources for futureproofing your design system.

Accessibility In Design Systems

When building a design system, it’s a good idea to include guidelines and documentation for accessibility right from the beginning. By doing so, you reduce the need for repeat accessibility work and give your team more time to focus on new things instead of spending it on recreating and testing accessible color palettes or visible focus states again and again. In her article on accessible design systems, Henny Swan explores what an accessible design system needs to include and how to maintain it.

To shift the understanding of accessibility from one of basic compliance to a truly inclusive, human-centered experience, the team at AdHoc released their Accessibility Beyond Compliance Playbook. It explores several ways to improve accessibility — from the immediate task of building accessible products to creating teams of people that underscore an Accessibility Beyond Compliance mindset.

Another handy resource to help you incorporate accessibility efforts comes from IBM. Their open-source Carbon Design System is based on WCAG AA, Section 508, and European standards to ensure a better user experience for everyone. It gives valuable insights into how users with vision, hearing, cognitive, and physical impairments experience an interface and what designers should think about to ensure their design patterns are operable and understandable.

For more practical tips, be sure to check out the IBM Accessibility Requirements checklist on which Carbon is based. It features detailed tips to make different components and design patterns comply with accessibility standards. A way forward to empowering your diverse user base.

Brand Expression In Design Systems

When it comes to visual elements like icons and illustrations, many companies have difficulties finding the right balance between being on-brand, useful, and scalable. The team behind Design Systems For Figma also faced this challenge and came up with a recipe for creating and scaling a system of visuals. Elena Searcy summarized how this system works.

In her blog post, Elena starts with the smallest visual element, an icon, explaining what her team aims for when choosing and creating icons to make them align with the brand and provide real value for the user. Elena also shares insights into how they handle illustrations, including a scalable way of creating them and considerations regarding anatomy, style, and color. A great example of how a set of established rules can help make visuals more meaningful.

Motion In Design Systems

Motion in design is powerful. It can help to reduce cognitive load, guide users through pages and actions, provide user feedback, improve the discoverability of features, and improve perceived response time. To make full use of motion, the design team at Salesforce created an end-to-end motion language for their products: the Salesforce Kinetics System.

As Pavithra Ramamurthy, Senior Product Designer at Salesforce, explains, the intention behind the Salesforce Kinetics System is to enable the evolution and scaling of kinetic patterns across products, with design system components that are pre-baked with motion right out-of-the-box.

But how do you scale these motion patterns from design system to product? How would teams actually use the artifacts in their daily workflows? Pavithra wrote a case study that takes a closer look to demonstrate the possibilities. Interesting insights guaranteed.

Enterprise Design System 101

Introducing an enterprise design system is a lot of work. But it is work that will pay off. Especially with large teams, multiple platforms, and numerous user interfaces to manage, having a single source of truth helps maintain a consistent user experience. So what do you need to consider when building your own? Adam Fard takes a closer look.

As Adam explains, an enterprise design system is a system of best practices, reusable design elements, processes, usage guidelines, and patterns that help reinforce the brand, improve the UX design process, and optimize the user experience. He compares it to a box of Lego: the building blocks are the collection of code and design components; the building instructions that you’ll usually find inside the box correspond to a collection of guidelines, processes, and best practices that ensure that co-designing and cross-collaboration are seamless. If your enterprise traverses numerous sites or apps, Adam’s writeup is a great opportunity to get familiar with the concept of enterprise design systems.

Measuring A Design System

When you’ve built a design system or are just about to start working on one, metrics might not be the thing you’re concerned about at first sight. However, measuring your design system is more important than you might think. In his article “How to measure your design system?”, Jules Mahe dives deeper into why it’s worth the effort.

Jules explains how to define the KPIs for your design system and how to get quantitative data measurements to learn more about a design system’s efficiency. Qualitative data gathered through surveys and interviews makes the narrative more compelling. Of course, Jules also takes a closer look at how to use the data. As he concludes, measuring a design system is challenging and requires time, but it will be a goldmine and one of the essential levers for your design system’s growth and sustainability.

Design System ROI Calculator

Is your boss hesitant to believe that the work you’ll put into a design system will eventually pay off? The Design System ROI Calculator might be just what you need to convince them that a design system is a worthwhile investment of time and money.

The ROI calculator helps you understand and project cost savings when implementing a design system. It calculates total employee savings from implementing a design system, as well as time savings and efficiency gains per component or UI element. To estimate total savings, you can select between different scenarios based on team size and product calculation.

Design System Case Studies

Having robust components and patterns that can be reused in different situations is the essential idea behind every design system and often seems like the magical wand everyone has waited for to solve challenges and improve collaboration. Henry Escoto, UX & Design at FOX Corporation, offers a perspective on design systems that is a bit different. He claims that it’s actually the practice which can truly make a difference.

In his case study “Our Design System Journeys”, Henry shares in-depth insights into FOX Tech Design’s design systems Delta and Arches to find answers to the following questions: How will a design system truly help your product design? What does it take to build and execute a design system within an organization? How to inject the practice into existing workflows? And last but not least, what is the payoff of such an investment?

Another interesting case study comes from Jan Klever. Jan is a designer at Quero Educação and also fills the role of the organization’s Design System Ops. He shares from his team’s experience how having a dedicated Design System Ops role can help when it comes to maintenance and following up on the product.

Design System In 90 Days

When you’re starting to work on a design system, you do it with the intent to build something that lasts, a system that teams love to use and that saves them precious time in their daily work. However, many attempts to build a design system end up as great libraries that don’t get used as much as you had hoped. But how do you create a design system that becomes an established part of your organization’s workflow? SuperFriendly published a practical workbook in which they take you and your team from zero to a design system that lasts — in 90 days.

Written for cross-disciplinary teams of design, engineering, and product, the workbook consists of a 130-page PDF, FigJam prompts, and Figma templates you’ll use to complete activities. No theory, only clear instructions on what to do and how to do it over a 90-day timeframe. At $349, the workbook isn’t cheap, but considering that it can save you about 6–9 months of figuring out what work to do, the investment is definitely worth considering.

Wrapping Up

Have you come across a design system you found helpful? Or a resource or case study that eased your work and that you’d like to share with others? We’d love to hear about it in the comments below.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[What’s New In Next.js 13?]]> https://smashingmagazine.com/2022/11/whats-new-nextjs-13/ https://smashingmagazine.com/2022/11/whats-new-nextjs-13/ Mon, 07 Nov 2022 14:00:00 GMT October has come and gone, and with it, Next.js has released a new major version packed (pun intended) with tons of new features — some of which can be seamlessly adopted from your Next.js 12 app, while others not so much.

If you’re just jumping on the bandwagon, it may be confusing to distinguish the hype, the misinformation, and what’s stable for your production apps, but fear not! I’m here to give you a nice overview and get you up to speed.

What Misinformation?

As with all Next.js releases, a few APIs are moved into the stable core and recommended for use, while others remain in the experimental channel. “Experimental” APIs are still up for debate: the main functionality is there, but how these APIs behave and how they can be used is still susceptible to change, as there may be bugs or unexpected side effects.

In version 13, the experimental releases were big and took over the spotlight. This caused many people to consider the whole release unstable and experimental — but it’s not. Next.js 13 is actually quite stable and allows for a smooth upgrade from version 12 if you don’t intend to adopt any experimental API. Most changes can be adopted incrementally, which we’ll get into in more detail later in this post.

Releases Summary

Before we dig deeper into what each announcement entails, let’s take a quick look at which features are experimental and which are stable.

Experimental

  • App Directory;
  • New Bundler (Turbopack);
  • Font Optimization.

Stable

  • “New” Image Component to replace legacy Image component as default;
  • ES Module Support for next.config.mjs;
  • “New” Link component.

The App Directory

This feature is actually a big architectural rewrite. It puts React Server Components front and center, leverages a whole new routing system and router hooks (under next/navigation instead of next/router), and flips the entire data-fetching story.

This is all meant to enable big performance improvements, like eagerly rendering each part of your view which doesn’t depend on data while suspending (you read that right!) the pieces which are fetching data and getting rendered on the server.

As a consequence, this also brings a huge mental model change to how you architect your Next.js app.

Let’s compare how things were versus how they will work in the App directory. When using the /pages directory (the architecture we have been using up to now), data is fetched from the page level and is cascaded down toward the leaf components.

In contrast, given that the app directory is powered by Server Components, each component is in charge of its own data, meaning you can now fetch-then-render every component you need and cache them individually, performing Incremental Static Regeneration (ISR) at a much more granular level.
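To make this more tangible, here is a minimal sketch of what such a component could look like (the file path, API URL, and data shape are illustrative, not taken from an official example):

// app/turbines/page.jsx — an async Server Component that fetches its own data
async function getTurbines() {
  // in the app directory, fetch results are cached by default
  const res = await fetch('https://api.example.com/turbines');
  if (!res.ok) throw new Error('Failed to fetch turbines');
  return res.json();
}

export default async function TurbinesPage() {
  const turbines = await getTurbines();
  return (
    <ul>
      {turbines.map((turbine) => (
        <li key={turbine.id}>{turbine.name}</li>
      ))}
    </ul>
  );
}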

Additionally, Next.js will carry out optimizations: requests will be deduped (identical requests fired by different components won’t hit the network more than once), thanks to a change in how the fetch runtime method works with the cache. By default, all requests will use strong cache heuristics (“force-cache”), which can be opted out of via configuration.

You read it right. Next.js and React Server Components both interfere with the fetch standard in order to provide resource-fetching optimizations.
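As a rough sketch of that per-request configuration (the URLs are placeholders):

// inside an async Server Component, for example:
// cached by default (roughly “force-cache”)
const stats = await fetch('https://api.example.com/stats');

// opt out of the cache for always-fresh data
const live = await fetch('https://api.example.com/live', { cache: 'no-store' });

// or revalidate on an interval: granular ISR, per request
const prices = await fetch('https://api.example.com/prices', {
  next: { revalidate: 60 }, // seconds
});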

You Don’t Need To Go “All-In”

It is important to point out that the transition from the /pages architecture to /app can be done incrementally, and both solutions can coexist as long as routes don’t overlap. There’s currently no mention in Next.js’ roadmap about deprecating support for /pages.
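An illustrative file layout for such an incremental migration might look like this (paths are hypothetical):

// pages/
//   index.js          → still served by the pages router ("/")
//   about.js          → "/about"
// app/
//   dashboard/
//     layout.js       → shared layout for the dashboard segment
//     page.js         → served by the app router ("/dashboard")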

Recommended Reading: ISR vs DPR: Big Words, Quick Explanation by Cassidy Williams

New Bundler And Benchmarks

Since its first release, Next.js has used webpack under the hood. This year, we have watched a new generation of bundlers, written in low-level languages, popping up, such as ESBuild (which powers Vite), Parcel 2 (Rust), and others. We have also watched Vercel setting the stage for a big change in Next.js. In version 12, they added SWC to their build and transpilation process as a step to replacing both Babel and Terser.

In version 13, they announced Turbopack, a new bundler written in Rust with very bold performance claims. Yes, there has been controversy on Twitter about which bundler is the fastest overall and how those benchmarks were measured. Still, it’s beyond debate how much Turbopack can actually help large projects written in Next.js, with way better ergonomics than any other tool (for starters, configuration is built in).

This feature is not only experimental but actually only works with next dev. You should not (and as of now can’t) use it for a production build.
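If you’d like to try it out in development, Next.js 13 exposes it behind a flag:

npx next dev --turbo
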
Font Optimization

The new @next/font module applies performance optimizations to your web fonts at build time: it downloads the font assets and self-hosts them alongside the rest of your static assets. This saves a round-trip to a third-party server, avoids an additional handshake, and ultimately delivers your font in the fastest way possible, caching it properly with the rest of your resources.

Remember that when using this package, it’s important to have a working internet connection the first time you run your development build so the fonts can be downloaded and cached properly; otherwise, Next.js will fall back to system fonts if adjustFontFallback is not set.

Additionally, @next/font has a special module for Google Web Fonts, conveniently available as they are widely used:

import { Jost } from '@next/font/google';
// get an object with font styles:
const jost = Jost();
// define them in your component:
<html className={jost.className}>

The module will also work in case you use custom fonts:

import localFont from '@next/font/local';
const myFont = localFont({ src: './my-font.woff2' });
<html className={myFont.className}>
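The function also accepts options. As a sketch (these particular values are illustrative; check the @next/font documentation for the full list):

const myFont = localFont({
  src: './my-font.woff2',
  display: 'swap',           // how the browser swaps the font in while it loads
  adjustFontFallback: false, // opt out of the automatic fallback mentioned above
});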

Even though this feature is still in Beta, it is considered stable enough for you to use in production.

New Image And Link Components

Arguably the most important components within the Next.js package have received a slight overhaul. Next Image has been living a double life since Next.js 12 in next/image and next/future/image. In Next.js 13, the default component is switched:

  • next/image moves to next/legacy/image;
  • next/future/image moves to next/image.

This change comes with a codemod, a command that attempts to auto-migrate the code in your app. This allows for a smooth migration when upgrading Next.js:

npx @next/codemod next-image-to-legacy-image ./pages

If you make this change and do not have visual regression tests set up, I'd recommend taking a good look at your pages in every major browser to see if everything looks correct.

For the new Link component, the change should also be smooth. The <a> element within <Link> is not necessary nor recommended anymore. The codemod will either remove it or add a legacyBehavior prop to your component.

npx @next/codemod new-link ./pages

In case the codemod fails, you will receive a linting warning on dev, so keep an eye on your terminal!
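In practice, the change looks roughly like this (the route is illustrative):

import Link from 'next/link';

// Next.js 12 and earlier: an <a> element was required inside <Link>
<Link href="/about"><a>About</a></Link>

// Next.js 13: <Link> renders the <a> itself
<Link href="/about">About</Link>

// keep the old markup temporarily via the escape hatch
<Link href="/about" legacyBehavior><a>About</a></Link>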

ES Modules and Automatic Module Transpilation

These two upgrades have flown under the radar for most, but I consider them especially useful for people working with Monorepos. Up until now, it was not very ergonomic to share configuration between configuration files and other files that may be used at runtime. That’s because next.config.js is written with CommonJS as the module system, which can’t import from ESM files. Now, Next.js supports ESM simply by adding type: "module" to your package.json and renaming next.config.js to next.config.mjs.

Note: The “m” stands for “module” and is part of the Node.js spec for ESM support.
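As a sketch, a minimal ESM config could look like this (the imported shared module is hypothetical):

// next.config.mjs
import { appName } from './shared/constants.mjs'; // hypothetical shared ESM module

const nextConfig = {
  reactStrictMode: true,
  env: { APP_NAME: appName },
};

export default nextConfig;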

For Monorepos using internal packages (JavaScript packages that are not published to NPM but instead are consumed from source by sibling apps within the monorepo), a special plugin was necessary to transpile those modules on build-time when consuming them. From Next.js 13 onwards, this can be arranged without a plugin by simply passing an (experimental) property to your next.config.mjs:

const nextConfig = {
  experimental: {
    transpilePackages: ['@my-org/internal-package'],
  },
};

You can see an example in the Apex-Monorepo template. With these settings, it is possible to develop both the dependency component and your app simultaneously without any publishing or workaround.

What’s Next?

If you’re still interested in playing around and talking more about these features, I’ll be running an Advanced Next.js Masterclass from Nov 30 – Dec 15, 2022 — I’d be super happy to welcome you there and answer all of your questions!

Until then, let me know in the comments below or tweet over to me at @AtilaFassina on how your migration has been and your thoughts on the experimental features.

]]>
hello@smashingmagazine.com (Atila Fassina)
<![CDATA[Practical Steps To Build Transparency In Your Remote Business]]> https://smashingmagazine.com/2022/11/practical-steps-build-transparency-remote-business/ https://smashingmagazine.com/2022/11/practical-steps-build-transparency-remote-business/ Fri, 04 Nov 2022 09:00:00 GMT It used to be the norm that businesses were opaque, with employees only having access to what they needed to get their work done. Over the past twenty years, though, there has been an increase in transparency in businesses: an article in HBR describes transparency as a leadership imperative, and studies conducted by companies like Slack and Tinypulse highlight the importance of transparency to employees.

“Transparency is the process of being open, honest, and straightforward about various company operations. Transparent companies share information relating to performance, small business revenue, internal processes, sourcing, pricing, and business values.”

Forbes

Some companies are transparent with their employees only; others take it further and are transparent with the world. In a remote organization, transparency is even more critical. When you rarely see your colleagues, transparency helps people feel connected to one another and to the business. It can also help to reduce timezone bias as it relies on asynchronous communication, which makes it easier for people in any timezone to participate.

In this article, I will share some tactics for improving transparency within your organization. Some of them are tactics I’ve implemented myself through my years as a remote worker and leading a remote company, and others are best practices and guidance shared by companies leading the pack in terms of remote work.

Tactics To Improve Transparency

Default To Open

Imagine signing in to your company’s Slack team, where little of the day-to-day work happens in public channels. Some people say hi in the morning or goodbye in the evening, but all the work happens in private channels and DMs. The #general channel is a dead zone. Work happens in silos, and it’s hard to know what is going on at any one time. Individuals have to ask for information when needed, and sometimes they don’t even know where to look. This can cause bottlenecks and slow down work.

At the opposite end of the spectrum is a remote team where everything is in the open: hundreds of channels cover the whole range of work done in the company, and personal interests are chucked in too. Just by looking at the list of channels in your work’s messaging platform, you’ll see the overall work and identity of the company, and anyone can jump into any channel and connect with what’s going on there. It makes people feel more connected to work across the company rather than restricting people to work silos. It also has the advantage of exposing questions and discussions to more people. You never know who might have the answer to your question, and by posting it in public, someone you wouldn’t expect might be able to help.

Practical Tips

  • Onboard new employees on how to work in the open through their onboarding period and gently nudge them to post questions and work discussions in public channels.
  • Create naming conventions for your teams’ channels because you will end up with a lot, and it helps with the organization if people can see them grouped together (e.g., #marketing-content, #marketing-design, #dev-qa, and so on).
  • Remember that some things shouldn’t be public. Human Resources matters such as illness or performance and anything that is special category data under GDPR should not be shared by the company. You can, however, be transparent about what won’t be open.

Lean In To Asynchronous Communication

Synchronous communication happens in real time, whether that’s on a video or voice call, messaging, teams, or in person. Asynchronous communication happens in your own time, and immediate responses are not requested or expected within the exchange.

There are many reasons why asynchronous communication is beneficial in a remote company:

  • Reduces roadblocks as employees don’t need to be online at the same time;
  • Increases flexibility for employees as they can prioritize when to respond;
  • Combats presenteeism;
  • Demonstrates trust in employees;
  • Reduces timezone bias;
  • Increases transparency as it relies on written communication and documentation.

Prioritizing asynchronous communication over synchronous communication doesn’t mean that you will never have a meeting or talk at the same time. Instead, it means that your first preference is tools such as documentation and shared issue trackers/task managers instead of having a call. Documentation is kept up to date so people can find what they need for themselves, and issue trackers capture what someone is doing and where they are at and provide spaces for collaboration that don’t require everyone to be online at the same time. By preferring these practices over synchronous practices, work carried out within the organization is always transparent and available.

Practical Tips

  • Choose a tool that people love to use that they can use to keep track of their work. There are so many project management tools that you should be able to find one that suits your way of working.
  • Keep your issue tracker updated with all of the most up-to-date information about where a task or project is, including links to works in progress, such as Google Docs, Slides, and spreadsheets.
  • Create guidelines and onboard people to this way of working. Don’t just assume that people know how to work asynchronously. If they are from a traditional office, it’s unlikely that they will.
  • Encourage everyone to ask, “do I need a meeting for this?” and make working in other spaces the default. This ensures full transparency of what’s happening, and people can engage in their own time.
  • Make sure that decisions are documented so that everyone knows what action to take and why.

Document Processes And Continuously Improve

Effective remote companies need to have great documentation. This is especially true as companies grow. When you’re a small number of people, with just a handful of people in each role, it can feel easier just to get on and do the work and not worry about documentation.

Without embedded, documented practices, different approaches to the same task will proliferate, and it will become difficult to know what is the best approach for the organization as a whole. Growth that is not managed leads to inefficiencies within the business because things spin up in new ways all the time. When new employees join, they are unclear about whose approach is the right approach, and interpersonal issues may surface just because people disagree about the best way to do things.

Good documentation creates a shared expectation about how things should be done. A well-documented process should be a ladder rather than a cage. It should provide you with the steps to get to where you want to go, which you might need to adapt to your specific circumstance rather than being something fixed that you have to stick to rigidly.

For documentation to be useful, it has to be kept up-to-date. Out-of-date documentation is worse than no documentation at all, as it tells you the wrong way to go about doing something. Therefore, I advocate keeping documentation as straightforward and to the point as possible — only enough information so that a reader can achieve their goals. Anything else is just maintenance overhead that you don’t need.

Once you have good documentation in place, it means that all employees can find what they need by looking at the documentation.

Employees shouldn’t need to jump on a call for a walkthrough, ping lots of different people to find out what they need, or be confused by the different ways that they are told to do something. This is essential to enable everyone to work autonomously and reduce time wasted on calls because something isn’t written down.

Practical Tips

  • Ensure that your documentation tool has everything people need to navigate and update it easily. We find built-in version control essential to see what has changed (spoiler: we use WordPress for documentation).
  • Add dates to your documentation, so people know when it was written. If you want to embed practices of continuous improvement, you can add expiry dates to your documents, and process owners are expected to review and complete any updates.
  • Provide clear expectations around documentation. If a process exists, it must be documented.
  • Gitlab sets the standard with their “handbook-first” approach. It’s worth reading how they approach documentation and adapting what is useful to your own context.

Manage The Noise

An advantage of transparency is that information is there to be found. However, there needs to be the correct systems and processes in place so that people can find them. As someone from a company that has been remote for 10+ years, I’m amazed at the amount of documentation and communication that has built up over the years, not to mention the proliferation of tools. If you’re early in your remote journey, I highly recommend creating structures now that will enable you to keep on top of all the comms as you grow.

You need to proactively manage your docs and tools. It’s like a garden: you plant flowers in the flower beds, maybe a few trees and shrubs, and get your lawn looking lovely. But over time, the weeds start to appear, the shrubs become overgrown, and the flowers need to be dead-headed.

Transparency can have a positive impact on your company, but if you don’t tend to your documentation and information, it can end up being like an overgrown garden, where you have to clamber through weeds to get what you want or find a path through it.

Practical Tips

  • Create onboarding pathways for different roles so that when new people join the company, they know where to find what they need and are taken through it step by step.
  • Stay on top of your information architecture and make sure it remains intuitive for employees. Ideally, keep your IA the same or similar across your different tools (e.g., GDrive for docs, handbooks, and so on).
  • Often, people will just search for what they need, so make sure that you have a working search tool.
  • Set expectations about what people need to stay on top of. It’s important that people are up-to-date on what’s happening in their areas, but do they need to read every piece of communication?
  • Create an announcements channel or blog, with the expectation that the only items posted are things that everyone has to read. This makes sure that nothing important gets missed.

Record Meetings And Provide Useful Notes

Preferencing asynchronous communication doesn’t mean never communicating synchronously. There are times when meetings are inevitable and valuable. However, that doesn’t mean that what happens in the meeting needs to stay within the black box of that meeting. We have tools at our disposal to make these transparent, but as with all things, we want these to be as frictionless as possible.

Recording a meeting so that anyone who is not present can catch up on it can be helpful. Also, this reduces the need for detailed minutes as anyone who wants specific details can watch the recording or catch up on the transcript (Zoom has built-in transcription features, which provide a good enough transcript to scan what’s going on). This may not be suitable for all meetings as it can have a knock-on effect on people’s behavior, making them more guarded.

Alongside that, there are the meeting notes. There are as many different ways of producing notes as there are people writing them. You need to determine the purpose of your notes to put them in the best format for your organization. When thinking about it, ask yourself what someone who hasn’t attended the meeting needs to know. If a video is available, do they need full minutes or just notes about decisions, actions, and deadlines? Who is going to take the notes? Are they always taken by a specific person, or is it a role that rotates?

Practical Tips

  • Always have an agenda for a meeting and ensure that anyone who adds an item to the agenda also writes a summary with links to supporting documentation. This provides the basis for the notes and means the note taker doesn’t have to re-summarise.
  • Make sure everyone knows what the expectations are around meeting notes. A standard meeting template means that everyone knows what they need to provide before and during the meeting and that everyone reading notes knows what to look for.
  • Ask yourself if you need notes every time. Maybe a video suffices for a discussion, especially if all of your actions are captured in your issue tracker. Maybe it’s enough just to keep an activity log, so everyone stays on top of what’s next.

Onboard New Team Members To Transparency

Something I have been guilty of is assuming that people will just be able to join the company and instantly normalize how transparent we are. Actually, it’s quite challenging for someone to go from an organization that is not transparent or doesn’t really think about it to one in which everything is out in the open.

It requires some empathy and imagination to recognize the experience of someone who has just joined the company. As there is a lot of noise, communication, and notifications, there is a mountain of information to climb and years of asynchronous communication stacked up. On top of that is the feeling of vulnerability that comes with being a new employee. When the expectation is that everything is discussed in public channels, it can make people feel reticent about putting themselves out there, asking the “stupid” questions that are so important to navigating your way around somewhere new.

That makes it essential to familiarise people with the concepts and tactics of transparency through the onboarding process and for managers and peers to support new starters with that. You can’t just assume someone will get it, so you need to support them to succeed.

Practical Tips

  • Have clear expectations about what people should read and what they can let pass them by. Otherwise, some people will try to read everything. For most people, the work of their immediate team and essential company announcements suffice to begin with.
  • Talk about transparency through the onboarding process, why it is important, and how you practice it within your company.
  • Adjust to your new employee’s level of comfort. Some people will jump straight into public channels, but others will want to take their time. Work with them in DMs or private channels to begin, with the expectation that you’ll move to the public once they are onboarded.
  • Create specific pathways or tables of contents for different roles to take them through the documentation and training they need to read.
  • Provide guides and documentation on how to practice transparency, especially best practices for documentation and for using your issue tracker.

Make Use Of Integrations, APIs And Bots

Integrations, APIs, and Bots let us automate work and prevent information from getting stuck in silos. One of the first things I look for when I’m sourcing a new tool is what integrations it has and whether it will integrate with my stack. If it doesn’t have a native integration, does it have an API so we can have a developer build an integration for us? Or, for simple integration, you can use a tool like Zapier to connect your tools together.

If you’ve been remote for a long time, you can have a proliferation of tools, and manually moving data between them leaves room for human error and creates a huge administrative burden.

However, if you don’t transfer your data, it can lead to information remaining in silos and not getting to where it needs to be. If you are building out a stack for your remote team, I highly recommend working with tools and apps that integrate with one another.

As well as integrations, bots can be massively helpful in automating tasks and removing the need for people to manually run different processes. Some tools that I have found to be useful are Geekbot, which we use for standups, and Donut, which we use for social connections like pairing people up for a coffee. You can use integrations to pipe posts from other tools, such as GitHub or HubSpot, into your Slack channel or MS Teams. Geekbot fatigue is real, though, so beware of having too many standups and bots running simultaneously because if they’re not used well, they can become a bureaucratic task that no one loves.
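For simple cases, you don’t even need a third-party tool: Slack’s incoming webhooks accept a plain JSON payload. Here is a minimal sketch in JavaScript (the webhook URL is a placeholder for your own, and it assumes a runtime with fetch, such as Node 18+):

// post a message to a Slack channel via an incoming webhook
const WEBHOOK_URL = 'https://hooks.slack.com/services/T000/B000/XXXX'; // placeholder

async function postToSlack(text) {
  const res = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }), // incoming webhooks accept a simple { text } payload
  });
  if (!res.ok) throw new Error(`Slack webhook failed: ${res.status}`);
}

await postToSlack('Deploy finished: release notes are in #releases');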

Practical Tips

  • Figure out the bots which are right for your organization. Both Slack and Teams have a lot of bots available.
  • When you are signing up for a new tool, look at the integrations that it currently has and think about how you might want to use the tool in the future.
  • Connect your issue tracker and any other asynchronous tools to your messaging app so that any activity is piped into relevant channels.
  • If bots are causing too much noise, consider creating firehose channels, which are just for piping in information from a specific tool or project.

Equip Everyone To Give And Receive Feedback

When your company is transparent, everything is out in the open all of the time. This means that a culture of transparency must go hand in hand with a culture of feedback. Drive-by feedback from people who don’t have context on a specific project is rarely helpful, nor are cryptic one-liners that say something isn’t great but don’t provide anything constructive about why.

This type of feedback can make people reticent about working in the open, and they can hold things back until they feel it is totally ready. Equally unhelpful are feedback requests that are just “what do you think?” or “can I have feedback?” These requests rarely elicit high-quality feedback.

When you equip your team to give feedback, you create a space where people are okay putting their half-finished projects out there because they know any feedback will be provided in good faith and will help them to achieve their goals. You also need to ensure that people are open to feedback, listen, and receive it in a non-defensive manner. Ultimately, it is up to the person who receives the feedback whether they should implement it or not, but you should always listen and try to understand the other person’s perspective.

Practical Tips

  • Set company-wide expectations around feedback. Some companies might prefer a free-for-all where anyone can provide feedback all the time; others prefer to set the expectation that feedback should come only when it is asked for.
  • Be very specific on what you are asking for feedback for: is it on the design, the content, the tone of voice, the structure, or the message? This will help you to get high-quality feedback.
  • Research different feedback methodologies and adopt a few that are right for your company. Radical Candor is a very popular technique; I like Situation, Behaviour, Impact because of its simplicity, but there are lots of options out there. Whatever you use should be straightforward enough for anyone to use.

Build A Culture Of Transparency

You can build transparency into your practices, but you also need to build it into your culture. A common way to do this is to write transparency into your values, which is great but rarely enough. You’ve got to embed transparency into everything you do, which I hope some of the practices above will help with.

One of the most powerful ways to become more transparent as a company is for people to role-model transparency, especially leaders. If a leader behaves in a particular way, others follow; when someone at the top does something, it signals that it is acceptable behavior.

If a leader does everything in Direct Messages, brings people into meetings all the time, and works in silos, then others will do the same. If your leaders default to open communications, asynchronous practices, collaboration, and information sharing, then they create an environment where others will follow.

It’s not enough for a CEO or founder to say they want to be transparent — they have to practice it like everyone else.

And remember, full transparency isn’t for everyone or for every company. You can set limits on what you are transparent about: some organizations share salaries, others don’t, some share financial info, others don’t, some share everything publicly, and others don’t. Being transparent isn’t necessarily sharing everything; it’s being upfront about what you are going to share, what you aren’t, and why. But remember that some level of transparency in a remote organization will go a long way to helping you be successful.

Practical Tips

  • Role-modelling transparent behaviors should be built into the expectations of every leader. This could be written into role descriptions or behavior frameworks.
  • It’s easy to find yourself working on something in a DM or private space; when you do, gently suggest to others that a discussion is moved into a public channel.
  • Acknowledge behavior and actions that are transparent. We have a kudos bot in our HRS which we can use to credit positive behavior, and transparency is a consideration within our career progression framework.

The Transparency Trap?

Generally, I am a big advocate of transparency, but it’s not without its pitfalls. If you want your organization to be more transparent, then you need to be aware of what these are so that you can work against them. Some examples are:

  • Decision-making can take a lot longer because so many people can provide input.
  • Information overload can be a burden on employees, and they can feel fatigued by the amount of communication.
  • Employees can feel that they are constantly being observed, which can leave them feeling exposed and vulnerable.
  • Some employees will hide what they are doing just to get it right, even if there is nothing to hide.
  • People experiment less because they are afraid to take risks or be vulnerable in front of others.
  • There is an additional administrative burden as people have to produce meeting notes, update documentation and issue trackers, and generally perform what they are doing.
  • Access to a company’s financial information can cause anxiety when times are rocky.
  • Creative work may not always benefit from transparency as people can self-censor during the development process.
  • Sharing all meetings can lead to self-censorship, which can stifle debate.

The researcher Ethan Bernstein talks about the “transparency trap” and explains how some organizations have “found the sweet spot between privacy and transparency, getting the benefits of both.” This means employing different types of boundaries to ensure that privacy is maintained in some areas without losing the benefits of transparency. However transparent you plan to be, it’s important to keep these challenges in mind so that you can work against them and avoid overwhelming or undermining your employees while still getting all of the benefits of transparency.

]]>
hello@smashingmagazine.com (Siobhan McKeown)
<![CDATA[New Smashing Front-End & UX Workshops]]> https://smashingmagazine.com/2022/11/new-frontend-coding-ux-online-workshops/ https://smashingmagazine.com/2022/11/new-frontend-coding-ux-online-workshops/ Thu, 03 Nov 2022 07:00:00 GMT You might know it already, but perhaps not yet: we regularly run friendly online workshops around front-end and design. We have a couple of workshops coming up soon, and we thought that, you know, you might want to join in as well. All workshop sessions are broken down into 2.5h-segments across days, so you always have time to ask questions, share your screen and get immediate feedback.

Meet Smashing Online Workshops: live, interactive sessions on frontend & UX.

Live discussions and interactive exercises are at the very heart of every workshop, with group work, homework, reviews and live interaction with people around the world. Plus, you get all video recordings of all sessions, so you can re-watch at any time, in your comfy chair at your workspace.

Upcoming Live Workshops (Nov 2022 – April 2023)
  • Pushing CSS To The Limit with Amit Sheen, 4 sessions, Nov 2–10 (CSS)
  • Deep Dive On Accessibility Testing with Manuel Matuzović, 5 sessions, Nov 14–28 (Dev). Early birds!
  • Mastering the Design Process with Paul Boag, 4 sessions, Nov 15–23 (Workflow)
  • Figma Workflow Masterclass with Christine Vallaure, 5 sessions, Nov 17 – Dec 1 (UX)
  • Designing The Perfect Web Forms with Vitaly Friedman, 2 sessions, Nov 17–18 (UX). Early birds!
  • Building Modern HTML Emails with Rémi Parmentier, 4 sessions, Nov 23 – Dec 1 (Dev). Early birds!
  • Designing Better Products Masterclass with Stéphanie Walter, 5 sessions, Nov 28 – Dec 12 (UX)
  • Advanced Next.js Masterclass with Átila Fassina, 6 sessions, Nov 30 – Dec 15 (Dev). Early birds!
  • Creating and Maintaining Successful Design Systems with Brad Frost, 5 sessions, Jan 10–24 (Workflow). Early birds!
  • Interface Design Patterns UX Training with Vitaly Friedman, 8 sessions, Mar 10 – Apr 7 (UX)
  • 5× Tickets Bundle: 5 tickets, no expiry, save $500 off the price. Smashing!

What Are Workshops Like?

Do you experience Zoom fatigue as well? After all, who really wants to spend more time in front of their screen? That’s exactly why we’ve designed the online workshop experience from scratch, accounting for the time needed to take in all the content, understand it and have enough time to ask just the right questions.

Our online workshops take place live and span multiple days across weeks. They are split into 2.5h-sessions, and in every session there is always enough time to bring up your questions or just get a cup of tea. We don’t rush through the content, but instead try to create a welcoming, friendly and inclusive environment for everyone to have time to think, discuss and get feedback.

There are plenty of things to expect from a Smashing workshop, but the most important one is focus on practical examples and techniques. The workshops aren’t talks; they are interactive, with live conversations with attendees, sometimes with challenges, homework and team work.

Of course, you get all workshop materials and video recordings as well, so if you miss a session you can re-watch it the same day.

TL;DR

  • Workshops span multiple days, split in 2.5h-sessions.
  • Enough time for live Q&A every day.
  • Dozens of practical examples and techniques.
  • You’ll get all workshop materials & recordings.
  • All workshops are focused on frontend & UX.
  • Get a workshop bundle and save $250 off the price.

Bonus: Free Online Community Events

Dive deep into discussions around accessibility and design systems with our upcoming online events — free for everyone, so please do bring your friends along!

Thank You!

We do our best to ensure that our online workshops are worth your time. We’d sincerely appreciate it if you could spread the word with your wonderful colleagues and friends.

Thanks for staying smashing and take good care of each other!

]]>
hello@smashingmagazine.com (Iris Lješnjanin)
<![CDATA[Designing The Perfect Mobile Navigation UX]]> https://smashingmagazine.com/2022/11/navigation-design-mobile-ux/ https://smashingmagazine.com/2022/11/navigation-design-mobile-ux/ Wed, 02 Nov 2022 14:00:00 GMT When it comes to complex navigation on mobile, we often think about hamburger menus, full-page overlays, animated slide-in-menus, and a wide range of nested accordions. Not all of these options perform well, and there are some alternative design patterns that are worth exploring. Let’s dive in!

This article is part of our ongoing series on design patterns. It’s also a part of the 4-week live UX training 🍣 and is available in our recently released 8h-video course.

1. Avoid Too Many Signposts In Your Navigation

One of the most common strategies for navigation on mobile is to use a good ol’ accordion. In fact, accordions work very well for multiple levels of navigation and are usually better than slide-in menus. However, since we open and collapse menus, we also need to indicate it with an icon. This often results in too many signs pulling users’ attention in too many directions.

In the example above, Vodafone uses 3 different icons, pointing either to the bottom (accordion collapsed), to the top (accordion open), or to the right. The latter indicates that the selection is a link, driving users to a category page. This is, however, not immediately obvious.

An alternative — and perhaps a slightly more obvious way — is by adding a link underline to links and removing the icons next to them altogether. As a side effect, if we eventually have to mix collapsible menus and links to categories, it’s perhaps a bit more obvious which is which.

In general, pointing users in too many directions is often unnecessary. It’s quite likely that you’ll be able to achieve better results with just two icons, indicating whether an accordion is open or not. That’s how it’s done on Swisscom (pictured above), for example.

Alternatively, Icelandic Postal Service uses just one single icon, and to indicate expansion of the accordion, changes the color of the heading of the section. This seems to be working fairly well, too.

It’s a good idea to avoid too many icons guiding users in too many directions. If we can get away without them, it’s a good idea to test what happens if we do.
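As a rough sketch of that idea in React (component and class names are made up for illustration), a single icon can be rotated with CSS based on the expanded state instead of swapping between several icons:

import { useState } from 'react';

function NavAccordionItem({ title, children }) {
  const [open, setOpen] = useState(false);
  return (
    <li>
      <button aria-expanded={open} onClick={() => setOpen(!open)}>
        {title}
        {/* one icon only; rotate it via the is-open class instead of swapping icons */}
        <span className={open ? 'chevron is-open' : 'chevron'} aria-hidden="true">▾</span>
      </button>
      {open && <ul>{children}</ul>}
    </li>
  );
}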

2. Don’t Overload Your Navigation With Multiple Actions

Sometimes navigation menus combine two different functions in one single navigation bar. For example, what if you have categories that you want to link to directly, but then you also want to allow for quick jumps into sub-menu items?

Usually, this means adding two different actions to the same navigation bar. A tap on the title of the category would lead to the category; a tap on the icon would open an accordion or prompt a separate view. And to make this difference a bit more obvious, we often add a vertical separator. Unfortunately, in practice, that doesn't work too well.

In the example above on Tivoli Gardens Copenhagen, each section title is linked to a standalone category page. A tap on the icon on the right, however, opens a separate sub-navigation. Indeed, a vertical separator does help to distinguish between the two actions, but it still causes plenty of mistakes.

Sometimes users want to just assess the navigation at a glance, and they aren’t yet committing to going to a dedicated page. Yet here they are, being pushed towards a page just when they aren’t ready to go there at all. And once they do, they then have to return back to the previous page and start all over again. Ideally, we’d avoid this problem altogether.

On Mammut, the entire navigation bar drives the user to the second level of navigation. Their users can move to discover all items within the category or jump into a sub-category. Problem solved. Rather than overloading the navigation bar with separators and separate actions, we can help users move forward confidently and comfortably and prevent mistakes altogether. The next action is always just a tap away.

Always consider adding a link to the category page within the expanded accordion or in a separate view, and assign only a singular function to the entire bar — opening that view.
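Here is a minimal sketch of that pattern in React (the data shape and names are illustrative): the entire bar toggles the panel, and the category page becomes an explicit link inside it:

import { useState } from 'react';

function CategoryNavItem({ category }) {
  const [open, setOpen] = useState(false);
  return (
    <li>
      {/* the entire bar has one job: opening the sub-navigation */}
      <button aria-expanded={open} onClick={() => setOpen(!open)}>
        {category.title}
      </button>
      {open && (
        <ul>
          {/* the category page itself is an explicit link, not a competing tap target */}
          <li><a href={category.url}>All {category.title}</a></li>
          {category.children.map((child) => (
            <li key={child.url}><a href={child.url}>{child.title}</a></li>
          ))}
        </ul>
      )}
    </li>
  );
}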

3. Use The Billboard Pattern For Top Tasks

Not every navigation item is equally important. Some items are more frequently used, and they might deserve a little bit more spotlight in your navigation. In fact, if some items are more important than others, we can use the billboard pattern and display them more prominently above the navigation.

In the examples above — Otto, Korea Post and Deutsche Post — we display the most important topics or features more prominently while the rest of the navigation is available, but gets a slightly less prominent presence.

4. Nested Accordions Work For Expert Users

Just like we might have too many icons, we might end up with too many nested levels of navigation, neatly packaged within nested accordions. For complex sites, it seems like one of the few options to present a huge amount of navigation available on the site. In fact, we could argue that by allowing users to jump from any page to any page on the 4th or even 5th level of navigation, we can massively speed them up in their journeys.

Surprisingly enough, this seems to be right. Expert users typically don’t have massive usability issues with multiple nested accordions. However, infrequent users often struggle to find the information that they need because they don’t understand how the information is organized.

In complex environments, navigation usually mirrors the way the organization is structured internally, and without that prior knowledge finding the right route to the right page is difficult at best. In that case, once a user is looking for something very specific, they seem to use search rather than traversing the navigation tree up and down. This becomes apparent especially when the contrast between levels isn’t obvious, such as on WHO, for example (pictured below).

If we need to include multiple levels of navigation within nested accordions, it’s a good idea to sprinkle just a tiny bit of typographic and visual contrast over the menu so that every level of navigation is clearly distinct, and it’s also obvious when links to actual pages start appearing. Compare the quick mock-up below.

Another way to indicate multiple levels of nesting is by adding different types of icons to make it more obvious where users currently are. This is how it’s done on the Stockholm University website. Personally, I wasn’t able to verify how well this design pattern works, but when combined with better typographic contrast, this might be performing better and is definitely worth testing.

DOT, a public transportation website from Denmark, uses the + icon across multiple levels for their nested accordions. While the chevron is positioned on the left, the + icons are always positioned on the right. Thus, they display four levels of navigation with nested accordions. Perhaps it’s a bit too much navigation, but it seems to be working fairly well.

On the other hand, it might not be needed at all. Allianz gets away with using a single icon (chevron up and down), but with clearly different designs for every navigation level. The first level is highlighted with white text on a blue background; the second level is designed in bold; and the third level is plain text (which could, of course, be links, too).

Plus, instead of showing all items at the same time, only the four most important ones are displayed at first, and the others are available on a separate page. This is a great example worth keeping in mind.

Nested accordions can work with enough contrast between each level. However, if you have more than three levels of navigation, making it work with a bit of indentation and various typographic styles will become quite difficult.

5. Slide-In-Menus Don’t Perform Very Well

Admittedly, many navigation menus on mobile aren’t accordions. Because each navigation section might contain dozens and dozens of sub-navigation items, it’s common to see the so-called slide-in menus, with navigation items sliding in horizontally and presenting users with a comprehensive menu of all options available on that level.

With that pattern, quick jumps from one level to another are impossible, as users always have to go back to the previous level to move forward.

Unilever displays only one level of navigation at a time: as users navigate further down the rabbit hole, each new level replaces the previous one. This does work well to fit all the items and all the levels that an organization might ever need. However, if a user isn’t quite sure where to go, the discovery of content tends to be slower. Also, it’s not necessarily clear where the “Back” button will go to.

If we do use a slide-in menu, it’s a good idea to replace a generic “Back” button with a more contextual label, explaining to users where exactly they will be returning to. Deutsche Post (pictured above) does just that. Also, notice that the main page of the menu features some of the top tasks on the site, in addition to the slide-in menu.

Additionally, The New England Journal of Medicine adds some typographic contrast to each section, so it’s a bit more obvious right away what would open up another section and what is a link driving to a new page. In fact, we can go quite far by just making links more obvious yet again, as displayed in the example of ADAC below.

It’s worth noting that animated slide-ins can be quite disorienting, distracting, and annoying for people who use the navigation a lot. Add to that the slow speed of content discovery, a few too many icons, and a bit too little contrast between the items, and you have a recipe for disaster.

A slide-in menu is an option but rarely the best one. It surely doesn’t perform as well as accordions where jumps between levels are faster and there is rarely a need to go back. However, accordions aren’t the only option we have — especially when we want to help users navigate between levels faster, not slower.

6. The Navigation Stack Works For Quick Jumps

As we move users from one level to another, we also need to provide a way for them to move back. But instead of just displaying a "Back" button, we can stack all the previous sections like a breadcrumb under each other. And so we get a navigation stack.

On Coolblue, a Dutch retailer, as users keep exploring deeper levels of navigation, they can always return all the way to the previous levels that they’ve been coming from. This allows for faster jumps between levels, and is definitely a good idea when driving users from one-page overlay to another.
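One way to model such a stack in React is to keep the trail of opened levels in an array; tapping any earlier entry truncates the stack back to that level. A minimal sketch (the data shape and names are illustrative, and item titles are assumed unique):

import { useState } from 'react';

function NavigationStack({ root }) {
  const [stack, setStack] = useState([root]);
  const current = stack[stack.length - 1];
  return (
    <nav>
      {/* every previous level stays visible and tappable, like a breadcrumb */}
      {stack.slice(0, -1).map((level, index) => (
        <button key={level.title} onClick={() => setStack(stack.slice(0, index + 1))}>
          {level.title}
        </button>
      ))}
      <ul>
        {current.children.map((item) =>
          item.children ? (
            <li key={item.title}>
              <button onClick={() => setStack([...stack, item])}>{item.title}</button>
            </li>
          ) : (
            <li key={item.title}>
              <a href={item.url}>{item.title}</a>
            </li>
          )
        )}
      </ul>
    </nav>
  );
}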

7. Use Curtain Pattern To Show Multiple Levels of Navigation

It shouldn’t be a big revelation that the speed of navigation is at its maximum when we display navigation options right away. This is why we see large buttons appearing as large clickable cards, filters, and bottom sheets. However, how do we use this in our navigation menus, which barely have any space anyway?

We could make better use of the available space. For example, what if we split the screen vertically, showing one level of navigation on each side? Very much like a curtain that we’d pull to one side of a window. That’s what LCFC (pictured above) does. To move between levels, we don’t need to close any menus or return back at all — instead, we click on large areas, move forward, and explore.

And what if you need slightly more space for your lengthy navigation labels? Well, the labels could wrap onto multiple lines, or we could reduce the width by replacing text labels with icons (as long as they are unambiguous). It might not work for every project, but it seems to work for Playstation (pictured below).

The entire first level of navigation collapses into tabs; yet moving from one level to the other doesn’t require any jumps back. You might be wondering what the three vertical lines represent — ideally, one could drag away the pane, but it doesn’t seem to be working as expected, unfortunately.

ESPN uses a very similar approach but reduces the amount of space for the first level to the minimum. It could be a little bit larger to prevent mistaps, but the idea is pretty much the same: showing two levels of navigation at the same time.

We could use the same approach in other contexts, such as filtering. We display all filter attributes on the left, and allow users to choose the specific values for these filters in the second vertical pane. That’s what the filtering experience looks like on Myntra, an Indian eCommerce retailer pictured below.

If some filters don’t fit in the right pane, users can scroll to explore more or even search for a specific filter in the selection. Of course, the "Apply" button has to stay floating. It would be lovely to see the total number of results on that button, too.

We could take it even further, though. For example, sometimes users need to select filters that are relevant to them and define their values in the next step. In that case, we group all filters into a few categories (or even sub-categories), and then present all of the categories and filters as sub-categories side-by-side.

With FT Screener, for example, users can add or change criteria by exploring multiple levels at the same time — both the labels for groups and the filters living within those groups. Once a filter has been chosen, it’s added to the overview on the top. That’s a simple filter constructor for sophisticated filtering on mobile.

The vertical split could be used to quickly select one important preset or make a single choice. That would be the case for a language selector, for example. We could organize all supported countries and languages as cards or accordions, but they could also work as vertical tabs, as it’s done in the footer of Oracle.

This way, we display only options that are relevant to users. They never have to go to irrelevant sections or pages since they get a preview and can navigate away from it quickly, should they wish to do so.

In general, the curtain pattern works well with a quite flat content architecture, but is difficult to manage with three or more levels. It shows its strengths when the speed of navigation matters and when users are likely to jump between sections a lot.

It’s way faster than slide-in menus but less flexible than accordions. Still, a great lil’ helper to keep in mind to make better use of the available space on mobile.

8. You Might Not Need 3+ Levels of Navigation

The curtain pattern works well for two levels of navigation, but you might have many more levels than that. In that case, it might be a good idea to test whether it actually has to be this way. What if you show only two or three levels via your menu drawer and make the rest available on standalone pages?

The University of Antwerp gets away with just one level of navigation on mobile. All the sub-sections exist on standalone pages as cards. In fact, there are dozens of links on each page, but as long as the navigation is obvious, this might be just what you need.

Gov.uk isn’t a particularly small website, but it features only two main sections with plenty of subsections in its navigation on mobile. However, no third- or fourth-level navigation is accessible from the menu drawer. Everything else is reachable via links and cards on separate pages.

Korea Post follows along with an interesting twist on that idea. On tap, the menu shows all items living on the second level and automatically reveals the options from the third level, too. Additionally, breadcrumbs include drop-downs that allow users to jump quickly between the siblings of each level. You can find out more about that pattern (sideways breadcrumbs) in Designing A Perfect Breadcrumbs UX.

Do you need to display more than two levels of navigation? Perhaps it is indeed necessary, but chances are high that it isn’t. So perhaps it’s worth testing a design that features only two levels. Additionally, we can add another feature to it to make navigation even faster.

9. Query User’s Intent With Navigation Queries

In addition to search and navigation, we could study some of the most frequently visited pages or the most popular tasks and show them directly, as we saw in the Deutsche Post example earlier.

We could also add navigation queries to ask users about their intent, and allow them to choose a topic relevant to them. Think about it as a mini-search engine for your navigation, designed as a seamless autocomplete experience. This would give users guidance toward the goal and help them navigate more reliably.

Cosmos Direkt, a German insurance company, features a <select>-menu that allows users to choose a particular task that they’d like to perform on the site. This type of navigation exists in addition to search and the classic navigation menu, but it increases the speed to relevance significantly.

Wrapping Up

Just as a quick summary, here are a few things to keep in mind when dealing with complex multi-level navigation:

  • Accordions work well, and for expert users, they work even if they are nested.
  • Remove icons that you don’t need. Avoid icons pointing in more than two directions.
  • Use the billboard pattern to highlight top tasks that users want to complete on the site.
  • For nested navigation levels, make sure that each level is distinct (indentation + type styles).
  • Whenever possible, make links obvious by underlining them.
  • Slide-in menus don’t perform very well; they are slow and distracting. Accordions are likely to perform better.
  • Keep the navigation stack of the levels that users browsed through to allow for quick jumps.
  • Curtain navigation is fast! Consider using it when you have two, or at most three, levels of navigation.
  • Perhaps we don’t need to show all levels of navigation and instead can bring users to the relevant page to navigate from there.
Meet “Smart Interface Design Patterns”

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our shiny new 8h-video course with 100s of practical examples from real-life projects. Plenty of design patterns and guidelines on everything from accordions and dropdowns to complex tables and intricate web forms — with five new segments added every year. Just sayin’! Check out the free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Putting The Graph In GraphQL With The Neo4j GraphQL Library]]> https://smashingmagazine.com/2022/11/graph-neo4j-graphql-library/ https://smashingmagazine.com/2022/11/graph-neo4j-graphql-library/ Tue, 01 Nov 2022 11:00:00 GMT This article is sponsored by Neo4j

GraphQL enables an API developer to model application data as a graph and enables the API clients that request that data to easily traverse this data graph to retrieve it. These are powerful, game-changing capabilities. But if your backend isn’t graph-ready, these capabilities could become liabilities by putting additional pressure on your database and consuming more time and resources.

In this article, I’ll shed some light on ways you can mitigate these issues when you use a graph database as the backend for your next GraphQL API by taking advantage of the capabilities offered by the open-source Neo4j GraphQL Library.

What Graphs Are, And Why They Need A Database

Fundamentally, a graph is a data structure composed of nodes (the entities in the data model) along with the relationships between nodes. Graphs are all about the connections in your data. For this reason, relationships are first-class citizens in the graph data model.

Graphs are so important that an entire category of databases was created to work with graphs: graph databases. Unlike relational or document databases that use tables or documents, respectively, as their data models, the data model of a graph database is (you guessed it!) a graph.

GraphQL is not and was never intended to be a database query language. It is indeed a query language, yet it lacks much of the semantics we would expect from a true database query language like SQL or Cypher. That’s on purpose. You don’t want to expose your entire database to all the client applications out there in the world.

Instead, GraphQL is an API query language, modeling application data as a graph and purpose-built for exposing and querying that data graph, just as SQL and Cypher were purpose-built for working with relational and graph databases, respectively. Since one of the primary functions of an API application is to interact with a database, it makes sense that GraphQL database integrations should help build GraphQL APIs that are backed by a database. That’s exactly what the Neo4j GraphQL Library does — makes it easier to build GraphQL APIs backed by Neo4j.

One of GraphQL’s most powerful capabilities enables the API designer to express the entire data domain as a graph using nodes and relationships. This way, API clients can traverse the data graph to find the relevant data. This makes better sense because most API interactions are done in the context of relationships. For example, if we want to fetch all orders placed by a specific customer or all the products in a given order, we’re traversing the pattern of relationships to find those connections in our data.

Soon after GraphQL was open-sourced by Facebook in 2015, a crop of GraphQL database integrations sprang up, evidently in an effort to address the n+1 problem (more on it below) and similar issues. Neo4j GraphQL Library was one of these integrations.

Common GraphQL Implementation Problems

Building a GraphQL API service requires you to perform two steps:

  1. Define the schema and type definitions.
  2. Create resolver functions for each type and field in the schema that will be responsible for fetching or updating data in the data layer.

Combining the type definitions and resolver functions gives you an executable GraphQL schema object. You may then attach the schema object to a networking layer, such as a Node.js web server or a lambda function, to expose the GraphQL API to clients. Often, developers will use tools like Apollo Server or GraphQL Yoga to help with this process, but it’s still up to them to handle the first two steps.

If you’ve ever written resolver functions, you’ll recall they can be a bit tedious, as they’re typically filled with boilerplate data fetching code. But even worse than lost developer productivity is the dreaded n+1 query problem. Because of the nested way that GraphQL resolver functions are called, a single GraphQL request can result in multiple round-trip requests to the database. Addressing this typically involves a batching and caching strategy, adding additional complexity to your GraphQL application.
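
To make the n+1 problem concrete, here is a minimal sketch of hand-written resolvers for the business reviews domain we’ll use later in this article. Note that db stands in for a hypothetical data-access layer; it is not part of any real library:

// Hand-written resolvers, sketched for illustration only.
// "db" is an assumed data-access layer, not a real library API.
const resolvers = {
  Query: {
    // One database request to fetch the list of businesses...
    businesses: async () => db.findAllBusinesses(),
  },
  Business: {
    // ...plus one more request per business to resolve its reviews.
    // For n businesses, that is n + 1 database round trips in total.
    reviews: async (business) => db.findReviewsByBusinessId(business.businessId),
  },
};

The usual workaround is a batching and caching layer such as DataLoader, which is precisely the additional complexity mentioned above.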

Doubling Down On GraphQL-First Development

Originally, the term GraphQL-First Development described a collaborative process. Frontend and backend teams would agree on a GraphQL schema, then go to work independently building their respective pieces of the codebase. Database integrations extend the idea of GraphQL-First Development by applying this concept to the database as well: GraphQL type definitions can now drive the database.

You can find the full code examples presented here on GitHub.

Let’s say you’re building a business reviews application where you want to keep track of businesses, users, and user reviews. GraphQL type definitions to describe this API might look something like this:

type Business {
  businessId: ID!
  name: String!
  city: String!
  state: String!
  address: String!
  location: Point!
  reviews: [Review!]! @relationship(type: "REVIEWS", direction: IN)
  categories: [Category!]!
    @relationship(type: "IN_CATEGORY", direction: OUT)
}

type User {
  userID: ID!
  name: String!
  reviews: [Review!]! @relationship(type: "WROTE", direction: OUT)
}

type Review {
  reviewId: ID!
  stars: Float!
  date: Date!
  text: String
  user: User! @relationship(type: "WROTE", direction: IN)
  business: Business! @relationship(type: "REVIEWS", direction: OUT)
}

type Category {
  name: String!
  businesses: [Business!]!
    @relationship(type: "IN_CATEGORY", direction: IN)
}

Note the use of the GraphQL schema directive @relationship in our type definitions. GraphQL schema directives are the language’s built-in extension mechanism and key components for extending and configuring GraphQL APIs — especially with database integrations like Neo4j GraphQL Library. In this case, the @relationship directive encodes the relationship type and direction (in or out) for pairs of nodes in the database.

Type definitions are then used to define the property graph data model in Neo4j. Instead of maintaining two schemas (one for our database and another for our API), you can now use type definitions to define both the API and the database’s data model. Furthermore, since Neo4j is schema-optional, using GraphQL to drive the database adds a layer of type safety to your application.

From GraphQL Type Definitions To Complete API Schemas

In GraphQL, you use fields on special types (Query, Mutation, and Subscription) to define the entry points for the API. In addition, you may want to define field arguments that can be passed at query time, for example, for sorting or filtering. Neo4j GraphQL Library handles this by creating entry points in the GraphQL API for create, read, update, and delete operations for each type, as well as field arguments for sorting and filtering.

Let’s look at some examples. For our business reviews application, suppose you want to show a list of businesses sorted alphabetically by name. Neo4j GraphQL Library automatically adds field arguments to accomplish just this.

{
  businesses(options: { limit: 10, sort: { name: ASC } }) {
    name
  }
}

Perhaps you want to allow the users to filter this list of businesses by searching for companies by name or keyword. The where argument handles this kind of filtering:

{
  businesses(where: { name_CONTAINS: "Brew" }) {
    name
    address
  }
}

You can then combine these filter arguments to express very complex operations. Say you want to find businesses that are in either the Coffee or Breakfast category and filter for reviews containing the keyword “breakfast sandwich:”

{
  businesses(
    where: {
      OR: [
        { categories_SOME: { name: "Coffee" } }
        { categories_SOME: { name: "Breakfast" } }
      ]
    }
  ) {
    name
    address
    reviews(where: { text_CONTAINS: "breakfast sandwich" }) {
      stars
      text
    }
  }
}

Using location data, for example, you can even search for businesses within 1 km of your current location:

{
  businesses(
    where: {
      location_LT: {
        distance: 1000
        point: { latitude: 37.563675, longitude: -122.322243 }
      }
    }
  ) {
    name
    address
    city
    state
  }
}

As you can see, this functionality is extremely powerful, and the generated API can be configured through the use of GraphQL schema directives.

We Don’t Need No Stinking Resolvers

As we noted earlier, GraphQL server implementations require resolver functions where the logic for interacting with the data layer lives. With database integrations such as Neo4j GraphQL Library, resolvers are generated for you at query time for translating arbitrary GraphQL requests into singular, encapsulated database queries. This is a huge developer productivity win (we don’t have to write boilerplate data fetching code — yay!). It also addresses the n+1 query problem by making a single round-trip request to the database.

Moreover, graph databases like Neo4j are optimized for exactly the kind of nested graph data traversals commonly expressed in GraphQL. Let’s see this in action. Once you’ve defined your GraphQL type definitions, here’s all the code necessary to spin up your fully functional GraphQL API:

const { ApolloServer } = require("apollo-server");
const neo4j = require("neo4j-driver");
const { Neo4jGraphQL } = require("@neo4j/graphql");

// Connect to your Neo4j instance.
const driver = neo4j.driver(
  "neo4j+s://my-neo4j-db.com",
  neo4j.auth.basic("neo4j", "letmein")
);

// Pass our GraphQL type definitions and Neo4j driver instance.
const neoSchema = new Neo4jGraphQL({ typeDefs, driver });

// Generate an executable GraphQL schema object and start
// Apollo Server.
neoSchema.getSchema().then((schema) => {
  const server = new ApolloServer({
    schema,
  });
  server.listen().then(({ url }) => {
    console.log(`GraphQL server ready at ${url}`);
  });
});

That’s it! No resolvers.

Extend GraphQL With The Power Of Cypher

So far, we’ve only been talking about basic create, read, update, and delete operations. How can you handle custom logic with Neo4j GraphQL Library?

Let’s say you want to show recommended businesses to your users based on their search or review history. One way would be to implement your own resolver function with the logic for generating those personalized recommendations built in. Yet there’s a better way to maintain the one-to-one, GraphQL-to-database operation performance guarantee: You can leverage the power of the Cypher query language using the @cypher GraphQL schema directive to define your custom logic within your GraphQL type definitions.

Cypher is an extremely powerful language that enables you to express complex graph patterns using ASCII-art-like declarative syntax. I won’t go into detail about Cypher in this article, but let’s see how you could accomplish our personalized recommendation task by adding a new field to your GraphQL type definitions:

extend type Business {
  recommended(first: Int = 1): [Business!]!
    @cypher(
      statement: """
        MATCH (this)<-[:REVIEWS]-(:Review)<-[:WROTE]-(u:User)
        MATCH (u)-[:WROTE]->(:Review)-[:REVIEWS]->(rec:Business)
        WITH rec, COUNT(*) AS score
        RETURN rec ORDER BY score DESC LIMIT $first
      """
    )
}

Here, the Business type has a recommended field, which uses the Cypher query defined above to show recommended businesses whenever requested in the GraphQL query. You didn’t need to write a custom resolver to accomplish this. Neo4j GraphQL Library is still able to generate a single database request even when using a custom recommended field.

GraphQL Database Integrations Under The Hood

GraphQL database integrations like Neo4j GraphQL Library are powered by the GraphQLResolveInfo object. This object is passed to all resolvers, including the ones generated for us by Neo4j GraphQL Library. It contains information about both the GraphQL schema and GraphQL operation being resolved. By closely inspecting this object, GraphQL database integrations can generate database queries at the time queries are placed.
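
To give a flavor of what that inspection looks like, here is a deliberately simplified sketch of a custom resolver reading the requested fields from the info object. A real integration traverses the full selection set, including nested fields and fragments, and context.db is again a hypothetical data-access layer:

const resolvers = {
  Query: {
    businesses: (parent, args, context, info) => {
      // The selection set lists exactly the fields the client asked for,
      // which is what allows a single database query to be built up front.
      const requestedFields = info.fieldNodes[0].selectionSet.selections.map(
        (selection) => selection.name.value
      );
      console.log(requestedFields); // e.g. ["name", "address"]
      return context.db.findBusinesses({ fields: requestedFields });
    },
  },
};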

If you’re interested, I recently gave a talk at GraphQL Summit that goes into much more detail.

Powering Low-Code, Open Source-Powered GraphQL Tools

An open-source library that works with any JavaScript GraphQL implementation can conceivably power an entire ecosystem of low-code GraphQL tools. Collectively, these tools leverage the functionality of Neo4j GraphQL Library to help make it easier for you to build, test, and deploy GraphQL APIs backed by a real graph database.

For example, GraphQL Mesh uses Neo4j GraphQL Library to enable Neo4j as a data source for data federation. Don’t want to write the code necessary to build a GraphQL API for testing and development? The Neo4j GraphQL Toolbox is an open-source, low-code web UI that wraps Neo4j GraphQL Library. This way, it can generate a GraphQL API from an existing Neo4j database with a single click.

Where From Here

If building a GraphQL API backed by a native graph database sounds interesting or at all helpful for the problems you’re trying to solve as a developer, I would encourage you to give the Neo4j GraphQL Library a try. Also, the Neo4j GraphQL Library landing page is a good starting point for documentation, further examples, and comprehensive workshops.

I’ve also written a book Full Stack GraphQL Applications, published by Manning, that covers this topic in much more depth. My book covers handling authorization, working with the frontend application, and using cloud services like Auth0, Netlify, AWS Lambda, and Neo4j Aura to deploy a full-stack GraphQL application. In fact, I’ve built out the very business reviews application from this article as an example in the book! Thanks to Neo4j, this book is now available as a free download.

Last but not least, I will be presenting a live session entitled “Making Sense of Geospatial Data with Knowledge Graphs” during the NODES 2022 virtual conference on Wednesday, November 16, produced by Neo4j. Registration is free to all attendees.

]]>
hello@smashingmagazine.com (William Lyon)
<![CDATA[On The Edge Of November (2022 Desktop Wallpapers Edition)]]> https://smashingmagazine.com/2022/10/desktop-wallpaper-calendars-november-2022/ https://smashingmagazine.com/2022/10/desktop-wallpaper-calendars-november-2022/ Mon, 31 Oct 2022 09:00:00 GMT November tends to be a rather gray month in many parts of the world. So what better remedy could there be than some colorful inspiration? To bring some good vibes to your desktops and home screens, artists and designers from across the globe once again tickled their creativity and designed beautiful and inspiring wallpapers to welcome the new month.

This monthly wallpapers challenge has been going on for more than eleven years already, and we are very thankful to everyone who has put their creative skills to the test and contributed their artworks to it — back in the early days, just like today.

The wallpapers in this collection all come in versions with and without a calendar for November 2022 and can be downloaded for free. As a little bonus goodie, we also compiled some timeless treasures from past November editions at the end of this post for you. Enjoy!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. It is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.
Anbani

“Anbani means alphabet in Georgian. The letters that grow on that tree are the Georgian alphabet. It’s very unique!” — Designed by Vlad Gerasimov from Georgia.

Cozy Autumn Cups And Cute Pumpkins

“Autumn coziness, which is created by fallen leaves, pumpkins, and cups of cocoa, inspired our designers for this wallpaper. You can find more calendar options in our collection.” — Designed by MasterBundles from Ukraine.

A Jelly November

“Been looking for a mysterious, gloomy, yet beautiful desktop wallpaper for this winter season? We’ve got you, as this month’s calendar marks Jellyfish Day. On November 3rd, we celebrate these unique, bewildering, and stunning marine animals. Besides adorning your screen, we’ve got you covered with some jellyfish fun facts — they aren’t really fish, they need very little oxygen, eat a broad diet, and shrink in size when food is scarce. Now that’s some tenacity to look up to.” — Designed by PopArt Studio from Serbia.

Snoop Dog

Designed by Ricardo Gimenes from Sweden.

Sunset On The Mississippi

“After a long day, a walk along the Mississippi renews our soul, and more, if we do it in great company.” — Designed by Veronica Valenzuela Jimenez from Spain.

No Shave November

“The goal of No-Shave November is to grow awareness by embracing our hair, which many cancer patients lose, and letting it grow wild and free. Donate the money you typically spend on shaving and grooming to educate about cancer prevention, save lives, and aid those fighting the battle.” — Designed by ThemeSelection from India.

Star Wars

Designed by Ricardo Gimenes from Sweden.

NOvember

“I created simple geometric lines which can refer to the month’s name. I chose sweet colors so the letter ‘o’ looks like a donut. It’s a nightmare, NO donut for NOvember!” — Designed by Philippe Brouard from France.

Oldies But Goodies

Umbrellas, autumn winds, mushrooms, and, well, cats, of course — a lot of things have inspired the design community to design a November wallpaper in the years we’ve been running our monthly series. Below you’ll find a selection of oldies but goodies from the archives. Please note that these wallpapers don’t come with a calendar.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

The Kind Soul

“Kindness drives humanity. Be kind. Be humble. Be humane. Be the best of yourself!” — Designed by Color Mean Creative Studio from Dubai.

Me And The Key Three

“This wallpaper is based on screenshots from my latest browser game (I’m an indie games designer).” — Designed by Bart Bonte from Belgium.

Time To Give Thanks

Designed by Glynnis Owen from Australia.

Tempestuous November

“By the end of autumn, ferocious Poseidon will part from tinted clouds and timid breeze. After this uneven clash, the sky once more becomes pellucid just in time for imminent luminous snow.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Moonlight Bats

“I designed some Halloween characters and then this idea came to my mind — a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle from Germany.

Mushroom Season

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

On The Edge Of Forever

“November has always reminded me of the famous Guns N’ Roses song, so I’ve decided to look at its meaning from a different perspective. The story in my picture takes place somewhere in space, where a young guy beholds a majestic meteor shower and wonders about the mysteries of the universe.” — Designed by Aliona Voitenko from Ukraine.

Sad Kitty

Designed by Ricardo Gimenes from Sweden.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

Hello World, Happy November

“I often read messages at Smashing Magazine from the people in the southern hemisphere ‘it’s spring, not autumn!’ so I’d like to design a wallpaper for the northern and the southern hemispheres. Here it is, northerners and southerns, hope you like it!” — Designed by Agnes Swart from the Netherlands.

Outer Space

“This November, we are inspired by the nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Captain’s Home

Designed by Elise Vanoorbeek (Doud) from Belgium.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Deer Fall, I Love You

Designed by Maria Porter from the United States.

Sailing Sunwards

“There’s some pretty rough weather coming up these weeks. Thinking about November makes me want to keep all the warm thoughts in mind. I’d like to wish everyone a cozy winter.” — Designed by Emily Trbl. Kunstreich from Germany.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Hold On

“We have to acknowledge that some things are inevitable, like winter. Let’s try to hold on until we can, and then embrace the beautiful season.” — Designed by Igor Izhik from Canada.

The Collection Of Birds

“The collection of birds are my travels. At each destination I buy a wood, bronze, stone bird, anything the local bazaars sell. I have all gathered at a modest vitrine in my house. I have so much loved my collection, that, after taking pictures of them, I designed each one, then created a wallpaper and overdressed a wall of my living room. Now my thought is making them as a desktop wallpaper and give them to you as a gift.” — Designed by Natasha Kamou from Greece.

November Fun

Designed by Xenia Latii from Germany.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Create Advanced Animations With CSS]]> https://smashingmagazine.com/2022/10/advanced-animations-css/ https://smashingmagazine.com/2022/10/advanced-animations-css/ Sat, 29 Oct 2022 12:00:00 GMT We surf the web daily, and as developers, we tend to notice subtle details on a website. The one thing I take note of all the time is how smooth the animations on a website are. Animation is great for UX and design purposes. You can make an interactive website that pleases the visitor and makes them remember your website.

Creating advanced animations sounds like a hard topic, but the good news is, in CSS, you can stack multiple simple animations after each other to create a more complex one!

In this blog post, you will learn the following:

  • What cubic beziers are and how they can be used to create a “complex” animation in just one line of CSS;
  • How to stack animations after each other to create an advanced animation;
  • How to create a rollercoaster animation by applying the two points you learned above.

Note: This article assumes that you have basic knowledge of CSS animations. If you don’t, please check out this link before proceeding with this article.

Cubic Beziers: What Are They?

The cubic-bezier function in CSS is an easing function that gives you complete control of how your animation behaves with respect to time. Here is the official definition:

A cubic Bézier easing function is a type of easing function defined by four real numbers that specify the two control points, P1 and P2, of a cubic Bézier curve whose end points P0 and P3 are fixed at (0, 0) and (1, 1) respectively. The x coordinates of P1 and P2 are restricted to the range [0, 1].

Note: If you want to learn more about easing functions, you can check out this article. It goes behind the scenes of how linear, cubic-bezier, and staircase functions work!

But What Is An Easing Function?

Let’s Start With A Linear Curve

Imagine two points P0 and P1, where P0 is the starting point of the animation and P1 is the ending point. Now imagine another point moving linearly between the two points as follows:

Source: Wikipedia

This is called a linear curve! It is the simplest animation out there, and you probably used it before when you started learning CSS.

Next Up: The Quadratic Bezier Curve

Imagine you have three points: P0, P1 and P2. You want the animation to move from P0 to P2. In this case, P1 is a control point that controls the curve of the animation.

The idea of the quadratic bezier is as follows:

  1. Connect imaginary lines between P0 and P1 and between P1 and P2 (represented by the gray lines).
  2. Point Q0 moves along the line between P0 and P1. At the same time, Point Q1 moves along the line between P1 and P2.
  3. Connect an imaginary line between Q0 and Q1 (represented by the green line).
  4. At the same time Q0 and Q1 start moving, the point B starts moving along the green line. The path that point B takes is the animation path.
Source: Wikipedia

Note that Q0, Q1 and B do not move with the same velocity. They must all start at the same time and finish their paths at the same time as well. So each point moves with the appropriate velocity based on the length of the line it moves along.

Finally: The Cubic Bezier Curve

The cubic bezier curve consists of 4 points: P0, P1, P2 and P3. The animation starts at P0 and ends at P3. P1 and P2 are our control points.

The cubic bezier works as follows:

  1. Connect imaginary lines between (P0, P1), (P1, P2) and (P2, P3). This is represented by the gray lines.
  2. Points Q0, Q1 and Q2 move along the lines (P0, P1), (P1, P2) and (P2, P3) respectively.
  3. Connect imaginary lines between (Q0, Q1) and (Q1, Q2). They are represented by the green lines.
  4. Points R0 and R1 move along the lines (Q0, Q1) and (Q1, Q2) respectively.
  5. Connect the line between R0 and R1 (represented by the blue line).
  6. Finally, Point B moves along the line connecting between R0 and R1. This point moves along the path of the animation.
Source: Wikipedia

If you want to have a better feel for how cubic beziers work, I recommend checking out this desmos link. Play around with the control points and check how the animation changes through time. (Note that the animation in the link is represented by the black line.)
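
If you prefer code to dragging control points around, the whole construction above collapses into one closed-form formula. Here is a small JavaScript sketch, for illustration only, that evaluates a cubic bezier curve at a given time t. For CSS easing curves, P0 is fixed at (0, 0) and P3 at (1, 1):

// Evaluate a cubic bezier curve at t, where 0 <= t <= 1.
// Each point is an object of the form { x, y }.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  // B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
  const blend = (a, b, c, d) =>
    u * u * u * a + 3 * u * u * t * b + 3 * u * t * t * c + t * t * t * d;
  return {
    x: blend(p0.x, p1.x, p2.x, p3.x),
    y: blend(p0.y, p1.y, p2.y, p3.y),
  };
}

// For example, sample the midpoint of an ease-in-like curve:
console.log(cubicBezier({ x: 0, y: 0 }, { x: 0.55, y: 0 }, { x: 0.2, y: 1 }, { x: 1, y: 1 }, 0.5));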

Stacking Animations

Big animations with lots of steps can be broken down into multiple small animations. You can achieve that by adding the animation-delay property to your CSS. Calculating the delay is simple; you add up the time of all the animations before the one you are calculating the animation delay for.

For example:

animation: movePointLeft 4s linear forwards, movePointDown 3s linear forwards;

Here, we have two animations, movePointLeft and movePointDown. The animation delay for movePointLeft will be zero because it is the animation we want to run first. However, the animation delay for movePointDown will be four seconds because movePointLeft will be done after that time.

Therefore, the animation-delay property will be as follows:

animation-delay: 0s, 4s;

Note that if you have two or more animations starting at the same time, their animation delay will be the same. In addition, when you calculate the animation delay for the upcoming animations, you will consider them as one animation.

For example:

animation: x 4s linear forwards, y 4s linear forwards, jump 2s linear forwards;

Assume we want to start x and y simultaneously. In this case, the animation delay for both x and y will be zero, while the animation delay for the jump animation will be four seconds (not eight!).

animation-delay: 0s, 0s, 4s;
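
If you would rather not do this bookkeeping by hand, the rule is easy to express in a few lines of JavaScript. Here is a small sketch, for illustration only, where each inner array holds the durations (in seconds) of animations that start together:

// Compute animation delays from groups of simultaneous animations.
function computeDelays(groups) {
  const delays = [];
  let elapsed = 0;
  for (const group of groups) {
    // Every animation in a group starts at the same time...
    group.forEach(() => delays.push(elapsed));
    // ...and the next group starts once the longest one in this group ends.
    elapsed += Math.max(...group);
  }
  return delays;
}

console.log(computeDelays([[4, 4], [2]])); // [0, 0, 4], matching the example above
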
Creating The Rollercoaster

Now that we have the basics covered, it’s time to apply what we learned!

Understanding The Animation

The rollercoaster path consists of three parts:

  1. The sliding part,
  2. The loop part,
  3. Some connecting animation to create horizontal space between the two parts above.

Setting Things Up

We will start by creating a simple ball that will be our “cart” for the rollercoaster.

1. Add this to the body of your new HTML file:

<div id="the-cart" class="cart"></div>

2. Add this to your CSS file:

.cart {
  background-color: rgb(100, 210, 128);
  height: 50px;
  width: 50px;
  border: 1px solid black;
  border-radius: 50px;
  position: absolute;
  left: 10vw;
  top: 30vh;
}

I’ll use viewport width (vw) and viewport height (vh) units to make the animation responsive. You are free to use any units you want.

The Sliding Part

Creating the part where the ball slides can be done using the cubic-bezier function! The animation is made up of 2 animations, one along the x-axis and the other along the y-axis. The x-axis animation is a normal linear animation along the x-axis. We can define its keyframes as follows:

@keyframes x {
  to {
    left: 40vw;
  }
}

Add it to your animation property in the ball path as follows:

animation: x 4s linear forwards

The y-axis animation is the one where we will use the cubic-bezier function. Let’s first define the keyframes of the animation. We want the difference between the starting and ending points to be so small that the ball reaches almost the same height.

@keyframes y {
  to {
    top: 29.99vh;
  }
}

Now let’s think about the cubic-bezier function. We want our path to move slowly to the right first, and then when it slides, it should go faster.

  • Moving slowly to the right means that P1 will be along the x-axis. So, we know it is at (V, 0).
    • We need to choose a suitable V that makes our animation go slowly to the right but not too much so that it takes up the whole space. In this case, I found that 0.55 fits best.
  • To achieve the sliding effect, we need to move P2 down the y-axis (negative value) so P2=(X, -Y).
    • Y should be a big value. In this case, I chose Y=5000.
    • To get X, we know that our animation speed should be faster when sliding and slower when going up again. So, the closer X is to zero, the steeper the animation will be at the slide. In this case, let X = 0.2.

Now you have your cubic-bezier function: it will be cubic-bezier(0.55, 0, 0.2, -5000).

Let’s add keyframes to our animation property:

animation: x 4s linear forwards,
    y 4s cubic-bezier(0.55, 0, 0.2, -5000) forwards;

This is the first part of our animation, so its delay is zero. We still add the animation-delay property because the animations that follow will start at different times than the first one.

animation-delay: 0s, 0s;

See the Pen Rollercoaster sliding part [forked] by Yosra Emad.

Adding Horizontal Space

Before making the loop, the ball should move along the x-axis for a short while, so there is space between both animations. So, let’s do that!

  • Define the keyframes:
@keyframes x2 {
  to {
    left: 50vw;
  }
}
  • Add it to the animation property:
animation: x 4s linear forwards,
    y 4s cubic-bezier(0.55, 0, 0.2, -5000) forwards, x2 0.5s linear forwards;

This animation should start after the sliding animation, and the sliding animation takes four seconds; thus, the animation delay will be four seconds:

animation-delay: 0s, 0s, 4s;

See the Pen Rollercoaster horizontal space [forked] by Yosra Emad.

The Loop Part

To create a circle (loop) in CSS, we need to move the circle to the center of the loop and start the animation from there. We want the circle’s radius to be 10vh (roughly 100px here), so we will change the circle position to top: 20vh (the original 30vh minus the desired 10vh radius). However, this needs to happen after the sliding animation is done, so we will create another animation with a zero-second duration and add a suitable animation delay.

  • Create the keyframes:
@keyframes pointOfCircle {
  to {
    top: 20vh;
  }
}
  • Add this to the list of animations with duration = 0s:
animation: x 4s linear forwards,
    y 4s cubic-bezier(0.55, 0, 0.2, -5000) forwards, x2 0.5s linear forwards,
    pointOfCircle 0s linear forwards;
  • Add the animation delay, which will be 4.5s:
animation-delay: 0s, 0s, 4s, 4.5s;

The Loop Itself

To create a loop animation:

  • Create a keyframe that moves the ball back to the old position and then rotates the ball:
@keyframes loop {
  from {
    transform: rotate(0deg) translateY(10vh) rotate(0deg);
  }
  to {
    transform: rotate(-360deg) translateY(10vh) rotate(360deg);
  }
}
  • Add the loop keyframes to the animation property:
animation: x 4s linear forwards,
    y 4s cubic-bezier(0.55, 0, 0.2, -5000) forwards, x2 0.5s linear forwards,
    pointOfCircle 0s linear forwards, loop 3s linear forwards;
  • Add the animation delay, which will also be 4.5 seconds here:
animation-delay: 0s, 0s, 4s, 4.5s, 4.5s;

See the Pen Rollercoaster loop [forked] by Yosra Emad.

Adding Horizontal Space (Again)

We’re almost done! We just need to move the ball after the animation along the x-axis so that the ball doesn’t stop exactly after the loop the way it does in the picture above.

  • Add the keyframes:
@keyframes x3 {
  to {
    left: 70vw;
  }
}
  • Add the keyframes to the animation property:
animation: x 4s linear forwards,
    y 4s cubic-bezier(0.55, 0, 0.2, -5000) forwards, x2 0.5s linear forwards,
    pointOfCircle 0s linear forwards, loop 3s linear forwards,
    x3 2s linear forwards;
  • Adding the suitable delay, here it will be 7.5s:
animation-delay: 0s, 0s, 4s, 4.5s, 4.5s, 7.5s;
The Final Output

See the Pen Rollercoaster Final [forked] by Yosra Emad.

Conclusion

In this article, we covered how to combine multiple keyframes to create a complex animation path. We also covered cubic beziers and how to use them to create your own easing function. I would recommend going on and creating your own animation path to get your hands dirty with animations. If you need any help or want to give feedback, you’re more than welcome to send a message to any of the links here. Have a wonderful day/night!

]]>
hello@smashingmagazine.com (Yosra Emad)
<![CDATA[Motion Controls In The Browser]]> https://smashingmagazine.com/2022/10/motion-controls-browser/ https://smashingmagazine.com/2022/10/motion-controls-browser/ Fri, 28 Oct 2022 14:00:00 GMT In this article, I’m going to explain how to implement motion controls in the browser. That means you’ll be able to create an application where you can move your hand and make gestures, and the elements on the screen will respond.

Here’s an example:

Here’s some boilerplate to get started (adapted from MediaPipe’s JavaScript API example):

<script src="https://cdn.jsdelivr.net/npm/@mediapipe/camera_utils/camera_utils.js" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/control_utils/control_utils.js" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/drawing_utils/drawing_utils.js" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/hands/hands.js" crossorigin="anonymous"></script>

<video class="input_video"></video>
<canvas class="output_canvas" width="1280px" height="720px"></canvas>

<script>
const videoElement = document.querySelector('.input_video');
const canvasElement = document.querySelector('.output_canvas');
const canvasCtx = canvasElement.getContext('2d');

function onResults(handData) {
  drawHandPositions(canvasElement, canvasCtx, handData);
}

function drawHandPositions(canvasElement, canvasCtx, handData) {
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  canvasCtx.drawImage(
      handData.image, 0, 0, canvasElement.width, canvasElement.height);
  if (handData.multiHandLandmarks) {
    for (const landmarks of handData.multiHandLandmarks) {
      drawConnectors(canvasCtx, landmarks, HAND_CONNECTIONS,
                     {color: '#00FF00', lineWidth: 5});
      drawLandmarks(canvasCtx, landmarks, {color: '#FF0000', lineWidth: 2});
    }
  }
  canvasCtx.restore();
}

const hands = new Hands({locateFile: (file) => {
  return `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`;
}});
hands.setOptions({
  maxNumHands: 1,
  modelComplexity: 1,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5
});
hands.onResults(onResults);

const camera = new Camera(videoElement, {
  onFrame: async () => {
    await hands.send({image: videoElement});
  },
  width: 1280,
  height: 720
});
camera.start();

</script>

The above code does the following:

  • Load the library code;
  • Start recording the video frames;
  • When the hand data comes in, draw the hand landmarks on a canvas.

Let’s take a closer look at the handData object since that’s where the magic happens. Inside handData is multiHandLandmarks, a collection of 21 coordinates for the parts of each hand detected in the video feed. Here’s how those coordinates are structured:

{
  multiHandLandmarks: [
    // First detected hand.
    [
      {x: 0.4, y: 0.8, z: 4.5},
      {x: 0.5, y: 0.3, z: -0.03},
      // ...etc.
    ],

    // Second detected hand.
    [
      {x: 0.4, y: 0.8, z: 4.5},
      {x: 0.5, y: 0.3, z: -0.03},
      // ...etc.
    ],

    // More hands if other people participate.
  ]
}

A couple of notes:

  • The first hand doesn’t necessarily mean the right or the left hand; it’s just whichever one the application happens to detect first. If you want to get a specific hand, you’ll need to check which hand is being detected using handData.multiHandedness[0].label and potentially swap the values if your camera isn’t mirrored (see the sketch after this list).
  • For performance reasons, you can restrict the maximum number of hands to track, which we did earlier by setting maxNumHands: 1.
  • The coordinates are set on a scale from 0 to 1 based on the size of the canvas.
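
Since the first note above trips people up, here is a small sketch of picking out one specific hand by its handedness label. It assumes a mirrored, selfie-style camera, where your physical right hand is typically reported as “Left”; flip the label if your setup differs:

// Select the landmarks of a specific hand from the results.
// Assumption: a mirrored selfie-style camera, so the physical right hand
// is usually labeled "Left" by the model.
function getRightHandLandmarks(handData) {
  if (!handData.multiHandLandmarks || !handData.multiHandedness) { return null; }
  const index = handData.multiHandedness.findIndex(
    (classification) => classification.label === 'Left'
  );
  return index === -1 ? null : handData.multiHandLandmarks[index];
}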

Here’s a visual representation of the hand coordinates:

Now that you have the hand landmark coordinates, you can build a cursor to follow your index finger. To do that, you’ll need to get the index finger’s coordinates.

You could use the array directly like this handData.multiHandLandmarks[0][5], but I find that hard to keep track of, so I prefer labeling the coordinates like this:

const handParts = {
  wrist: 0,
  thumb: { base: 1, middle: 2, topKnuckle: 3, tip: 4 },
  indexFinger: { base: 5, middle: 6, topKnuckle: 7, tip: 8 },
  middleFinger: { base: 9, middle: 10, topKnuckle: 11, tip: 12 },
  ringFinger: { base: 13, middle: 14, topKnuckle: 15, tip: 16 },
  pinky: { base: 17, middle: 18, topKnuckle: 19, tip: 20 },
};

And then you can get the coordinates like this:

const firstDetectedHand = handData.multiHandLandmarks[0];
const indexFingerCoords = firstDetectedHand[handParts.indexFinger.middle];

I found cursor movement more pleasant to use with the middle part of the index finger rather than the tip because the middle is more steady.

Now you’ll need to make a DOM element to use as a cursor. Here’s the markup:

<div class="cursor"></div>

And here are the styles:

.cursor {
  height: 0px;
  width: 0px;
  position: absolute;
  left: 0px;
  top: 0px;
  z-index: 10;
  transition: transform 0.1s;
}

.cursor::after {
  content: '';
  display: block;
  height: 50px;
  width: 50px;
  border-radius: 50%;
  position: absolute;
  left: 0;
  top: 0;
  transform: translate(-50%, -50%);
  background-color: #0098db;
}

A few notes about these styles:

  • The cursor is absolutely positioned so it can be moved without affecting the flow of the document.
  • The visual part of the cursor is in the ::after pseudo-element, and the transform makes sure the visual part of the cursor is centered around the cursor’s coordinates.
  • The cursor has a transition to smooth out its movements.

Now that we’ve created a cursor element, we can move it by converting the hand coordinates into page coordinates and applying those page coordinates to the cursor element.

function getCursorCoords(handData) {
  const { x, y, z } = handData.multiHandLandmarks[0][handParts.indexFinger.middle];
  const mirroredXCoord = -x + 1; /* due to camera mirroring */
  return { x: mirroredXCoord, y, z };
}

function convertCoordsToDomPosition({ x, y }) {
  return {
    x: `${x * 100}vw`,
    y: `${y * 100}vh`,
  };
}

function updateCursor(handData) {
  const cursorCoords = getCursorCoords(handData);
  if (!cursorCoords) { return; }
  const { x, y } = convertCoordsToDomPosition(cursorCoords);
  cursor.style.transform = `translate(${x}, ${y})`;
}

function onResults(handData) {
  if (!handData) { return; }
  updateCursor(handData);
}

Note that we’re using the CSS transform property to move the element rather than left and top. This is for performance reasons. When the browser renders a view, it goes through a sequence of steps. When the DOM changes, the browser has to start again at the relevant rendering step. The transform property responds quickly to changes because it is applied at the last step rather than one of the middle steps, and therefore the browser has less work to repeat.

Now that we have a working cursor, we’re ready to move on.

Detecting Gestures

The next step in our journey is to detect gestures, specifically pinch gestures.

First, what do we mean by a pinch? In this case, we’ll define a pinch as a gesture where the thumb and forefinger are close enough together.

To designate a pinch in code, we can look at when the x, y, and z coordinates of the thumb and forefinger have a small enough difference between them. “Small enough” can vary depending on the use case, so feel free to experiment with different ranges. Personally, I found 0.08, 0.08, and 0.11 to be comfortable for the x, y, and z coordinates, respectively. Here’s how that looks:

function isPinched(handData) {
  const fingerTip = handData.multiHandLandmarks[0][handParts.indexFinger.tip];
  const thumbTip = handData.multiHandLandmarks[0][handParts.thumb.tip];
  const distance = {
    x: Math.abs(fingerTip.x - thumbTip.x),
    y: Math.abs(fingerTip.y - thumbTip.y),
    z: Math.abs(fingerTip.z - thumbTip.z),
  };
  const areFingersCloseEnough = distance.x < 0.08 && distance.y < 0.08 && distance.z < 0.11;

  return areFingersCloseEnough;
}

It would be nice if that’s all we had to do, but alas, it’s never that simple.

What happens when your fingers are on the edge of a pinch position? If we’re not careful, the answer is chaos.

With slight finger movements as well as fluctuations in coordinate detection, our program can rapidly alternate between pinched and not pinched states. If you’re trying to use a pinch gesture to “pick up” an item on the screen, you can imagine how chaotic it would be for the item to rapidly alternate between being picked up and dropped.

In order to prevent our pinch gestures from causing chaos, we’ll need to introduce a slight delay before registering a change from a pinched state to an unpinched state or vice versa. This technique is called a debounce, and the logic goes like this:

  • When the fingers enter a pinched state, start a timer.
  • If the fingers have stayed in the pinched state uninterrupted for long enough, register a change.
  • If the pinched state gets interrupted too soon, stop the timer and don’t register a change.

The trick is that the delay must be long enough to be reliable but short enough to feel quick.

We’ll get to the debounce code soon, but first, we need to prepare by tracking the state of our gestures:

const OPTIONS = {
  PINCH_DELAY_MS: 60,
};

const state = {
  isPinched: false,
  pinchChangeTimeout: null,
};

Next, we’ll prepare some custom events to make it convenient to respond to gestures:

const PINCH_EVENTS = {
  START: 'pinch_start',
  MOVE: 'pinch_move',
  STOP: 'pinch_stop',
};

function triggerEvent({ eventName, eventData }) {
  const event = new CustomEvent(eventName, { detail: eventData });
  document.dispatchEvent(event);
}

Now we can write a function to update the pinched state:

function updatePinchState(handData) {
  const wasPinchedBefore = state.isPinched;
  const isPinchedNow = isPinched(handData);
  const hasPassedPinchThreshold = isPinchedNow !== wasPinchedBefore;
  const hasWaitStarted = !!state.pinchChangeTimeout;

  if (hasPassedPinchThreshold && !hasWaitStarted) {
    registerChangeAfterWait(handData, isPinchedNow);
  }

  if (!hasPassedPinchThreshold) {
    cancelWaitForChange();
    if (isPinchedNow) {
      triggerEvent({
        eventName: PINCH_EVENTS.MOVE,
        eventData: getCursorCoords(handData),
      });
    }
  }
}

function registerChangeAfterWait(handData, isPinchedNow) {
  state.pinchChangeTimeout = setTimeout(() => {
    state.isPinched = isPinchedNow;
    triggerEvent({
      eventName: isPinchedNow ? PINCH_EVENTS.START : PINCH_EVENTS.STOP,
      eventData: getCursorCoords(handData),
    });
  }, OPTIONS.PINCH_DELAY_MS);
}

function cancelWaitForChange() {
  clearTimeout(state.pinchChangeTimeout);
  state.pinchChangeTimeout = null;
}

Here's what updatePinchState() is doing:

  • If the fingers have passed the pinch threshold by starting or stopping a pinch, we’ll start a timer to wait and see if we can register a legitimate pinch state change.
  • If the wait is interrupted, that means the change was just a fluctuation, so we can cancel the timer.
  • However, if the timer is not interrupted, we can update the pinched state and trigger the correct custom change event, namely, pinch_start or pinch_stop.
  • If the fingers have not passed the pinch change threshold and are currently pinched, we can dispatch a custom pinch_move event.

We can run updatePinchState(handData) each time we get hand data so that we can put it in our onResults function like this:

function onResults(handData) {
  if (!handData) { return; }
  updateCursor(handData);
  updatePinchState(handData);
}

Now that we can reliably detect a pinch state change, we can use our custom events to define whatever behavior we want when a pinch is started, moved, or stopped. Here’s an example:

document.addEventListener(PINCH_EVENTS.START, onPinchStart);
document.addEventListener(PINCH_EVENTS.MOVE, onPinchMove);
document.addEventListener(PINCH_EVENTS.STOP, onPinchStop);

function onPinchStart(eventInfo) {
  const cursorCoords = eventInfo.detail;
  console.log('Pinch started', cursorCoords);
}

function onPinchMove(eventInfo) {
  const cursorCoords = eventInfo.detail;
  console.log('Pinch moved', cursorCoords);
}

function onPinchStop(eventInfo) {
  const cursorCoords = eventInfo.detail;
  console.log('Pinch stopped', cursorCoords);
}

Now that we’ve covered how to respond to movements and gestures, we have everything we need to build an application that can be controlled with hand motions.

Here are some examples:

See the Pen Beam Sword - Fun with motion controls! [forked] by Yaphi.

See the Pen Magic Quill - Air writing with motion controls [forked] by Yaphi.

I’ve also put together some other motion control demos, including movable playing cards and an apartment floor plan with movable images of the furniture, and I’m sure you can think of other ways to experiment with this technology.

Conclusion

If you’ve made it this far, you’ve seen how to implement motion controls with a browser and a webcam. You’ve read camera data using browser APIs, you’ve gotten hand coordinates via machine learning, and you’ve detected hand motions with JavaScript. With these ingredients, you can create all sorts of motion-controlled applications.

What use cases will you come up with? Let me know in the comments!

]]>
hello@smashingmagazine.com (Yaphi Berhanu)
<![CDATA[Typographic Hierarchies]]> https://smashingmagazine.com/2022/10/typographic-hierarchies/ https://smashingmagazine.com/2022/10/typographic-hierarchies/ Wed, 26 Oct 2022 15:00:00 GMT Simply defined, the concept of typographic hierarchies refers to the visual organization of content in terms of their relative importance. In other words, the manner in which we organize the text, the headers, the subheaders, the columns, the paragraphs, the callouts, and others on the page or space signify their importance.

That sounds easy enough, right? Yes, it does. The problem is that visually accomplishing this is more challenging than it sounds, especially for those unfamiliar with the nuances of typography. Everything in typography behaves like a row of dominoes: adjusting one element sets off a chain reaction of changes for the designer. That is why, when a client asks for a “small change,” it is never small and never linear. Typography is symbiotic. Each element contributes to the others, even in a very small way.

These two words, typographic and hierarchies, are not familiar concepts to those outside our field. In fact, even in the art and design field, fellow artists do not necessarily understand typographic hierarchy. The term typographic refers to matters related to typography: type choice, sizes, weights, how far apart or close together we set the letters, and so on. The term hierarchy refers to levels of priority or importance: what comes first, second, and third. Thus, when we put these two terms together, we mean arranging content in levels of importance with the intention of communicating to the reader.

Choosing typefaces, arranging content in terms of visual importance, and organizing elements (title, subtitles, body copy, images, space, and so on) on the page evoke responses from the reader. When things are in competition on a page, we might feel confused. We all have a sense of it, and we can even recall moments of disgust when we see a printed note with bloody type or a website in which the typography is all jumbled up. However, learning to use typography is elusive. It is a matter of constant practice and honing visual acumen.

While it is true that the advent of the computer in our field has expedited the design and printing process, it is also true that typographic proportions do not look the same online as in print. The relationship between the reader and their monitor differs from the relationship between the reader and anything printed, whether hand-held or seen at a distance.

To provide an example, let me share my experience with typography. Before becoming a designer, I graduated with a BA in Art Education. I understood color, research, composition, contrast, drawing, images, sketching, painting, and so on. When I went back to school to study design and specifically graphic design, I was lost.

My biggest challenge was that I could not see the letters as something other than the semantic symbols of language. Questions constantly flooded my mind. For instance, “What do you mean that the letters have a grid? What do you mean I am doing too much? And what is too much? How is this too big?” The questions were endless and excruciating. My beginner’s typography was, to put it mildly, a prime example of what not to do. I did not know any better, but I also did not understand any better.

My “aha” moment came when another instructor explained to me that typography was like auditioning for a part in a play that I wanted really badly. She suggested that I enunciate the words as if I was playing in the theater. Mind you, I had no experience in theater whatsoever but somehow, the idea resonated with me. It was then that I realized, in a very experiential way, that typography was the spoken language in visual form.

That, somehow, the letters, words, titles, typeface choices, size, weight, color, spacing — all conspired together to emanate a visual language. The page was the stage. The letters, words, titles, paragraphs, and so on were performers on that stage. Another instructor used to say that the typographic hierarchy was like a ballet company where only one was the prima ballerina, and everything else bowed to her. Having a cultural background where music and dance were vital, I started to get the idea.

After I made it into graduate school, my exploration of typography intensified, leading to my thesis work. My graduate thesis combined two things that were special to me: dance, specifically ballroom dancing, and my newfound love for typography. To develop a body of work for my thesis, I used one of my classes’ undergraduate projects — Typographic Hierarchies. Since then, I have been teaching typography and hierarchy using this project.

The typographic hierarchies project is based on two books by professor Rob Carter from Virginia Commonwealth University. These books are Typographic Design: Form and Communication and Experimental Typography. The latter is out of print now. The objective of the project is to isolate six basic variables to establish a typographic hierarchy. These variables are:

  • Proximity or space,
  • Weight,
  • Size,
  • Size and weight,
  • Color,
  • Visual punctuation.

When we look at a typographic composition, a poster, a brochure, or a web page, what we see is the application of these variables together. We don’t often think of dissecting the composition in front of us to say, “How many sizes are there?” Even as designers, we are not accustomed to dissecting design work. Imagine a non-designer, even less, right? Yet, when we come to school or start as designers, we are all non-designers and need to retrain our brains to look at content as a relationship of shapes in a context, format, or space.

In this article, we will discuss the variables mentioned above, learn how to look at each differently, and in turn, design pieces by intentionally modifying each variable to create a typographic hierarchy effectively. Let’s get started with proximity or space.

Note: These are studies done in a university class intended to expose the students to a series of compositional exercises. These exercises will provide students with a skill set to innovate and push the boundaries when appropriate. It will also help them to acquire a good understanding of compositional parameters. Therefore, use your discernment and consider the project’s needs when applying and/or breaking these principles and variables.

Proximity Or Space

This variable requires us to briefly discuss the grid. The grid is an underlying tool that helps us organize elements on a page. It is so foundational that there are entire books dedicated to it. For example, the book by designer and design educator Timothy Samara, titled Making and Breaking the Grid, is one of the most eloquent discussions of it.

A Short Discussion About The Grid

A grid is simply an underlying structure used to organize elements in a context. This context can be a page, printed or web, an app, a brochure, a poster, a book, a newspaper, a building, furniture, and so on. Though this article is not a study of the grid, it is important to understand that the variables we will learn work within a grid. A grid allows us to break up the space into modules or smaller chunks like pieces in a puzzle that must come together to create the bigger picture. There are usually two ways to approach the application of a grid: predetermined or improvisational (also known as a visual or linear association).

Predetermined Grid

A predetermined grid is the division of the space into a certain number of columns. There is even a one-column grid, also commonly called a manuscript grid (often seen in wedding invitations and perhaps on the first page of a magazine article).

We can keep adding columns to our grids and have two, three, four, five, and sometimes more. Software such as Adobe InDesign, Affinity Publisher, and others come equipped with the ability to determine what type of grid we want to use. It is usually easy to spot the grid used in a design piece. For example, if we look at a web page, we can usually spot the type of grid used — two, three, or four columns.
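
If you work on the web, a predetermined grid translates almost directly into CSS. Here is a minimal sketch of a three-column grid with gutters (the class name is illustrative):

/* A predetermined three-column grid with equal columns and gutters */
.page {
  display: grid;
  grid-template-columns: repeat(3, 1fr); /* three equal columns */
  column-gap: 1.5rem;                    /* the gutters between them */
}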

Perhaps the best examples of predetermined grids come from Modernist design and the Swiss Typography schools of thought.

Later on, Post Modern typography came along. Characterized by the juxtaposition of graphic elements, typography, and page use in a more organic way, it sought to find alternative typographic organizational arrangements. John Choi, a former student at NYUAD, wrote on the blog NYUAD Types of Art the following:

“Postmodern typography would be born out of the rejection of the modernist idea that certain forms, due to their inherent characteristics, are able to perform certain objective functions such as neutrality or legibility.”

As a result, the grid became a more organic and playful tool.

Improvisational Grid

As an alternative to a predetermined grid, an improvisational grid can be used. An improvisational grid is created when we lay down one element, perhaps in a very large size, and extend its lines to organize elements around it. Thus, visual alignments or associations are emphasized or highlighted by placing elements along invisible lines emanating from them. For example, the image below does not feature the traditional vertical and horizontal modules that are common in a column grid. The image and the pattern created for the Evince Diagnostics logo at the top are the foundation for the organization of the type on the banner.

It is one of the most fun ways to create hierarchy because it allows for playful and unexpected results. However, it calls for attention to detail and sensitivity to the composition as a whole. Thus, it is both easy and difficult to master. It is frequently achieved with a large letter, but it can also be done with images or graphics.

Now that we have a basic understanding of the grid, let’s discuss our first variable or principle of hierarchy — proximity — in more detail.

Proximity

Proximity refers to the relative distance between elements, right? An easy metaphor is to think of friends, close friends, and strangers. The closer the friend, the closer the distance. The stranger the person, the farther we stand from them. Our proximity or space shrinks or grows depending on our familiarity with things or people. Because it is usually easier for the students to refer to it as space, we will refer to proximity as space throughout the article.

When we discuss or think of space in a typographic hierarchy, we refer to things like space between letters, words, titles, paragraphs, margins, and how and where we place elements on the page.
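
On the web, each of those distances maps onto a CSS property. A quick sketch, with illustrative values, of the main space-related knobs:

/* The typographic spaces we can control in CSS */
p {
  letter-spacing: 0.01em;   /* space between letters */
  word-spacing: 0.05em;     /* space between words */
  line-height: 1.5;         /* space between lines */
  margin-block-end: 1.5em;  /* space between paragraphs */
}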

In order to really understand proximity or space, we need to set some limits:

  • All type has to be 8-12 point size depending on the typeface;
  • It all has to be one size (even the titles);
  • No color;
  • A grid should be used from two to five columns, or an improvisational grid can be used. Please note that though we discussed the use of an improvisational grid based on size, when we leave elements at the same size, an improvisational grid can be used based on space or alignments.

The goal of this variable is to explore only the distance between any elements we choose and where we place our paragraphs and titles. You might be wondering, “How does space work in relation to typographic hierarchies?” To answer this question, we will discuss some examples.

In the example above, we have a set of instructions, How to Fold a Crane, written by Chrissy Pk. As we can see, the columns of text are diagonally arranged. The grid, then, has been set before any other element has been placed on the page. By using diagonals, we create a sense of movement and energy around the composition.

Repetition of the title has been applied to create a sense of framing the page, and it serves to anchor the eye. Otherwise, we might feel that our eyes want to wander away from the page. Having the title repeated creates a kind of loop around the page and helps us keep our eyes contained. The type size is all consistent. The sense of movement and hierarchy comes from the title set in uppercase. To indicate each new step, instead of numbers or bullets, space and upper case letters in the first three words of the sentence are used.

Below are two analyses of the grid. The first one lets us see that the designer has probably divided the page into a four-column grid. In the second example, we can see that the diagonal grid has been applied over the four-column one.

To summarize what we see in this example:

  • We can use diagonal columns in place of vertical columns.
  • We can use uppercase to create a sense of hierarchy.
  • We can add spaces between items that follow a sequence instead of numbers or bullets.
  • We can repeat one element as long as it supports the purpose and conceptually keeps our eyes and mind focused on the subject.

In my experience, my students find that thinking of only the space or proximity is the hardest aspect of this study. But it is all about looking at the paragraphs, sentences, columns, and pages as shapes. If we think of each component in the example above as only shapes, we will see something like this below:

The page, space, and background, whether two- or three-dimensional, is a shape. It can be a rectangle in portrait or landscape orientation, something more circular, or something organic like the shape of a guitar, as in this book titled MTV Unplugged, First Edition by Sarah Maralkey, published in 1995:

The text in one of the spreads follows the gentle curve of the book:

If we consider the area we are using to organize our design as a shape, then the rest is a matter of subdividing that space in interesting ways. Thus, we always need to take the format into consideration.

Here is an interesting example of how to use a simple two-column grid:

As we move forward to the next variables, it is essential to note that how we treat the space will continue to be something we experiment with. We do not leave it behind. Let’s see how only changing the weight (bold versus regular + space) changes things around.

Weight

Weight refers to variations of the typeface such as bold, regular, italic, heavy, medium, and so on. In this variable, we keep the sizes all even. In other words, we do not change the size at all.

It is worth mentioning that a typeface with no weight options will not be helpful in our exploration, and neither will funky or heavily ornamental typefaces. Those are great for a single instance or for display purposes, such as a poster. However, in creating a hierarchy, it is best to stick to typefaces with well-proportioned shapes and multiple font options in their family.

In the image above, the layout is more traditional — a two-column grid with the text aligned to the left. The bold weight is used on the word Fold on the title and in the rest of the content each time the word Fold is used. This visual detail helps with establishing a conceptual and visual connection as well as a hierarchy. It is a visual reminder that these instructions are about learning to fold a crane.
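
In CSS terms, this kind of system is one rule applied consistently. A minimal sketch, assuming each occurrence of the key word is wrapped in a hypothetical .keyword span:

/* Sizes stay even; hierarchy comes from weight alone */
body {
  font-size: 1rem;
  font-weight: 400;
}
.keyword {
  font-weight: 700; /* e.g., every occurrence of the word "Fold" */
}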

In the following example, we have a much less traditional layout. The designer used a circular grid to subdivide the format or the space in the composition. The bold weight is more delicate here since the typeface is also more delicate. The text’s organization resembles a clock. The design requires more participation from the reader as they would need to turn the page around.

In addition to our first summary, we can add the following:

  • We can use organic shapes to subdivide the format.
  • We can follow a logical system to establish a visual hierarchy: bold a word and consistently bold the same word throughout the text.

Now, let’s move on to applying size but without weight.

Size

We understand that size refers to, well, sizes: how large or small the font is displayed. For the purposes of this exercise, we will limit ourselves to three sizes, and we will refer to them in categories (translated into a short CSS sketch after this list):

  • Body copy
    Depending on the typeface’s x-height, anywhere from 8 points to 12. Never over 12.
  • Titles
    Here you can have some fun and play with contrast — very, very large. Anything over 14 points is considered a display size, but you will find that it is still too small to make an impact.
  • Subheaders or accents
    Depending on what sizes you are using for the titles, you can select something in between the body copy size and the titles.
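
Here is that three-size system expressed as a CSS sketch; the exact values are illustrative and depend on your typeface:

/* Three sizes only: body copy, subheaders or accents, and titles */
:root {
  --size-body: 0.75rem;   /* roughly the 9-12 point range */
  --size-accent: 1.75rem; /* between body copy and titles */
  --size-title: 5rem;     /* titles: go very large for contrast */
}

p  { font-size: var(--size-body); }
h2 { font-size: var(--size-accent); }
h1 { font-size: var(--size-title); }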

Something worth mentioning: these parameters are not solely mathematical. There is much to learn about how things look (regardless of size) once something is printed.

Along those lines, let’s discuss a note about titles. The best way to think of titles is to see them as a group of little cousins or a group of best friends who are really tight. The spaces (again, proximity) you create between each word in the title affect how the title is seen. In other words, do the words go together? If so, should there be a gap? This will become clearer in the discussion of the examples below:

We can see how the designer decided to create a sense of upward direction by setting the title along the column pointing towards the beginning of the text. The designer not only used size to create emphasis on the word CRANE but cleverly led the reader to the top. The rest is pretty straightforward, as we can see — using bullet points and space between the steps to conform to the sequential nature of the content.

Here we have three sizes used following the expected pattern (title, numbers to indicate sequence, and the text). But notice how the numbers are not the same size as the text. They are a size in between the title and the text, indicating that we should read the title first and then read in order.

In addition to the items we have added to our summary, we can add the following:

  • We can set one word of the title much larger than the rest.
  • We can direct the reader with the title to the beginning of the content by setting the title in an upwards orientation.
  • We can set numbers slightly larger than the text to indicate the reading order.

Now we will discuss variables in combination.

Size And Weight

We start here by combining two variables and still using proximity to create a hierarchy. We are still limiting ourselves to three size changes. In terms of weight, we can change the weight of words we think need to be seen but are not as important as the title or things like that. We can certainly make a word very large and bold. But, as you are experimenting, keep an eye on the balance of the page. Are things too heavy on one side? Is the page too busy on one side versus the other?

Size and weight experimentation also allow you to start playing with an improvisational grid. When making a letter or word really large, you may use it to establish visual alignments from it.

The example below is a page from a calendar I designed last Christmas holiday. Calendars are a great playground for exploring sizes and weights. In this instance, I made the number of the month the largest element on the page while also increasing its weight, but the name right under it — April — is very light or thin, creating a nice contrast between the two. The year is smaller but bold, as bold as the number above it. Though the contrast is sharp, the three pieces together create a nice typographic unit that forms the focal point of the piece. The right side is the list of the month’s dates in bold. The holidays are set in a light weight.

Of particular note: the words April and 2022 are tucked in under the vertical line of the number. This typeface has serifs (the little eyelashes at the bottom of the number). I aligned the two words under the number within its serifs. By doing this, I reinforce the visual alignment and the implied vertical lines of the number.

In addition to the items we have added to our summary, we can add the following:

  • We can make a word very large on the page. If you go big, go big.
  • We can bold the largest element. Though not always necessary, it can sometimes create a nice and juicy hierarchy.
  • We can create units or groupings by keeping the type contained within an imaginary box.
  • We can use visual alignments or improvised grids to reinforce the typographic grouping.

With what we have learned so far, we will move on to color.

Color

Discussing color could be an article all by itself. There are many resources available, both online and in print, about color. Indeed, Smashing has published a few articles by Cameron Chapman covering the subject more broadly.

In this article, however, we will focus on how color enhances or emphasizes hierarchy, how it helps to create a composition that keeps the eye inside of itself, and how it helps the eye navigate the page. For these reasons, when studying this variable, we limit the use of color to two or three colors. By limiting the use of color, we can focus on how it helps to establish a hierarchy in typography.

Factors That Affect The Use And Application Of Color

I do not mean we use color arbitrarily. It is important to read the content to establish a sense of the article. In other words, let’s assume we are designing a leaflet for a school-aged children’s birthday party. We would probably use vibrant colors and convey a sense of fun. Alternatively, if we are designing a leaflet for hospital patients with instructional material, perhaps the colors we use might be less vibrant, softer, and aimed to provide a sense of calm. There are usually three essential aspects to consider when using color and designing in general:

  • Content,
  • Audience,
  • Context.

The audience determines not only how the content is written but also the typefaces, sizes, weights, and overall design of the content. The context of the content also determines how we design: is the content meant to be read at a distance, as in a poster, or is the content meant to be read closer to us, as in a mobile device or a book? Because color affects how we perceive the content, we must become familiar with that content. Thus, reading the content given to us by our clients helps us make smart design decisions.

Now that we discussed factors that are important for the use of color, let’s look at examples of the use of color as it pertains to this exercise:

In the example above, we can see how all the color and attention have been dedicated to the title. Color has also been added to the name of the author of the instructions, but because of its small size, it does not create conflict. The layout takes advantage of making everything in the title large, which creates a nice pocket of space where the instructions can be easily tucked in. In this way, even though there is no color used on the body copy, it does not matter, because we have no choice but to land our eyes on the beginning of the text.

Above, we see how the background has been turned black. Once you read the title and read a little bit of the text, it makes sense. The text has a pessimistic and somber tone to it. Thus, no cheerful colors. With that, notice how the column of text is concentrated to the right side, creating asymmetry, once again creating a sense of visual instability to enhance the text’s meaning.

Below is a greeting card for Mother’s Day in the United States. I designed this card to honor my best friend’s mom. Though I am using a picture, it is used in a way that helps the text come together in the lowercase a. The lowercase a is the largest element on the page. Its bowl or empty space creates a nice place to tuck something in — a picture, pattern, letters, and so on. The rest of the letters are capitalized, but the lowercase a continues to be the focal point. We can also notice that there are four sizes here. I broke the rule of using only three sizes… but it does not feel that there is competition. The colors are vibrant because, in this case, Cuquin was a vibrant person, and the colors are needed to honor her.

In addition to the items we have added to our summary, we can add the following:

  • We can use color to convey personality and tone.
  • We can break a rule as long as it works within the system we have established and does not compete with the focal point.
  • We can create spaces within the letters or words to tuck in text, patterns, or pictures.

Our last variable to discuss is visual punctuation. Let’s take a look at how everything comes together in this variable.

Visual Punctuation

A common question I often hear from my students is, “What is visual punctuation?” We see it all the time but don’t think about it. Visual punctuation refers to the use of lines, shapes, symbols, and other geometric elements to enhance the hierarchy. Remember, the goal is always to enhance the hierarchy and help the reader’s eye move around the space.

Let’s see some examples of how visual punctuation is actually frequently used and applied in typographic compositions:

The example above uses visual punctuation in the form of the crane to cleverly point to the title. Then it repeats the use of white in the text at the beginning of the instructions. The similarity established creates unity, and the word FOLD pulls our eye back to the top. Notice how the designer also bolded the beginning of each instruction. We saw this before in the weight discussion. The use of the bold weight on each instruction helps us move from one to the other sequentially. It also helps to signal each new step without numbers.

The above example was designed to undermine the sometimes unnecessary rules and regulations that we find in places of worship. The point is not to follow all the rules but rather to focus on the object of affection. Here, a visual point is made to emphasize the conceptual point:

Circles are a great way to call attention to something. And so are the dotted lines. In this example, the dotted and playful line is colored in the same color as the circle on the top left. It points to the new number in the address aligned or set on the imaginary line the base of the number 2 provides. The rest of the address is provided following the same color palette. It creates a type of triangular movement from the top left to the middle right to the bottom left. Notice the sizes too. The numbers are the largest item on the card. There is a nice relationship between the numbers and the top left circle.

In addition to the items we have added to our summary, we can add the following:

  • We can and should use visual punctuation to enhance the meaning, the concept, or the message.
  • We can use only one color and one shape.
  • We can also use more than one color to create a hierarchy.

Now that we have discussed all the variables, it would be a good idea to see them all used together.

All Variables In Examples

We have discussed the variables of proximity, weight, size, size and weight, color, and visual punctuation. Take a look at the following examples and see how many you can identify:

Like these, we can find more examples of the variables used together. In fact, they are used and applied so ubiquitously that we don’t really see them independently from each other. When starting out with typography, it is a good idea to isolate what we see. This is true for any discipline: isolate and then combine them. Learn each one well and then start adding and mixing.

The poster below was designed for a youth program called Empowered. It was a research-based project led by Dr. Krista Mehari with the goal of empowering marginalized young teens to make effective and productive decisions. When she asked me to work with them, we had several brainstorming sessions. Watch, Wave, and Wait is a poster intended to help the kids memorize the process of dealing with emotions. In this poster, I broke some rules. While still sticking to the three-sizes rule, I managed to create a pattern using repetition of the outlined words, mimicking the internal thought process we engage in when upset: repeating “calm down, calm down,” counting, or something similar.

Your Turn!

At this point, after reading this article, you might want to give this process a try. If so, I have prepared a simple table for you to use. Below are some instructions:

  • Pick content that isn’t too long. For example, a two-page editorial would be too long. But a set of ten-step instructions would be better suited. An excerpt from an essay would be good too.
  • Do not use letter-size pages. Think smaller: eight inches by eight inches format would be best. We do this to focus on the content and not feel strange if the page does not look “full.” Your sketches, which should be small, will also be square.
  • Always do your sketches. Always do sketches first. It is the best way to literally think outside the box since you are outside the box, that is, the computer. Do as many sketches as you can think of.
  • For each of the variables, sketch several. Maybe think of four options for each.
  • Then, take the best two or three for each variable and put them on the computer.
  • When you print, and you should always print to “see” how the proportions are working, use crop marks to cut the page.
  • Once you have printed them, tape them to a wall away from you. But tape them upside down. It is the best way to assess proportions, space, hierarchy, balance, tension, and so on.
  • After you do this, revise them on the computer, print them again, and tape them upside down again.
  • Once you are certain you have attained a good typographic hierarchy, you can make a small booklet out of them. Below you can see the booklet my former student Anh Dang did for her project, How to Fold a Crane. Or you can create a virtual flipbook showing your masterpieces!

And you needn’t stop there. As you get comfortable with the process, perhaps you want to try designing a poster. Or tackle that two-page editorial layout? Give it a try!

Conclusion

So far, we have seen how these six variables can powerfully transform the content in any format. It is all about how creative we are about organizing things within the parameters. After all, that is what design is about — creative solutions within a set of parameters. The more you practice, the better you get at something, right?

This old adage has proven itself true time and again. It applies to typography and to design in general. Fine-tuning our senses comes with exposure and repetition. Take any opportunity to design and establish a hierarchy. Even small things like a business card can look incredible when you add a contrast of space, weight, size, size and weight, color, and visual punctuation. If we think about it, we are exposed to these variables daily and constantly. We just don’t look at them as isolated variables that can affect the entire composition. But they do. And once we know how to use them, we can push the boundaries and create pieces with more impact and intention.

Below I am listing resources to look at for more inspiration.

Resources

]]>
hello@smashingmagazine.com (Alma Hoffmann)
<![CDATA[State Of CSS Survey: Influence The Future Of CSS]]> https://smashingmagazine.com/2022/10/state-css-survey-2022/ https://smashingmagazine.com/2022/10/state-css-survey-2022/ Mon, 24 Oct 2022 19:00:00 GMT This year, I joined the team and helped design the survey together with the community, which led to a number of improvements. If you write CSS frequently, investing a few minutes to fill it in could come back to you hundredfold, since implementers decide what to work on based on the developer pain points identified through the survey every year. In fact, Chrome is funding work on the survey for this very reason.

Past Surveys

So, how did past surveys help web developers? Let’s look at the impact in Chrome, as described to us by Nicole Sullivan, Product Manager for Chrome at Google:

“I showed the ‘Missing features’ section to my team before the pandemic and we got to work on it. Several things on that list are underway.”

Indeed, literally everything in that list is now being worked on or finished unless there was no (stable) specification for it:

  • Container queries
    Size queries shipped in Chrome 106; style queries are available behind a flag.
  • Parent selector/:has selector
    Shipped in Chrome 105.
  • Nesting
    Currently underway, delayed a bit due to discussions in the CSS Working Group about last-minute changes to the syntax.
  • 🟡 Functions
    No specification to implement yet, but is being worked on in the CSS WG.
  • Scoping
    Experimental implementation in Chrome 105 behind a flag.
  • 🟡 Mixins
    No specification to implement yet, but ideas are being explored in the CSS WG.
  • Subgrid
    Implementation underway.

Let’s look at the corresponding section in the 2020 results. A lot of overlap, but some additional items:

The 2021 corresponding section includes roughly the same items, with one new thing: color functions. And lo and behold, the color functions for which there is a stable specification are being implemented in Chrome as we speak, and Chrome has funded specification work on the rest.

And it’s not just Chrome. The focus of Interop 2022 was largely shaped by these results.

What’s Next?

We’re taking on the world of styles and selectors to try and identify upcoming trends and to figure out what features and tools to learn next. What’s more, the survey results will also help browser vendors prioritize their roadmaps and work towards better compatibility between browsers.

What do you want to see more of in CSS? Better typography? New responsive layout features? New features to improve maintainability? Layout? Components? Something else? The sky is the limit! Make sure to share your CSS dreams with us in the survey, and they may well start coming true.

]]>
hello@smashingmagazine.com (Lea Verou)
<![CDATA[Futuristic CSS]]> https://smashingmagazine.com/2022/10/futuristic-css/ https://smashingmagazine.com/2022/10/futuristic-css/ Fri, 21 Oct 2022 12:00:00 GMT I run the yearly State of CSS survey, asking developers about the CSS features and tools they use or want to learn. The survey is actually open right now, so go take it!

The goal of the survey is to help anticipate future CSS trends, and the data is also used by browser vendors to inform their roadmap.

This year, Lea Verou pitched in as lead survey designer to help select which CSS features to include. But even though we added many new and upcoming features (some of which, like CSS nesting, aren’t even supported yet), some features were so far off, far-fetched, and futuristic (or just plain made-up!) that we couldn’t in good conscience include them in the survey.

But it’s fun to speculate. So today, let’s take a look at some CSS features that might one day make their way to the browser… or not!

CSS Toggles

The CSS checkbox hack has been around for over ten years, and it still remains the only way to achieve any kind of “toggle effect” in pure CSS (I actually used it myself recently for the language switcher on this page).
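
In case you have never used it, the hack pairs a visually hidden checkbox with a following sibling whose styles depend on the checkbox state. A minimal sketch (the markup, an input#toggle followed by a .panel sibling, is assumed):

/* The classic checkbox hack: a hidden checkbox drives the "toggle" */
#toggle {
  position: absolute;
  opacity: 0;
}
.panel {
  display: none;
}
#toggle:checked ~ .panel {
  display: block; /* reveal the panel when the box is checked */
}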

But what if we had actual toggles, though? What if you could handle tabs, accordions, and more, all without writing a single line of JavaScript code?

That’s exactly what Tab Atkins and Miriam Suzanne’s CSS Toggles proposal wants to introduce. The proposal is quite complex, and the number of details and edge cases involved makes it clear that this will be far from trivial for browser vendors to implement. But hey, one can dream, and in fact, an experimental implementation recently appeared in Chrome Canary!

CSS Switch Function

A major trend in recent years — not only in CSS but in society at large — has been recognizing that we’ve often done a poor job of serving the needs of a diverse population. In terms of web development, this translates into building websites that can adapt not only to different devices and contexts but also to different temporary or permanent disabilities such as color blindness or motion sickness.

The result is that we often need to target these different conditions in our code and react to them, and this is where Miriam Suzanne’s switch proposal comes in:

.foo {
  display: grid;
  grid-template-columns: switch(
    auto /
     (available-inline-size > 1000px) 1fr 2fr 1fr 2fr /
     (available-inline-size > 500px) auto 1fr /
   );
}

While the initial proposal focuses on testing available-inline-size as a way to set up grid layouts, one can imagine the same switch syntax being used for many other scenarios as well, as a complement to media and container queries.

Intrinsic Typography

Intrinsic typography is a technique coined by Scott Kellum, who developed the type-setting tool Typetura. In a nutshell, it means that instead of giving the text a specific size, you let it set its own size based on the dimensions of the element containing it:

Instead of sizing and spacing text for each component at every breakpoint, the text is given instructions to respond to the areas it is placed in. As a result, intrinsic typography enables designs to be far more flexible, adapting to the area in which it is placed, with far less code.

This goes beyond what the already quite useful Utopia Type Scale Calculator can offer, as it only adapts based on viewport dimensions — not container dimensions.

The only problem with Typetura is that it currently requires a JavaScript library to work. As is often the case, though, one can imagine that if this approach proves popular, it’ll make its way to native CSS sooner or later.

We can already achieve a lot of this today (or pretty soon, at least) with container query units, which lets you reference a container’s size when defining units for anything inside it.
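
As a rough sketch of that idea, assuming a hypothetical .card wrapper, container query units let a heading scale with its container rather than the viewport:

/* Type that responds to its container, not the viewport */
.card {
  container-type: inline-size;
}
.card h2 {
  /* 8cqi = 8% of the container's inline size, clamped to sane bounds */
  font-size: clamp(1.25rem, 8cqi, 3rem);
}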

Sibling Functions

It’s common in Sass to write loops when you want to style a large number of items based on their position in the DOM. For example, to progressively indent each successive item in a list, you could do the following:

@for $i from 1 through 10 {
  ul > li:nth-child(#{$i}) {
    padding-left: #{$i * 5px};
  }
}

This would then generate the equivalent of 10 CSS declarations. The obvious downside here is that you end up with ten lines of code! Also, what if your list has more than ten elements?

A solution currently in the works is a pair of functions: sibling-count() and sibling-index(). Using sibling-index(), the previous example would become:

ul > li {
  padding-left: calc(sibling-index() * 5px); 
}

It’s an elegant solution to a common need!

CSS Patterns

A long, long time ago, I made a little tool called Patternify that would let you draw patterns and export them to base64 code to be dropped inline into your CSS. My concept was to let you use patterns inside CSS. With CSS Doodle, Yuan Chuan had the opposite idea: what if you used CSS to create the patterns?

Now pure-CSS pattern-making has been around for a while (and recently got more elaborate with the introduction of conic gradients), but Yuan Chuan definitely introduced some key new concepts, starting with the ability to randomize patterns or easily specify a grid.

Obviously, CSS Doodle is probably far more intricate than a native pattern API would ever need to be, but it’s still fun to imagine what we could do with just a few more pattern-focused properties. The @image proposal might be a step in that direction, as it gives you tools to define or modify images right inside your CSS code.

Native HTML/CSS Charts

Now we’re really getting into wild speculation. In fact, as far as I know, no one else has ever submitted a proposal or even blogged about this. But as someone who spends a lot of their time working on data visualizations, I think native HTML/CSS charts would be amazing!

Now, most charts you’ll come across on the web will be rendered using SVG or sometimes Canvas. In fact, this is the approach we use for the surveys, via the data visualization library Nivo.

The big problem with this, though, is that neither SVG nor Canvas are really responsive. You can scale them down proportionally, but you can’t have the same fine-grained control that something like CSS Grid offers.

That’s why some have tried to lay out charts using pure HTML and CSS, like charting library Charts.css.

The problem here becomes that once you go past simple blocky bar charts, you need to use a lot of hacks and complex CSS code to achieve what you want. It can work, and libraries like Charts.css do help a lot, but it’s not easy by any means.

That’s why I think having native chart elements in the browser could be amazing. Maybe something like:

<linechart>
  <series id="series_a">
    <point x="0" y="2"/>
    <point x="1" y="4"/>
    <point x="2" y="6"/>
  </series>
  <series id="series_b">
    <point x="0" y="6"/>
    <point x="1" y="4"/>
    <point x="2" y="2"/>
  </series>
</linechart>

You would then be able to control the chart’s spacing, layout, colors, and so on by using good old CSS — including media and container queries, to make your charts look good in every situation.

Of course, this is something that’s already possible through web components, and many are experimenting in this direction. But you can’t beat the simplicity of pure HTML/CSS.

And Also…

Here are a couple more quick ones just to keep you on your toes:

Container Style Queries

You might already know that container queries let you define an element’s style based on the width or height of its containing element. Container style queries let you do the same, but based on that container’s — you guessed it — style, and there’s actually already an experimental implementation for it in Chrome Canary.

As Geoff Graham points out, this could take the form of something like:

.posts {
  container-name: posts;
}

@container posts (background-color: #f8a100) {
  /* Change styles when `posts` container has an orange background */
  .post {
    color: #fff;
  }
}

This is a bit like :has(), if :has() let you select based on styles and not just DOM properties and attributes. Which, now that I think about it, might be another cool feature too!

Random Numbers

People have tried to simulate a random number generator in CSS for a long time (using the “Cicada Principle” technique and other hacks), but having true built-in randomness would be great.

A CSS random number generator would be useful not just for pattern-making but for any time you need to make a design feel a little more organic. There is a fairly recent proposal that suggests a syntax for this, so it’ll be interesting to see if we ever get CSS randomness!

Grid Coordinates Selector

What if you could target grid items based on their position in a grid or flexbox layout, either by styling a specific row or column or even by targeting a specific item via its x and y coordinates?

It might seem like a niche use case at first, but as we use Grid and Subgrid more and more, we might also need new ways of targeting specific grid items.

Better Form Styling

Styling form inputs has traditionally been such a pain that many UI libraries decide to abstract away the native form input completely and recreate it from scratch using a bunch of divs. As you might imagine, while this might result in nicer-looking forms, it usually comes at the cost of accessibility.

And while things have been getting better, there’s certainly still a lot we could improve when it comes to form input styling and styling native widgets in general. The new <selectmenu> element proposal is already a great start in that direction.

Animating To Auto

We’ve all run into this: you want to animate an element’s height from 0 to, well, however big it needs to be to show its contents, and that’s when you realize CSS doesn’t let you animate or transition to auto.

There are workarounds, but it would still be nice to have this fixed at the browser level. For this to happen, we’ll also need to be able to use auto inside calc, for example calc(auto / 2 + 200px / 2).
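
One popular workaround, in browsers that can interpolate track sizes, is to transition a grid row from 0fr to 1fr instead of animating height itself. A minimal sketch (class names assumed):

/* Collapse/expand without knowing the content's height */
.accordion {
  display: grid;
  grid-template-rows: 0fr; /* collapsed: the row can shrink to nothing */
  transition: grid-template-rows 300ms ease;
}
.accordion.open {
  grid-template-rows: 1fr; /* expanded: the row grows to fit its content */
}
.accordion > .content {
  overflow: hidden; /* lets the grid item actually shrink below its content */
}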

Predicting The Future

Now let’s be real for a second: the chances of any of these features being implemented (let alone supported in all major browsers) are slim, at least if we’re looking at the next couple of years.

But then again, people thought the same about :has() or native CSS nesting, and it does look like we’re well on our way to being able to use those two — and many more — in our code sooner than later.

So let’s touch base again five years from now and see how wrong I was. Until then, I’ll keep charting the course of CSS through our yearly surveys. And I hope you’ll help us by taking this year’s survey!

Thanks to Lea Verou and Bramus Van Damme for their help with this article.

]]>
hello@smashingmagazine.com (Sacha Greif)
<![CDATA[What’s New In DevTools: Halloween Edition 🎃]]> https://smashingmagazine.com/2022/10/devtools-updates-halloween-edition/ https://smashingmagazine.com/2022/10/devtools-updates-halloween-edition/ Thu, 20 Oct 2022 14:00:00 GMT I can’t believe it’s already been nine months since I last wrote about the new DevTools features across browsers! You folks are due for an update. And what an update this is going to be!

Our friendly DevTools teams at Mozilla, Google, Microsoft, and Apple have once again been hard at work. And in this article, I’ll attempt to summarize the most impactful new features that are now available in browser developer tools.

So much has happened over these past few months that I may have missed a few things, but hopefully, you’ll find something that helps you in this article. There should be a little bit for everyone, whatever your level of experience with web development is, and whatever browser you use.

So, without further ado, let’s jump right in.

Note: Because Edge is based on Chromium, the open-source browser engine that also powers Chrome, all of the Chrome features listed below are also available in Edge (unless otherwise noted).

CSS Debugging

We’ve got a lot of long-awaited and profoundly impacting new features in CSS lately.

To name just a few:

  • Container Queries help us style components based on the size of their containers,
  • The :has() pseudo-class lets us style elements based on what they contain, and
  • CSS Cascade Layers make it easy to gracefully handle increasing complexity in our websites’ code.

But, although these features are amazing, shipping support for them in browsers is only part of the story. For people to comfortably use them, documentation and tooling are necessary too.

We’re in luck because both Container Queries and CSS Cascade Layers have associated DevTools features now.

Specifically, Container Queries are supported in Safari WebInspector and Chrome DevTools where information about the corresponding @container at-rules is displayed when viewing CSS in the Styles sidebar.

Here is an example in Safari WebInspector:

Chrome, Safari, and Firefox DevTools also now have support for CSS Cascade layers in their Elements (or Inspector) tools. The layer to which a particular rule belongs is now displayed next to that rule:

Maybe a little less popular, but still very useful, the hwb() CSS function makes it possible to express colors in a more natural way based on hue and an amount of whiteness and blackness.
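
If you have not tried it yet, here is a quick sketch of the syntax, with illustrative values:

/* hwb() takes a hue angle plus percentages of white and black to mix in */
.accent {
  color: hwb(200 10% 20%); /* a blue hue mixed with 10% white and 20% black */
}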

hwb() is now supported in all major browsers, and Chrome (and Chromium-based browsers), as well as Firefox, both have support for it in DevTools. That means they will show hwb in the autocomplete list when editing CSS in the Styles (or Rules) sidebar and will show the same color swatch used for other color formats too.

Next, it has also become easier to edit CSS in the Styles sidebar and get meaningful autocompletion results across browsers.

Chrome now previews all CSS variable values when autocompleting the var() function, not just colors, and it also displays @supports and @scope at-rules.

Safari now uses fuzzy matching when auto-completing CSS, making it much faster to type property names and values.

And Firefox added support for the color-mix() function in its auto-complete too.

Talking about Firefox, the browser has had the amazing Inactive CSS feature since 2019, which lets you know when a particular CSS declaration doesn’t have an impact on the current element.

Firefox continued to improve this feature over time and recently added more coverage for use cases such as warning when border-image-* is used on elements within a table with border-collapse or warning when width or height are used on ruby elements.

And, while we’re on the topic of inactive CSS, the Chrome team is actually working on a similar feature. In fact, it’s already available in Chrome (and all Chromium-based browsers) by enabling the CSS authoring hints experiment under Settings (F1) > Experiments in DevTools and should become available by default with Chrome 108.

Over the past few years, browser DevTools has gotten fantastic layout debugging tools to inspect, understand, and tweak grid and flex layouts. More recently, Safari has been adding more features in this area as well.

You can now use CSS alignment controls in the Styles sidebar and inspect Flexbox layouts too.

JavaScript Debugging

Let’s change gears a bit and talk about JavaScript debugging.

It’s very common to use external libraries and frameworks in a JavaScript codebase, to avoid having to re-implement things that have already been solved. For some years already, DevTools have allowed users to ignore third-party scripts when debugging (see docs for Chrome, Edge, Firefox).

Hiding scripts makes it easier to debug your code. It avoids ending up in foreign-looking library code when stepping through your own logic.

Recently, Firefox shipped a new feature that builds on top of this. You can now ignore pieces of code within a file. If you have a function that keeps getting called all the time but isn’t interesting for what you’re trying to debug, you can simply ignore that one function now.

Over in Chrome (and Chromium), a whole lot of small and not-so-small JavaScript debugging improvements were made:

The Page source tree was improved, and there’s now a way to group sources by authored (to show the original source files, thanks to source maps) or deployed (to show the actual files on the server).

It is now also possible to live edit the code of a function while debugging. If you’re paused at a breakpoint inside a function and want to test a quick fix, you can edit the code right there and save the file. The function will be restarted with the new code.

Next, stack traces for asynchronous operations are now reported entirely, showing you the full story of what happened before your current breakpoint, even if those things happened asynchronously.

Stack traces now also automatically ignore known third-party scripts, making it much easier to debug your own code.

Performance Investigation

Web performance is probably an area where we depend on tools even more than in other areas. You can’t really guess what’s running slow or eating too much memory until you profile your webpage. Fortunately, we keep on getting new options to investigate performance and memory problems, making our lives easier.

In fact, Chrome shipped an entirely new panel dedicated just to this!

Note: This panel is available in Chrome only and not in other Chromium-based browsers.

The Performance Insights panel shipped with Chrome 102 and has gradually gotten better and better, with recent additions like First Contentful Paint, Largest Contentful Paint, and Time To Interactive metrics, as well as text flash identification.

Think of the Performance Insights panel as a simpler version of the (sometimes scary) Performance panel:

Talking about the Performance panel, it recently got a brand new Interactions track in Chromium-based browsers, giving you a way to know when user events occur and how long they last, making it easier to debug responsiveness issues.

Edge has also been busy shipping new features in this area.

In the Performance tool, source maps can now be used to display original function names, even when sharing recorded profiles with other people:

In the Memory tool, you now get a summary of your heap snapshots organized by node types. Heap snapshots are hard to dig through, and these node types make it easier to see what is using the most memory on your webpage. There are also new ways to filter memory retainers to find memory leak culprits quickly.

Finally, Firefox has also been active in this area over the past few months. A number of years ago, Firefox created a brand new Performance tool for its own use. The idea, at the time, was to have a tool to debug performance problems in the browser code itself. But over time, the tool was adapted to become useful to web developers too.

And now, the final changes have been made, and the old Firefox DevTools’ Performance panel has fully been replaced with the new one:

Network Debugging

Debugging your frontend code is important, but sometimes problems can happen in the network layer of your app when communicating with your server. Thankfully, a few very useful features were recently added to the Network tools in various browsers.

In Edge, a new column was added to the Network log. The Fulfilled by column makes it easier to debug your service worker logic and Progressive Web Apps.

You can now discover straight away whether a request was handled by the service worker, the browser cache, or your server.

Firefox just shipped a completely redesigned version of its Edit and Resend feature. This feature has been available in Firefox for a long time already and is a great way to debug your server-side APIs or just test something quickly.

With it, you can right-click on any HTTP request displayed in the Network tool, select Edit and Resend, then manipulate the request parameters, headers, and body, and finally send the modified request.

With the recent redesign, it’s now much easier to edit the parameters before sending a new request.

And finally, Safari has added quite a few great features in this area too. You can now block network requests entirely, and you can also locally override requests by using regular expressions.

Editor Integration

Microsoft also makes VS Code, which is a very popular code editor amongst web developers, and some time ago, the Edge team released the Edge Tools extension for VS Code. The extension gives you an embedded browser and the browser DevTools right in VS Code alongside your code.

This year, the team continued to work on the extension and added more features. In particular, the following things are now possible:

  • The extension now has the Console and Application tools available. Previously, only the Elements and Network tools were available. Console logs used to go to VSCode’s output, but now they also go to the Console tool in the embedded DevTools.
  • The embedded browser has been completely redesigned and features a lot of emulation and rendering options to test your webpage under different conditions. For example, you can emulate different media types or the prefers-color-scheme media feature. You can also emulate different color vision deficiencies.
  • Next, you can launch the embedded browser and DevTools simply by right-clicking an HTML file in VS Code.
  • Finally, you can use VS Code’s Quick Fix options to automatically fix a number of issues reported by the extension in your code.

One more thing, if you like using Visual Studio (not VS Code), note that the team released an extension for it too. Check out the Edge Developer Tools for Visual Studio extension.

Test Automation

It’s possible to automate browsers nowadays, and it can be very useful for testing. With browser automation libraries such as WebDriver, Puppeteer, and Playwright, you can write tests that mimic what users would do on your website and verify that these scenarios continue working over time, as you make changes to your product.

This area is in constant evolution; in particular, the WebDriver spec is evolving with a new bi-directional version. Also, the Chrome DevTools team has been innovating quite a lot lately. They shipped a new tool called the Recorder last year and have been improving it over time.

Here are some of the new features that got added to the panel in recent months:

  • It’s now possible to wait until elements are visible and clickable before continuing a recording.
  • Element selectors are better supported.
  • You can import and export recorded flows as JSON.
  • Double-click, mouse over, and right-click events can be recorded too.
  • There’s also an option to replay a recording slowly or step-by-step.
  • And, finally, the Recorder tool now supports extensions to export recordings to a variety of test automation formats, such as Cypress, WebPageTest, Nightwatch, or WebdriverIO.

Miscellaneous Updates

Phew, that was a lot! But we’re not done. Let’s wrap this up with a list of somewhat random but very useful features.

Chrome made a lot of source maps and stack traces improvements, providing a more stable and easier-to-use debugging experience. If you usually debug your JavaScript code with logs, now may be a good time to give breakpoint debugging a try and see if it speeds things up for you.

Talking about logging, they also made it possible to properly style logs with ANSI escape codes.

Next, you can now pick colors from anywhere on your screen when changing colors in the Styles sidebar.

Note: This was made possible thanks to the EyeDropper API, which you can also use on your web pages.

Edge shipped a feature to publish and retrieve production source maps from Azure, making it much easier to securely debug your code in production even when you don’t want to publish source maps and original source code to your server.

Read more about publishing your source maps and consuming them from DevTools.

The team also opened a new public feedback repository on GitHub which you can use to report ideas, issues, and features or just discuss them.

Finally, they shipped a redesigned Welcome tool where you can find all sorts of useful videos and links to documentation.

Switching to Firefox, the DevTools team continued to keep their Compatibility panel up to date with new browser compatibility data, so you can get relevant cross-browser support issues right when debugging your CSS.

The team also made it possible to disable and re-enable any event listener for a given element in the Inspector.

Finally, Edge just shipped a cool new experimental feature that enables one to type commands and access common browser and DevTools features from one keyboard shortcut.

The Command palette experiment lets you enter commands in the browser by pressing Ctrl+Q (note that prior to Edge 108, the shortcut was Ctrl+Shift+Space).

And that’s it for today. I hope you found a few things that will be useful for your web development projects.

DevTools has gotten impressively full of features over the years, and it’s hard to keep track, but I hope this roundup makes it a little easier to discover new features.

And with this, thanks for reading, and if you have great DevTools tips you want to share with everyone, please drop us a comment!

]]>
hello@smashingmagazine.com (Patrick Brosset)
<![CDATA[Understanding Privacy: A New Smashing Book Is Here]]> https://smashingmagazine.com/2022/10/understanding-privacy-book-release/ https://smashingmagazine.com/2022/10/understanding-privacy-book-release/ Wed, 19 Oct 2022 12:45:00 GMT Meet Understanding Privacy, our brand new Smashing Book that makes sense of privacy and shows how to create inclusive, safe, and privacy-aware digital experiences. The eBook is now available, and the print edition ships in early December. To many of us, privacy might feel like a complex, abstract concept. We can’t hold privacy in our hands, we can’t touch it, we can’t explore its volume or shape with our eyes or our fingertips. Surely it’s a part of each of us, yet it feels so intangible and so invisible — beyond reach and out of view.

So what is privacy? What exactly does it mean? How do we consider, manage, and maintain privacy? And how do we design and build experiences that have privacy at their heart? That’s exactly what Understanding Privacy is all about: a practical guide to privacy on the web, from data collection and the use of personal data to creating safe, inclusive experiences for everyone.

About The Book

Understanding Privacy is a practical guide to the concepts and ideas that inform privacy on the web. It’s about all the fundamental values of privacy as a concept, which precede privacy as a legal compliance issue. It’s about the ways these concepts impact your work as a designer, a developer, or a project manager. And it’s about the ways you can adopt these principles to create a healthy, user-centric approach to privacy in everything you do.

Heather Burns, a tech policy and regulation specialist, explains what she has experienced working on privacy from every angle — human rights, law, policy, and web development — in the simplest and most positive way possible, so that you can understand, use, and adapt it in your work on the web right away.

All chapters in the book have custom illustrations, highlighting the topic of the book.

This book is not a legal reference manual. After reading it, you will have shifted your understanding from a negative view of privacy as a scary legal compliance obligation to a positive view of privacy as an opportunity to build and design a better web. Download a free PDF sample (11MB).

288 pages. Written by Heather Burns. Cover design by Espen Brunborg. eBook now available, print shipping in early December.

You’ll Learn:
  • Fundamental concepts, definitions and frameworks behind privacy and data protection,
  • How to bring a healthy approach to user privacy into everything you build and design,
  • Common privacy issues and how you can make a difference,
  • How to lay the ground for future developers, designers, and project managers to build a better web for tomorrow,
  • The obligations we have to safeguard user privacy and health data.

Who Is This Book For?

Understanding Privacy is for designers, developers, and project managers who want to understand what privacy really is about and who want to integrate a healthy approach to user privacy into everything they do, not only to put their users first today but also to help build a better web for tomorrow.

A double-spread of Understanding Privacy: an honest, practical and clear guide to privacy.

Table Of Contents
1. Privacy and You

In the book’s first section, “Privacy and You,” Heather reviews the fundamental concepts, definitions and frameworks behind privacy and data protection.

2. Privacy and Your Work

In the second section, “Privacy and Your Work,” Heather discusses how to integrate a healthy approach to user privacy into everything you do, whether you are a designer, a developer, or a project manager.

3. Privacy and Your Users

“Privacy and Your Users” covers issues around user privacy where you can make a difference. We’re going to learn how to consider the power dynamics of what you create, regardless of the role you play.

4. Privacy and Your Future

In “Privacy and Your Future,” Heather suggests a few critical areas that make the web a better place and lay the ground for future developers, designers, and project managers to build a better web for tomorrow’s users.

Postscript: Privacy and Health Data

In the final section, “Privacy and Health Data,” Heather addresses an even more pressing recent issue: the obligations we have to safeguard user privacy and health data, and how to do it as best we can.

288 pages. eBook now available, print shipping in early December. Written by Heather Burns. Cover design by Espen Brunborg.

About the Author

Heather Burns (@WebDevLaw) is a tech policy professional and an advocate for an open Internet which upholds the human rights to privacy, accessibility, and freedom of expression. She’s been passionate about privacy since she built her first web site in 1996, and has educated thousands of professionals worldwide on the fundamentals of a healthy approach to protecting people and their data. She lives in Glasgow, Scotland.

The book comes with practical guidelines and checklists for designing and building with privacy in mind.

Reviews and Testimonials
“Heather's broad knowledge, experience, and ability to articulate these complex matters is nothing short of astounding. I’ve learned an amazing amount from her. She always informs and entertains, and she does so from the heart.”

Mike Little, Co-Founder of WordPress
“No more excuses for overlooking privacy: Heather’s guide is an essential toolbox for user-centric product developers and for anyone interested in building a better web. Expect the full sweep, from historical context and core concepts in US and EU privacy practice, to practical tips and advice — dispensed in highly readable style.”

Natasha Lomas, Senior Reporter, Techcrunch.com
“Privacy is an oft-talked about and rarely understood part of our modern digital lives. Heather has been on the forefront for the battle of our privacy for decades. In this book she makes the case for why privacy is one of the foundational pillars on which our society rests, and why eroding our privacy means eroding a cornerstone of our lives, our communities, and our democracy. A must-read for anyone working on or with the web.”

Morten Rand-Hendriksen, Senior Staff Instructor, LinkedIn Learning
“Privacy can seem complicated but it doesn’t need to be. Heather covers all that you need to know with astonishing clarity. This book gives you all you need to understand and handle privacy work, and makes for great teaching material that experts could rely on.”

Robin Berjon, former Head of Data Governance at The New York Times
Technical Details
  • ISBN: 978-3-945749-64-7 (print)
  • Quality hardcover, stitched binding, ribbon page marker.
  • Free worldwide shipping from Germany, starting in early December 2022.
  • eBook is already available as PDF, ePUB, and Amazon Kindle.
  • Get the book (Print Hardcover + eBook)
Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Steven and Stefan are two of these people. Have you checked out their books already?

Touch Design for Mobile Interfaces

How touchscreen devices work and how people use them.


TypeScript In 50 Lessons

Everything about TypeScript, its type system and its benefits.


Smart Interface Design Patterns

Deck of 166 cards with common UX questions to ask.


]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[WordPress Full-Site Editing: A Deep Dive Into The New Feature]]> https://smashingmagazine.com/2022/10/wordpress-full-site-editing/ https://smashingmagazine.com/2022/10/wordpress-full-site-editing/ Mon, 17 Oct 2022 12:00:00 GMT Full-Site Editing is one of the main improvements added to the WordPress platform with version 5.9. It allows users to make sweeping changes to their website design and layout via a graphic interface, thus moving WordPress closer to the experience of a page builder. In addition, it offers new ways to create and customize themes.

These drastic changes have great consequences not only for the WordPress user experience but also for large parts of the platform’s ecosphere. For that reason, in this post, I am going to take a deep dive into WordPress Full-Site Editing (or FSE for short; there are also discussions about changing the name because it’s a bit of a mouthful).

In the following, I will first talk about what Full-Site Editing is and provide a tutorial on how to use it to make changes to your site. I will also examine the tools it provides for theme development and close with a discussion of how the arrival of this feature will impact developers, theme authors, and existing page-builder plugins.


Let’s get started.

Quick note: While FSE was first added to WordPress in version 5.9, it has since been further enhanced by WordPress 6.0. This post includes the latest changes.

What Is WordPress Full-Site Editing?

In a nutshell, Full-Site Editing means that WordPress now offers the ability to create and edit page templates and elements like headers and footers in a block-based graphic user interface.

This is part of phase two of the Gutenberg project and the preliminary culmination of a development that began with the introduction of the WordPress block editor in version 5.0. Since its initial release, the block workflow has branched out to other parts of the WordPress user interface. For example, you can now also use it for widget management.

One of the main goals of Full-Site Editing is to provide users with a singular workflow for making changes to their WordPress sites. In the past, you often needed to know several different systems to create a new menu, compose a page or post content, populate the sidebar, or adjust the color scheme. Even more complex changes required you to know how to edit page template files or write CSS. With Full-Site Editing, you can now make changes to everything in pretty much the same way (even if much of it still happens in different menus).

For everyday users, the benefit is reduced dependence on front-end developers. Site owners can now do a lot by themselves that, in the past, would have required technical chops or professional help, such as making changes to page templates. Plus, those changes are now visible in the editor right away instead of requiring you to go back and forth between the front end and back end of your site or even a code file.

At the same time, Full-Site Editing makes it easier for theme developers and designers to create markup and allows for quicker templating.

Main Features

Here are the main building blocks that Full-Site Editing consists of:

  • Page templates and template parts
    The central attractions are two new editor interfaces that allow you to customize page layouts similar to the normal content editor. You can move page elements around, change their design (colors, fonts, alignment, and so on), and add or remove them at will. The same is also possible for single template parts such as headers and footers. It’s even possible to edit them separately. Plus, you can export your templates to use and distribute them as themes.
  • Global styles and theme.json
    A common feature in WordPress page builder plugins, Full-Site Editing allows you to define global styling for your entire site, such as colors and typography, in a central place. In the past, you would have to change the styling in different locations (e.g., the Customizer and block editor). FSE also introduces the theme.json file, which acts as a nexus for different APIs and contains the majority of styling information in block-based themes.
  • Template blocks and block patterns
    Full-Site Editing adds new block types to WordPress and the WordPress editor. These include static blocks like the site logo but also dynamic elements such as blocks for navigation, post titles, and featured images. These change according to settings in other places. There is even a full-fledged query block that’s basically the WordPress PHP loop. It lets you display a list of posts anywhere on the page. Each block also comes with its own design and configuration options.

Sounds exciting? Then let’s dive into how to use this new WordPress feature practically.

How To Use Full-Site Editing To Customize WordPress

In the following, I will first go over how to take advantage of Full-Site Editing as a user. Later, we will also examine what makes this a useful feature for developers and theme designers.

Prerequisites For Using FSE

In order to take advantage of Full-Site Editing, the most important thing is that you have a WordPress site running at least version 5.9. You can also use a lower version, but then you need to have the Gutenberg plugin installed and up to date.

The second thing you require is a block theme. That’s a theme that can take advantage of the new feature. We will go over how these are different from classic themes later. For now, a good option is Twenty Twenty-Two, which also came out with WordPress 5.9. I will be using it for this Full-Site Editing tutorial. Refer to the resources section at the end for other options.

Finally, if you are giving WordPress Full-Site Editing a spin for the first time, I recommend using a staging site or local development environment for it. That way, you can make all the mistakes you want without anyone knowing.

Overview Of The User Interface

When you are logged into your test site, you can access Full-Site Editing via Appearance > Editor (also notice that the widget and Customizer options are missing).

An alternative way to get there is via the Edit Site link in the WordPress admin taskbar on the front end. Either will land you on the main editor interface.

Let’s walk through all the options available here:

  1. Top left corner: Let’s start here because it’s easy to overlook. A click on the WordPress logo opens up a menu to edit templates and template parts. It also has a link to return to the WordPress dashboard.
  2. Top bar: This should look familiar to anyone who has used the Gutenberg editor before. It contains the option to add blocks and block patterns, toggle between editing and selecting blocks, and undo/redo buttons. You can also open a list view of the current page, select different template parts, and jump directly to them.
  3. Top right corner: Contains the buttons to save changes and preview the design on different screen sizes. The gear icon opens up settings for templates as a whole and individual blocks. Beside that is the option to customize Global Styles. The three-dot icon contains display options for the editor, the ability to export templates and template parts, and access to the welcome guide.
  4. Center: In the middle is the main editing screen. Here is where you will make changes to page templates and work with blocks. It is also an accurate representation of what your design will look like and contains some controls to add blocks and other elements to the page.

Most of these are togglable, so you can keep open only the options you really need and want.

Global Style Presets

As mentioned above, you can access this menu by clicking the half-black, half-white circle in the top right corner. It offers two types of styling options: for the entire website and for individual blocks. What exactly is available here depends on your theme.

For Twenty Twenty-Two, you have options for typography, colors, and layout. We will get to those below. For now, let’s turn to the most exciting part of the Global Styles menu — the preset color themes. You can find them when you click on Browse styles.

In this menu, developers can offer styling presets for the entire theme. Hover over one of the options to see a preview of its color and font scheme, and then adopt the look for your entire theme with a single click.

I really like this feature as it offers users different versions of the same theme that they can easily use as jumping-off points for their own creations. It’s a bit like a theme shipping together with a number of its own child themes. You can also go back to the previous state by clicking the three dots at the top and choosing Reset to defaults.

Global Styles: Typography

When you click on Typography, you get to a submenu where you can choose whether to customize the styling for general text or links.

Another click gets you to a subsection where you can make the actual changes.

As you can see, it’s possible to customize the font family, size, line height, and appearance, meaning font-weight and slant. Options here depend on the theme as well. For example, under Font family, you can only choose System Font and Source Serif Pro as these are the only options Twenty Twenty-Two ships with.

However, this is also due to the fact that full support for (local) web fonts only became available in WordPress 6.0, and this theme came out before that.

Likewise, the numbers under Size represent defaults set by the theme authors. You also have the option to click on the little icon in the upper right corner to set a custom value.

Line height should be self-explanatory. The Appearance drop-down menu lets you choose font variations from a list.

If you pick any of these options, changes will automatically become visible on the editing screen.

If you don’t like the modifications you have made, you can always reset to defaults, as mentioned above.

Global Colors And Layout

Under Colors, you can change the hue of different elements (duh!).

What’s interesting here is the Palette option, where the theme can provide its own color palette, including gradients. This is besides the default options Gutenberg offers and custom colors that users can create.

Besides that, just like for typography, the theme provides different options for elements for which you can change colors. In Twenty Twenty-Two, that’s Background, Text, and Links.

After choosing any of these, you get to a screen where you can easily pick a color or gradient from available options or create your own. When you do, your pick automatically translates to what you see on the editing screen.

There is even a color picker that lets you set custom hues or enter color codes in RGB, HSL, or HEX format.

Finally, in this theme, the Layout option only allows you to add padding around the homepage.

Changing Styles For Individual Blocks

Styling defaults are not only available for the website as a whole, but you can also set them for individual blocks. For that, you find an option in Global Styles at the bottom where it says Blocks.

When you click it, you find a list of all the WordPress default blocks.

Click those in turn to find similar options to customize their design on a per-block basis. For example, below, I have set the link color globally to blue but set the color for the Post Title block (which is also a link) to orange. As a consequence, orange overwrites the initial value, and the title comes out in that color.

If you have ever worked with CSS, this principle should be very familiar. Set some site-wide standards at the top of the style sheet and then overwrite them with customizations further down in the cascade. It’s the same thing here.

Moving Blocks Around

Making layout changes works the same way as in the main WordPress block editor. Everything you see on the screen is made up of blocks. Some may be combined as groups or block patterns, but they are blocks nevertheless.

As such, you can move and customize them however you want. For example, the main part of the homepage is the Query Loop block, whose function is to serve up the latest blog posts. However, it, too, is made up of different blocks, namely Post Title, Post Featured Image, Post Excerpt, Post Date, Spacer, and Pagination.

If you want to change something about the way it looks, you can very easily do so. For example, you may click on the Post Featured Image block and then use the arrows in the toolbar to move it below or above the post title.

Alternatively, hover over the block and then use the Drag button (which looks like six dots) to move it to another position. If you hit Save after this, it will translate to the design on your site.

Using Block Options

In addition to the ability to move them around, every block also comes with its own settings. Like in the Gutenberg content editor, you can access those via the gear icon in the upper right corner. When a block is selected, you will see its customization options there.

What’s available in this place depends on the block you are working with. For example:

  • Post Featured Image: Has options to add margin and padding and to configure the image dimensions.
  • Pagination: Control the justification and orientation of its elements, wrapping, colors, and whether to show arrows, chevrons, or nothing as indicators.
  • Post Title: You can decide if the title should be a link, whether it opens in a new tab, and which rel= attribute it gets. You can also control colors and typography (including the ability to use Title Case) and add a margin.

You get the gist. Be aware that there are often more settings hidden that you can access via a plus or three-dot icon within the sections.

In addition, there are settings in the toolbar atop blocks when they are selected. You should not forget those, as they can be decisive. For example, in the case of the Post Title block, it’s where you determine which heading level (h1–h6) it uses, an important factor for SEO.

Adding And Removing Blocks

Of course, you can not only customize the available blocks but also add your own. This works the same way as in the content editor and comes with different options:

  1. Hover over an empty space in the template until a plus button appears, and click it. Then search or choose what you want from a list of blocks.
  2. Click existing blocks and use the options button in the top bar to pick Insert before and Insert after.
  3. Use the plus button in the upper left corner to see and search the full list of available blocks, then drag and drop them where you want.

In some places and inside existing blocks, you will also find icons to add more blocks. Plus, you have the ability to add block patterns, but we will talk about that further below.

That leaves the question: how is any of this helpful?

Well, it means you can easily add both static and dynamic content to the homepage. An example would be a heading and paragraph above the Query Loop block as an introduction to your blog.

Naturally, you can also remove blocks you don’t want just as easily. Simply select one and hit the Del or backspace button on your keyboard, or remove it via the block options.

You also have the ability to open a list view at the top (the icon with three staggered lines) and navigate to blocks from there or choose to delete them right away.

This option also gives you a great overview of the block structure of whatever part of the site you are currently editing.

Exchanging And Editing Template Parts

Template parts are entire sections inside templates that you can exchange as a whole and modify separately. In the case of Twenty Twenty-Two, that is the header and footer. You can see this in the template options on the right or when you click the arrow in the top bar.

Template parts are just groups of blocks on the page, so you can edit them as described above. However, what’s special about them is that themes can offer variations that allow you to change the entire part with one click.

For example, when you select the header in the example, it will show a Replace option in the settings bar at the bottom.

When you click it, you can see the variations the theme offers for this template part, as well as fitting block patterns.

Twenty Twenty-Two has several default options to choose from. Click any of them, and Full-Site Editing will automatically replace the entire header with the new option.

The same works for the footer, of which Twenty Twenty-Two also has a few to offer.

Customizing And Creating Template Parts

To edit template parts separately, click on the WordPress logo in the upper left corner to open the following menu.

At the bottom, you will find a menu item called Template Parts. Click it to see a list of all available template parts on your site.

Alternatively, you can also select a template part and choose to edit it from its options.

In the Template Parts menu, click Add New in the upper right corner to create additional ones. This is useful if you want to make another version of the footer, for example. The cool thing is that when you click it, besides asking for a name, WordPress automatically gives you templates for both header and footer, so you don’t have to start from scratch (unless you want to).

Besides that, you may also just click on existing parts in the list to edit them. This works the same way as in the main editor. The only thing that is different for template parts is that you have handles on the left and right that you can use to shrink and expand the size in order to check its behavior on smaller screens, i.e., mobile devices.

Just like a template file, anything you change and save here will translate to all pages and templates that use this part.

Finally, if you have set up a group of blocks on the main screen, you can turn them into a template part as well. Click the options in the main screen or in the list view and pick Make template part.

You need to give it a name and choose what area it belongs to. When you then save it, it is available as a template part.

Editing Page Templates

In the WordPress logo menu, there is also an item called Templates. Unsurprisingly, it contains a list of all page templates available on your site, from the 404 page through archives to single pages and single posts.

Page templates are usually files that control the basic layout of different types of content. If you change the template, all content of that type changes, too. With Full-Site Editing, you can edit existing templates and create your own in the user interface instead of a code editor.

Note, however, that FSE only lets you create standard page templates via Add New. More on that soon.

Something that comes in especially handy here (and also for template parts) is block patterns. These are predesigned layouts consisting of several blocks that you can add to website pages to instantly create entire sections. Examples include newsletter sign-up forms, pricing tables, and event lists, but also simple things like a styled divider or an image with a quote or caption.

Patterns allow you to put together entire designs quickly. They are easy to use, too! When editing a template, simply click the plus symbol in the upper left and go to the Patterns tab.

Filter the patterns via the drop-down menu at the top, e.g., by featured patterns, footers, pages, or buttons. If you find something you like, simply drag and drop it onto the page. You can also use the search field at the top to look for something specific, like a “header,” which will even show blocks from the WordPress block directory.

For a better overview, it helps to click on Explore to access the block pattern explorer.

This shows the block patterns in a larger window with the ability to search and filter them on the left. A click on a pattern you like automatically adds it to the template editor, where you can position and customize it as usual.

By the way, you can clear all customizations you have made to individual templates by clicking the three-dot icon in the Template menu and choosing the corresponding option.

Adding New Block Patterns

Besides using what’s available, you also can add external block patterns from the pattern directory.

Search and filter to your needs. If you find something you like, simply use the Copy Pattern button on the pattern page to get it on your site.

After that, go back to the Full-Site Editing editor and paste it. The pattern will then show up there.

If you like it and likely want to use it again, click the three dots in the options bar and choose Add to Reusable blocks.

That way, it will, from now on, be available in the block menu under Reusable.

Using The Standalone Templates Editor

There is a second way to edit and create page templates, which happens in the normal Gutenberg content editor. It offers less complexity than the site editor interface (e.g., no access to other templates) but works similarly.

Simply create a new post or page, then, in the document settings sidebar, locate the Template panel below Status & visibility.

Here, it lists your current template and makes other options available in the drop-down menu. You can edit what’s already there via the Edit button or create a new template by selecting New. Each opens the more limited template editing experience.

Edit and save the template in the same way as in the site editor. Anything you create this way will also show up in the list of templates in the Full-Site Editing editor.

Available Blocks For Templating

To make templating in FSE possible, the developers have added a number of dynamic blocks that pull content from the database, including blocks for the following:

  • Site title, tagline, and logo;
  • Post title, featured image, content, excerpt, author, avatar, author biography, date, tags, categories, next and previous post, read more;
  • Post comments, single comment, comments query loop, author, date, content, count, comment form, and link;
  • Archive title and term description;
  • Query loop, post list, post template, pagination;
  • Template part.

These are also available in the normal WordPress editor. There are more to come in future versions, and you can get early access to them via the Gutenberg plugin.

Preview And Save Changes

When you have made all the changes you want, you have the option to preview them in different screen sizes by clicking Preview in the upper right corner.

If you are satisfied, a click on Save will make the modifications permanent. WordPress will also list which templates and template parts your changes will affect.

That way, if you want to discard them in one place but keep them elsewhere, you can do so. Simply uncheck those components where you don’t want to save your changes. Click Save again, and your choices will translate to the front end of your site.

Full-Site Editing For Developers And Designers

Full-Site Editing is also a useful tool for developers. You can use the interface to create templates and then export them as files to add to and publish as themes.

A Quick Primer On Block Theme Architecture

To take advantage of this, you need to be aware that FSE-ready block themes have a different architecture than classic WordPress themes. For one, the template and template-part files for Full-Site Editing no longer contain PHP but are HTML files with block markup.

Instead of style.css, styling is mostly taken over by theme.json. This is where you set up styles for the block editor and individual blocks, styling presets, and CSS defaults (both for the front end and the back-end editor). In fact, theme.json is so powerful that, by modifying it, you can change the style of an entire website.

Last week I created a quick demo of how the visual aesthetic of Twenty Twenty-Two can be drastically changed through its theme.json settings. This example swaps the default json file for one with different font, color, duotone, and spacing values. pic.twitter.com/ab9tyGwLOS

— kjellr (@kjellr) October 22, 2021

This also allows you to switch between different sets of global styles (i.e., theme.json files) in the same theme. It’s a feature that only arrived in WordPress 6.0.

Relying mostly on theme.json greatly reduces CSS in other places. For example, Twenty Twenty-Two’s style.css is only 148 lines long. For comparison, its predecessor Twenty Twenty-One has almost 6,000 lines in its style sheet.

In addition, theme.json uses a whole different kind of markup. Yet, you could write an entire article just on this one file, so you are better served to start with the documentation for details.
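Still, to give you a feel for the format, here is a heavily trimmed, hypothetical theme.json (version 2, the format used since WordPress 5.9); the slugs and color values are made up for illustration:

{
  "version": 2,
  "settings": {
    "color": {
      "palette": [
        { "slug": "primary", "color": "#1a4548", "name": "Primary" },
        { "slug": "background", "color": "#ffffff", "name": "Background" }
      ]
    }
  },
  "styles": {
    "color": { "background": "var(--wp--preset--color--background)", "text": "#000000" },
    "elements": {
      "link": {
        "color": { "text": "var(--wp--preset--color--primary)" }
      }
    }
  }
}

WordPress turns each palette entry into a CSS custom property (such as --wp--preset--color--primary) that both the editor and the front end use, which is why this single file can restyle an entire site.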

The minimum requirements for a block theme are to have an index.php, style.css, and an index.html file in a templates folder. The latter is what marks the theme as a block theme to WordPress.

If you want to add template parts, you will place those in a parts folder. Having functions.php and theme.json files is optional. Finally, you can also include a styles folder for global style presets; for example, this can contain different color schemes for the theme.
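Put together, a minimal block theme might look something like the following sketch (the header, footer, and dark-style files are optional examples, not requirements):

my-block-theme/
├── style.css
├── index.php
├── templates/
│   └── index.html
├── parts/
│   ├── header.html
│   └── footer.html
├── functions.php   (optional)
├── theme.json      (optional)
└── styles/
    └── dark.json   (optional global style preset)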

Besides the changed structure, you also have different ways of creating template files when using a block theme. While you can still do it manually, using the new WordPress interface is also possible.

Using FSE Or The Template Editor To Create Theme Files

If you want to use the page editors to create templates, the first step is to simply set up your templates as described in the first part of this article. One important option to know about here: the Advanced settings of template-part blocks let you change the type of HTML element they render.

When satisfied, you can download all your theme files at once. The option for that is available in the More tools & options menu, which you access by clicking the three dots in the upper right corner of the Full-Site Editing screen.

Here, locate the Export option. It will automatically download all template and template part files as a zip. Simply unpack them, and you can use them for your theme.

Manually Creating Block Theme Templates

Of course, it’s also possible to create template files by hand. For that, you just need to be familiar with block markup.

For the most part, these are just HTML comments that contain the name of a block prepended with wp:. Some of them are self-closing. For example, here’s how to add a site-title block to the template:

<!-- wp:site-title /-->

Others, like paragraphs, function like brackets:

<!-- wp:paragraph -->
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
<!-- /wp:paragraph -->

You can also call template parts by stating the file name via slug. Here’s how to call footer.html:

<!-- wp:template-part {"slug":"footer"} /-->

You can even customize the HTML tag (default: div) via the tagName attribute:

<!-- wp:group {"tagName":"main"} -->

<!-- /wp:group -->
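Combining these pieces, a bare-bones templates/index.html could look like the following sketch (using the same simplified notation as the snippets above; a file saved by the editor would also contain the wrapper HTML elements and more block attributes):

<!-- wp:template-part {"slug":"header","tagName":"header"} /-->

<!-- wp:group {"tagName":"main"} -->
<!-- wp:query -->
<!-- wp:post-template -->
<!-- wp:post-title /-->
<!-- wp:post-excerpt /-->
<!-- /wp:post-template -->
<!-- /wp:query -->
<!-- /wp:group -->

<!-- wp:template-part {"slug":"footer","tagName":"footer"} /-->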

Here, too, it’s possible to use one of the editors above to create blocks and then simply copy the markup over if you are not sure. Plus, if you save a file and then add it to the respective location in the theme directory, it will also show up in the FSE editor.

For more details, refer to the resource list below.

Consequences Of Full-Site Editing For The WordPress Ecosphere

Besides providing a tutorial on how to use Full-Site Editing, I also want to talk about what its arrival means for the WordPress environment and those working there.

Job Opportunities For Developers And Designers

As is to be expected, an important question is whether this kind of feature will eliminate the need for professional developers and designers. Are they still needed when users can seemingly do everything themselves?

The short answer is “yes.”

Neither the emergence of WordPress itself, nor page builder plugins, nor any other technology that makes it easier for laypeople to build their own websites has eradicated the need for professional help. And it won’t happen this time, either.

While these days users don’t need help with every little thing (like changing colors or fonts), there are still lots of tasks that non-technical site owners simply cannot do with the available tools and where they need someone to do it for them. Plus, if you want a unique design rather than a template that hundreds or thousands of other people might also be using, you still need a designer and/or developer.

Plus, with great power also comes a great opportunity to screw things up (to loosely quote Spider-Man). Just because everyone has the tools at their disposal to make a well-designed website, that doesn’t mean everyone can. Design is more than mere technical ability.

What’s more, not everyone actually wants to do the work. They’d rather hire someone with the skills than acquire them from scratch. Finally, there is so much more to a successful website than “just” design, such as SEO, performance, security, and maintenance.

So, even if there are fewer obstacles to building websites, there is no reason to think that designers and developers are a dying breed. On the contrary, the switch to new tools offers plenty of opportunities to build services and products around them.

What Does FSE Mean For The Theme Market And Theme Designers?

So what about theme creators? Does everyone have to switch to block themes now?

Here, it’s first important to keep in mind that many themes have not yet switched to the Gutenberg block editor and that there are still many users on the Classic Editor. The latter will also continue to work for a while as the plugin will still be supported until at least the end of 2022.

Also, all of the features described above are optional, not mandatory. Therefore, the switch does not have to be immediate. You can even build hybrid themes that are not complete block themes but are able to use block templates. This option exists by default unless you specifically switch it off.

Nevertheless, in the long run, it’s probably a good idea to move your existing themes over to FSE capabilities. It’s something that WordPress users will likely grow to expect as it gives them more flexibility and power to customize themes on their own.

At the same time, as described above, you can also use Full-Site Editing to create themes with less coding, which can speed up development time. Plus, it offers new economic opportunities. Besides themes, theme authors can now offer extensions like blocks and block patterns, opening up whole new business models and opportunities.

Full-Site Editing vs. Page Builder Plugins

The existing page builder plugins are probably one of the biggest question marks. Will the likes of Divi, Elementor, and Co survive when WordPress can do a lot of what they were created to provide?

First of all, it’s unlikely that everyone will immediately switch away from the tools they are used to working with, so page builder plugins will likely stay around for a while. Also, many of them are currently more powerful than what Full-Site Editing is capable of in its present form. Another reason to stay with what you have.

Overall, these types of plugins have become very established over the last few years, to the point that they sometimes ship packaged with themes. For that reason, it’s improbable that they will suddenly lose all their market share. Despite that, Full-Site Editing will likely eat into it over time, especially with new users who get to know it as a normal part of WordPress.

Just like everyone else, page builder plugins will have to evolve and offer things that FSE doesn’t in order to stay competitive. One way would be to offer a kind of hybrid plugin that extends WordPress’ native page editor. Similar things already exist for Gutenberg and for the Classic Editor.

Full-Site Editing: Further Resources

If you want to get even deeper into the topic of WordPress Full-Site Editing, the official block editor documentation is a good place to start.

Final Thoughts On WordPress Full-Site Editing

Full-Site Editing is an exciting new chapter in the evolution of WordPress. It makes the design process easier and more uniform across the entire platform, offering new opportunities for content creators and users to customize their pages.

At the same time, FSE comes with interesting challenges for developers and theme designers. It changes the architecture of themes and introduces new markup and workflows. However, the feature also offers rewards in terms of new opportunities and a faster, less code-heavy way of prototyping and creating themes.

Above, we have gone over everything FSE has to offer in detail. My personal impression is that it is a well-thought-out feature, and I am impressed by how much it can already do. I’d definitely recommend adding it to your WordPress skill set.

Sure, there is room for improvement. For example, I could not find an option to change the hover or active color for links and other elements. Also, it is not as powerful as existing page builder plugins, though I am sure that new features will close the gap in the future. Yet, I really like its modularity and the ability to customize different theme parts in different ways. I’ll surely consider using it more in the future. How about you?

What are your thoughts on WordPress Full-Site Editing? How do you think it will impact users, developers, and the WordPress sphere as a whole? Please share your opinion in the comments!

]]>
hello@smashingmagazine.com (Nick Schäferhoff)
<![CDATA[Effective Communication For Everyday Meetings]]> https://smashingmagazine.com/2022/10/effective-communication-everyday-meetings/ https://smashingmagazine.com/2022/10/effective-communication-everyday-meetings/ Thu, 13 Oct 2022 15:00:00 GMT Good communication is not about forcing everyone to say “Yes” or selling people something they don’t want to buy. Good communication is about sharing your ideas as clearly as possible during the time allocated to the meeting. To do this well, you need a suitable structure based on the timing and the participants’ background knowledge.

In the following article, I will try to explain how to prepare this structure and give you some tips based on my own design experience.

We all like to listen to good stories. Good stories involve us and don’t require additional effort to follow the ideas they present. There is no simple answer as to what makes a story good, but we can subconsciously tell what makes one bad: an illogical storytelling structure, unclear motivation of the main hero, lengthy descriptions of obvious things, and so on. All those aspects impede our understanding of what is going on. I often observe something similar during regular meetings. At first, everybody follows their communication plan, but it often turns into a flow of random sentences and abbreviations, and by the end of the meeting, everyone sticks to their initial opinions.

If you want to avoid such poor meeting outcomes, follow me along. Here is a guide based on my personal experience, and I hope it will help you conduct more effective meetings, too.

Conscious Participation Or Conduction

There are two reasons why meetings happen:

  1. You want to present or discuss something with a few people.
  2. Somebody thinks inviting you to a conference/meeting is a good idea.

Each one of these reasons dictates a different preparation strategy.

Reason 1: You Are The Initiator

Start with the question, “Why does this meeting have to take place?”

Don’t get me wrong, but I haven’t met many people who actually like meetings.

The truth is that people do their work in between meetings. When you are in a meeting, you can’t do your work. You are distracted when you get a reminder about a meeting starting in 15 minutes. And after a one-hour session, you also need time to get back to work and switch your mind back to the things you were doing before the meeting. Being constantly distracted by meetings affects not only programmers but also designers.

So, rule No. 1 is:

In 99% of the cases, a meeting takes more working hours than the time preallocated to it.

What can we do to improve the situation?

  • Invite people whose work is directly related to your meeting’s key topic.
  • Prepare and share the agenda in advance. People must understand why they should be there and what you will expect from them.
  • Describe the goal and the expected results so participants can prioritize the information they get during the meeting.
  • Plan the timeslots based on the agenda and the number of participants. If you have twenty people in a 30-minute meeting, everyone gets a 90-second timeslot. Just remember this fact.
    Note: Here, I don’t mean presentations, where you deliver information to a group of people without needing feedback beyond a “yes” or “no” once the presentation ends.

Sometimes it’s hard to stop inviting people to a meeting because they may all look involved in the topic. So, ask yourself, “Will I cancel this meeting if that person cannot participate?” If the answer is “No,” then just go to the next participant in the list, and so on.

Rule No. 2:

The meeting is mainly for the tasks and decisions you cannot fulfill alone.

Of course, you can say that meetings keep the team together and help everyone understand other project areas better. There are approaches based on regular meetings, such as Scrum, and I agree with that. But we are talking about effective meetings now. If you invite ten people with an hourly rate of $50, a one-hour meeting costs you $500. Is this a reasonable price for an hour of small talk?

Reason 2: You Are Invited As A Participant

In this case, you are on the opposite side. It means that you can (and should):

  • Ask in advance for the agenda and the goal of that session if this info is missing.
  • Clarify what would happen if you could not join, and then decide on joining the meeting (or not).
  • Prepare your communication plan based on the timeslot you have.
  • Investigate the documentation available on the topic so you can have some background and ask the right questions.
Preparing The Communication Plan
“If you don’t know where you’re going, any road will get you there.”
As the Cheshire Cat once said to Alice in Wonderland

Or

“Having no destination, I am never lost.”
Ikkyu

A communication plan is like a lantern that helps you stay on the right track in the darkness of routine. When everything goes wrong, you can count only on this communication plan, or at least on plan “B,” which you should also have. And, unlike the Cheshire Cat suggests, you definitely need to know where you want to go!

Before starting, you need to ask yourself a few questions:

  • What do I want to get as a result of this meeting? Why do I go there?
  • How did I come up with the idea I want to present?
  • Why are other ideas, approaches, or alternative ways not so good? Have I explored all the options?
  • If I understand the weak side of my ideas, what strategy should I follow? Should we touch upon them in this meeting?

Answering these questions will help you clarify your vision first, as it’s impossible to communicate an idea effectively to somebody else if it is unclear even to you.

So, let’s get back to the things you want to communicate. Do you want approval for a new feature/mockup/technology, or do you want to gather various opinions and vote for the best option? Have you tried to solve it in a few different ways, and can you argue why other methods would work worse than this one? Are you open to discussing advice about improving your proposal, or do you think it’s already good enough?

Knowing the answers to these questions would make it much easier to move communication in the right, more productive direction. Otherwise, you will spend some time finding out the answers directly during the meeting.

Also, keep in mind the personal goals of the participants. When I was twenty, I was a freelancer and needed to chase projects. I completed over one hundred projects (mainly small-sized) and had a hundred kick-off meetings. When you are a freelancer, it means that during such kick-offs, you are trying to sell your service. At first, I was trying to explain the value of my designs, the excellent conversion of my landing pages, and how happy the users would be. Sometimes this approach worked, sometimes not.

I asked for feedback from a few people who didn’t want to buy my services. Once, I got honest feedback from a manager along these lines, “I’m a manager. I don’t care about customers or conversion rates. My goal is to complete the project on time and get my bonuses at the end of the year. Can you do it on time?”

When you double-check your communication plan, make sure it matches closely with the goals of the people you want to communicate with.

Conduction Of The Meeting

A Smooth Start

The worst thing you can do is go to a meeting, show something during the first minute and say, “This is it! Is it looking great?”

It’s as if you tried to summarize the movie Titanic as a man keeping a little cold piece of wood afloat in the Atlantic Ocean and a woman trying to fit on that ice-cold piece of wood. Yes, that’s the scene where we usually cry during the movie, but we cry not because of this scene itself but because of the long and important chain of events that brought Rose and Jack there.

So, start with a story that will help your audience dive into the right atmosphere. It could be the context of using the product, the moment when somebody encounters the problem you are attempting to solve, or something else that builds a smooth path to the first piece of visual information or the first thesis. Don’t let people come up with their own explanations of what they are seeing. Human imagination works faster than your talk, so getting people back to what you are saying will be challenging.

Also, it’s always good to deconstruct the idea and explain what came before it. Why are we doing this? What did we discuss during the previous meeting, and where did we stop? What limitations do we have? What did we learn about the market, users, or competitors that should be shared here? The people around you need this background knowledge to better understand the potential of your ideas. You shouldn’t tell everything you know, though. Remember that this is just an intro, and you need time for the “main dish.”

Switching Between Scenes And Ideas

When you move between details or screens, also tell your audience about the paths that led to dead-ends along the way. The filament bulb may have looked small and simple when it was ready for mass production, but Thomas Edison (and not only he) conducted more than 1,500 experiments before reaching that success. Talking about the failures shows the broader scope of the work and answers the unasked questions — why you turned this way and not some other way. Those things might not be obvious at all to people not as deeply involved in the project as you are.

The right side of the following illustration nicely demonstrates the way product designers work. I also strongly recommend talking about the “underwater side” during meetings. It helps to unfold the final result.

The most delicate thing is presenting complex ideas during a conversation. You need some time to outline a basic concept, yet people may start asking questions even before you finish the explanation. Most of those questions wouldn’t be asked if the participants could see the whole picture, and the solution to this is pretty easy.

Warn people that you will first only outline the idea or the solution and that, after that, you will go through it again step by step, answering all the questions in the proper context. This lets you talk to an audience that already understands your idea and can ask questions in the appropriate context. Using this approach, you can move smoothly from topic to topic without making people wait till the end of the meeting to ask questions.

Depending on your soft skills, you will have different levels of engagement with your audience. So I recommend checking this from time to time through a dialogue with the audience. The simplest way is to ask, “Are you still following me?” But that’s a bit too direct, and there are many other, subtler ways of ensuring the audience follows you. We want every minute of the meeting to be valuable, right?

A Few Hints As To How To Keep People Involved:

Ask people to highlight the aspects they know better than you.
It can be a few words about using the product in real life, market information, limitations, and so on.

Ask people to express their expectations about the next presentation slide.
It helps you focus on the crucial things for the participants. Then, even if those expectations are not reflected in your solution, you have already been informed. So you can explain why you didn’t cover them or propose a plan for how to do that.

Use questions that can be answered with a “Yes,” “No,” or only a few words.
“Does this feature make sense?”, “Are we happy with the positioning of this red button?”, “Is it clear what would happen if I click here?” and so on.

If it’s an online meeting, ask people to turn on their cameras (if possible).
It’s easier to understand what’s happening if you see the people’s eyes. But, for many different reasons, people do not always like using cameras. So, be polite and explain why it is important to you and be honest. Here is a list of reasons that I compiled:

  • “I feel uncomfortable if I don’t see the people I am talking to.”
  • “I think I sound like a radio DJ. Could you please turn your camera on?”
  • “I’m a bit nervous when talking to the empty screen.”
  • And so on.

It’s better to mention this when sending the invitations so that people have time to prepare their cameras and backgrounds (real or virtual) in advance.

Sometimes your questions will be met with silence. There can be a lot of reasons why this happens, but it usually means that people don’t understand what you mean, or maybe you have invited introverts to the meeting. :-)

You need feedback that helps you understand the situation and get out of this corner. And if nobody wants to provide feedback and critique, you must become your own first critic. We do not live in an ideal world, and you probably know your idea’s weak sides and limitations. So speak about them openly and show people that it’s OK for you (and for them!) to point out the things that go wrong because of your design decisions.

I don’t remember the name of the book where I read about this curious fact, but one company always included paid provocateurs in its focus groups to help people start talking about the issues in the company’s product. As a result, participants provided several times more feedback than before.

Managing Contexts

When everything looks fine, and your meeting appears to be on the right track, don’t forget to check whether all people share the same context. For example, when you say, “On this page, the user plots a route from A to B,” everybody thinks about their own experience. So you have to take a step back and clarify how users do this and what obstacles they will encounter, because your stakeholders usually are not your users; they stand on the business side.

Also, don’t forget about emotions. Here in Ukraine, we have a proverb: “The well-fed will never understand the hungry.” So, to understand users, you should try to walk in their shoes. Help stakeholders understand the user through emotions as well. What’s going on when a user opens your app? Does the user have enough time to learn how to complete their task? What would happen if not? All those things will not be apparent to people looking at a static design image on your slide. Tell the story! We all like stories.

Conclusion

Before wrapping things up, a note about meeting notes. It is common practice to take meeting notes, but personally, I’m not a fan of this. Of course, taking notes is OK if you have time or if somebody can write down the main ideas discussed. But don’t allow this to dictate the pace of the discussion. The goal is to move forward, not to pause because somebody needs time to write down a few sentences.

Meeting notes are about the past; in order to move ahead, you need an action plan. The action plan is a list of actions that must be done by predefined deadlines. All items in that list should be measurable and split into a few simple, understandable steps.

An example of an ineffective action plan:

  • Finalize the concept.
  • Think about better navigation.
  • Discuss the design concept with the users.

An example of an effective action plan:

  • Add a full search flow and a “Contact Us” page.
  • Create a minimum of two versions of the design concept with navigation based on best practices.
  • Conduct unmoderated user testing with at least five users.

Also, every item in the list should have a person assigned to it — an “action person.” This avoids situations where something is not done just because everybody thinks it’s not in their direct area of responsibility or on their to-do list.

I hope this article will help you organize more productive meetings, save everyone’s time, and be more efficient. And if you have questions, I’d happily reply to them in the comments below.

A few extra tips:

  • The initiator is responsible for achieving the meeting results. So if you see that the discussion moves in the wrong direction, you are the person who should get it back onto the right track.
  • Sometimes, somebody may say, “As we are all here, can we also discuss...” Nope, it doesn’t work like that. The correct thing to do is to cover the agenda first, and then, if everyone agrees (and there’s some meeting time left!), you can discuss something else.
  • If you invite people who don’t know each other, it’s best to start by introducing everybody.
  • You can record the meeting; it’s a good option for people who can’t join. But before the recording is made, ensure everybody is OK with that.
  • Don’t make people ask about dropping off the meeting if/when the time is up. Instead, if you need more time, ask about a possible extension 5-10 minutes before the end of the scheduled timeslot and then adjust your plan accordingly. Discussing ideas with people who are late somewhere else is a bad idea, especially if the topic is complex and rather important! Make sure that it’s comfortable for everyone to extend the meeting a bit; if it is not, leave some of the topics and discussions for another time.
  • The traditional approach dictates you should invite all people related to one or more of the topics on the meeting agenda. But if you can discuss and resolve some of the questions in smaller groups or one-to-one meetings, please go this way; it's much better. Ideally, every participant should be involved in every aspect of the meeting plan. (It doesn't feel right to join a meeting only because of a five-minute question that concerns you, placed at the end of the meeting time.)
  • Try to hear the others. Unfortunately, sometimes we are so focused on our vision and following a plan that we can ignore the voices around us. As a result, good ideas may not get a chance to be heard and evaluated at the right time.

Further Reading

Here are a few additional resources on the topic of conducting effective meetings:

  • "The Anatomy of an Amazon 6-pager," Jesse Freeman
    How does Amazon conduct its meetings? Amazon is well known for avoiding PowerPoint. Instead, before a meeting, enough copies of a six-page memo are printed for everyone in the room, and you're not allowed to read the document from your computer unless you are remote. This long read shares plenty of details about how Amazon and its meetings work.
    Note: During the pre-pandemic times, things were much different from what they are today, meetings included. People worked in offices and in person much more often.
  • Big Timer
    Some teams choose a very specific meeting duration, e.g., 18 or 23 minutes, with a large countdown displayed in front of everybody to keep the meeting on point and on time.
  • "Mental Health at Work" (leverage focus blocks), Cameron Moll
    In some teams, employees are allowed to block out hours for “focus work,” and no meetings can be scheduled during that time.
  • "Meetings," Paul Adams
    After a fantastic meeting, everyone feels like progress was made, that things are clearer than before, and that there is continued momentum. At the same time, meetings are also expensive. Consider the opportunity cost of people being at a meeting, as they could all be doing other important things.
  • Let’s Have Better Meetings!,” Laurel Hechanova & Patrick DiMichele
    How to run a tighter ship and make better use of everyone’s time.
  • "Why Standups are Useless and How to Run Great Product Team Meetings," Andy Johns
    There’s probably one flavor of a meeting that tops the charts in uselessness, and it’s the “status update” meeting.
  • "The Cost of Interrupted Work: More Speed and Stress," Gloria Mark (University of California, Irvine), and Daniela Gudith & Ulrich Klocke (Humboldt University, Berlin) [PDF document]
    This is a paper about productivity, namely about the time that meetings "steal" from people because of interruptions.
]]>
hello@smashingmagazine.com (Andrii Zhdan)
<![CDATA[Sustainable Web Development Strategies Within An Organization]]> https://smashingmagazine.com/2022/10/sustainable-web-development-strategies-organization/ https://smashingmagazine.com/2022/10/sustainable-web-development-strategies-organization/ Tue, 11 Oct 2022 15:30:00 GMT Sustainability is rightly becoming more widely discussed within the web development industry, just as it is an increasing concern in the wider public consciousness. Many countries around the world have committed to ambitious climate goals, although many have some way to go if they are to meet their targets.

All industries have a part to play, and that includes web design and development. The internet accounts for an estimated 3–4% of global emissions — comparable to the total emissions of some countries. That means we, as tech workers, are in a position to make choices that reduce the environmental impact of our industry. Not only that, but as a well-connected industry, one that builds digital products often used by thousands or millions of people, we are also relatively well-positioned to influence the behavior of others.

In this article, we’ll explore some of the ways that we, as individuals, can use our skills to have a positive environmental impact within a digital organization.

Presenting The Case For Sustainability

One of the first hurdles to implementing sustainable practices within an organization (or on a project) is convincing stakeholders that it is worth the investment. Any change of practice, however small, will probably require some time investment by employees. Being able to present a business case, and demonstrate that the benefits outweigh the costs, will help justify focusing resources in the area of sustainability.

Cost-Effectiveness

It would be great to think that for every company, the idea of building a better world trumps financial concerns. Unfortunately, with some exceptions, that’s generally not the case. But there are plenty of actions we can take that reduce our environmental impact and reduce costs (or increase revenue) at the same time.

For example, changing our database architecture to be more efficient could save on server costs. Making performance improvements to a client’s site could result in happier clients who send more business our way. Identifying where sustainability and cost savings overlap is a good place to start.

Regulation

Despite financial impact being a fairly obvious incentive, it’s not the only one, and perhaps not even the most significant. In his recent Smashing Conference talk, green software expert Asim Hussain mentioned that the biggest shift he is seeing is as a result of regulation — or the threat of regulation.

With many countries publicly committed to Net Zero goals, it is increasingly likely that companies will need to submit to regulation of their carbon emissions. The UK's commitment is enshrined in law, with carbon budgets set over many years. Many companies are already taking the long view and looking to get ahead of the competition by reducing their emissions early.

Being able to demonstrate as a company that you are committed to sustainability can open up a greater number of opportunities. Organizations working with the UK government to build new digital services, for example, are required to meet standards defined in their Greening Government ICT and Digital Services Strategy.

Accreditation

Companies that can demonstrate their environmental credentials may be eligible for certification, such as the ISO 14001 standard in the UK. In the case of Ada Mode, the company I work for, this has directly contributed to winning us more work and has enabled us to partner with much larger organizations.

Businesses that achieve BCorp status can benefit (according to the website) from “committed and motivated employees, increased customer loyalty, higher levels of innovation, and market leadership”.

Certainly, organizations positioning themselves as environmentally conscious increase their chances of attracting sustainability-minded candidates for recruitment as more and more people seek meaningful work.

It’s All In The Branding

Another great bit of advice from Asim’s talk at the Smashing Conference was on branding. The “Eco” movement has long been associated with being somewhat spartan, taking away something, using or consuming less. Rather than giving our users a reduced experience, reducing the environmental impact of our digital products has the opportunity to deliver our users more. Asim talked about Performance Mode in Microsoft Edge: switching on Performance Mode means users get a faster website, while also saving resources. “Performance Mode” sounds a lot more appealing than “Eco Mode”, which sounds like something is being taken away.

The Bigger Picture

When presenting the case for investing time in sustainability efforts in an organization, it can be helpful to explain the relevance of small actions on a bigger scale. For example, Smashing’s editor, Vitaly Friedman, makes a case for reducing the size and quality of images on a site by explaining the overall cost and CO2 savings when taking into account page views over an entire year.

"On the Fact Sheets page, we can save approx. 85% of images’ file sizes without a noticeable loss of image quality. With approx. 1,300,000 annual page views […] this makes for 5.2 Terabytes of wasted traffic. The difference is approx. EUR 1000–1650 in costs (on one single page!). Notably, this makes for 17.28 tons of CO2, which requires 925 trees to be planted, and that’s enough to drive an electric car for 295,000 km — annually."

Get Organized

Affecting change at an organizational level is nearly always easier when you build consensus.

Forming A Team

Forming a green team within your organization enables you to support each other to achieve climate goals and identify new opportunities. ClimateAction.tech has some resources on starting a green team at your place of work.

If your organization is small, or there is a lack of interest, then finding a supportive community outside of work (such as ClimateAction.tech) can help you stay motivated and lend their advice. It’s also a great idea to connect with teams working on sustainability in other businesses.

Planning

Once you have a team, you'll be in a good position to plan your actions. It can be hard to know where to focus your efforts first. One way to decide is to draw a simple impact/effort matrix and sort potential actions into its quadrants.

For example, switching to a green hosting provider could be a small-to-medium effort but result in a high impact. Re-writing your web app to use a more lightweight JS framework could be an extremely high effort for a relatively low impact.

The goal is to identify the areas where your efforts would be best focused. Low-effort/high-impact actions are easy wins and definitely worth prioritizing. Achieving a few aims early on is great for morale and helps keep the momentum going. High-effort/high-impact actions are worth considering as part of your long-term strategy, even if you can't get to them right away. Low-effort/low-impact tasks might also be worth doing, as they won't take up too much time and effort. High-effort/low-impact actions are generally to be avoided.

This isn't the only way to prioritize, however. Other factors to consider include workload, resources (including financial ones), and the availability of team members. For example, if your development team is stretched particularly thin, it may be more prudent to focus on goals within the areas of design or project management, or to prioritize actions that can be easily integrated into the development workflow of a current project.

It’s not always the case that every sustainability effort needs to be meticulously planned and scheduled. Jamie Thompson from intelligent energy platform Kaluza explained in a recent talk how a developer spent just 30 minutes of spare time removing database logs, resulting in a large reduction in CO2 emissions — enough to offset Jamie’s train journey to the event.

Watch the video of Jamie’s talk from Green Tech South West.

Measuring The Impact

Measuring the impact of your sustainability efforts is a thorny subject and depends on what exactly you want to measure. To get some idea of the impact of changes to our websites, we can use tools such as Website Carbon Calculator, EcoPing, and Beacon. These tools are especially helpful in making the impact more tangible by comparing the amount of CO2 emitted to common activities such as traveling by car, boiling the kettle, or watching a video.

Where sustainability goals align with cost-saving (such as reducing server load), we may be able to measure the impact of the financial savings we’re making. But we should be careful not to conflate the two goals.

Some Areas To Consider

If you’re not sure where to start when it comes to making your digital organization more sustainable, here are a few areas to think about.

Green Your Website

There are many ways we can reduce the environmental impact of the websites and digital products we build, from reducing and optimizing our images to minimizing the amount of data we transfer to implementing a low-energy color scheme. Tom Greenwood’s book, Sustainable Web Design is packed with advice for building low-carbon websites.
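One cheap win from that list is making sure offscreen images are only downloaded when they're actually needed. Here's a minimal sketch using IntersectionObserver; it assumes a hypothetical data-src attribute on each image that holds the real source:

// Swap in the real image source only once an image approaches the viewport
const lazyLoadObserver = new IntersectionObserver((entries, observer) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src; // data-src holds the full-resolution URL
    observer.unobserve(img); // each image only needs loading once
  });
}, { rootMargin: '200px' }); // start loading shortly before the image scrolls into view

document.querySelectorAll('img[data-src]').forEach((img) => lazyLoadObserver.observe(img));

Native lazy loading via the loading="lazy" attribute achieves much the same with no JavaScript at all, so treat the above as a fallback or as a base for more nuanced loading strategies.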

When the architectural website Dezeen discovered how polluting their website was, they took steps to massively reduce its carbon footprint, resulting in some huge savings — according to their measurements, equivalent to the carbon sequestered by 96,600 mature trees.

Green Hosting

Our choice of web host can have a big impact on our organization’s carbon emissions. Consider switching to a host that uses renewable energy. The Green Web Foundation has a directory.

Switch Your Analytics

Do you really need Google Analytics on every site you build? How about switching to a lower-carbon alternative like Fathom or Cabin instead? As a bonus, you might not need that cookie banner, either.

Developer Toolchain

Eric Bailey writes in this article for Thoughtbot:

“If I was a better programmer, I’d write a script that shows you the cumulative CO₂ you’ve generated every time you type npm install.”

Clean up your dependencies and remove the ones you no longer need, especially if you’re working on a project or package that will be installed by a lot of developers. Consider whether a static site might serve your needs better than a bloated WordPress project in some instances. (Eric’s article also includes a bunch of other great tips for building more sustainably.)

Hardware And E-Waste

Several tonnes of carbon go into producing our MacBooks, PCs, tablets, and mobile devices, even before we start using them. Do we really need to upgrade our devices as regularly as we do? We must also consider their disposal, which generates carbon emissions and produces harmful waste. It might be possible to repair a device or, if we do need to upgrade, to sell or donate the old one to someone who needs it, extending its useful life.

Gerry McGovern has written and spoken extensively about the problem of e-waste, including his book, World Wide Waste.

Electricity Use

It's probably fairly obvious, but reducing our electricity consumption by switching off or powering down devices when we don't need them and switching to a green electricity supplier could make a big difference.

Travel

Does your team regularly drive or fly for work? It might be helpful to set some organization-level targets for reducing carbon-intensive travel and to look for sustainable alternatives where possible. Driving and flying are among the most polluting activities an individual can engage in.

Larger Organizations

If you work for a big corporation, the battle to get climate action on the agenda may be uphill — but, on the flip side, your efforts could have a far more wide-ranging impact. Small changes to improve the carbon footprint of a site can have a big impact when that site is used by millions of people. And in an organization of thousands, corporate policies on sustainable travel and electricity use can save a lot of carbon emissions.

Many of the big tech companies have the potential to use their lobbying power for the greater good. As tech workers, we can help push it up the agenda. Check out Climate Voice for some of the ways tech workers are attempting to use their influence.

Spread The Word

A common argument against action on climate change is that individual actions don't make a difference. There's a great episode of the How To Save a Planet podcast series called Is Your Carbon Footprint BS? which confronts exactly this dilemma. You could argue that, taken individually, our actions are of little consequence. But all of our actions have the potential to spark action in others and ripple outwards. Dr. Anthony Leiserowitz, who runs the Yale Center for Climate Change Communication, is quoted in the episode saying:

“One of the single most important things that anyone, anyone can do. When people say, ‘What can I do about climate change?’ My answer, first and foremost, is to talk about it.”

By taking action at an organizational level, you’ve already extended your sphere of influence beyond just yourself. Encourage the people working at your company to be vocal about your climate commitments. We have the power to inspire action in others.

Inclusivity, Accessibility And Climate Justice

However we choose to take action on climate change and sustainability, it's imperative that no one is excluded. We should make sure our actions don't overtly or covertly place undue burdens on already-marginalized people, including those with disabilities, people of color, those living in developing countries, people with below-average incomes, or LGBTQ+ people. Climate change is already exacerbating inequalities, with the people causing the least pollution at the most risk from its effects. We must ensure that whatever climate action we take, we're making fair and equitable decisions that include everyone.

Resources

  • Jon Gibbins, founder and director of As It Should Be, a UK-based agency helping digital teams design and build accessible and sustainable products and services, recently delivered a talk about accessibility and sustainability. You can watch his talk, Leave No One Behind, on the Green Tech South West website.
  • The Environment Variables podcast from the Green Software Foundation has an episode on Accessibility and Sustainability.
  • Read more about climate justice in this article from Carbon Brief.
]]>
hello@smashingmagazine.com (Michelle Barker)
<![CDATA[Delightful UI Animations With Shared Element Transitions API (Part 2)]]> https://smashingmagazine.com/2022/10/ui-animations-shared-element-transitions-api-part2/ https://smashingmagazine.com/2022/10/ui-animations-shared-element-transitions-api-part2/ Mon, 10 Oct 2022 09:00:00 GMT In the first part of this article, we covered Shared Element Transitions API (SET API) and how we can use it to effortlessly create complex transitions for various UI elements, which would usually require a lot of JavaScript code or an animation library to achieve.

But what about smooth and delightful transition animations between individual pages? This is probably one of the most often requested features in the past few years because even with all the frameworks like React and Svelte and animation libraries like GSAP and Framer Motion, transitions between pages are still really difficult to do.

In this article, we’re going to showcase same-document page transitions commonly found in Single Page Applications and talk about the future of the Shared Element Transitions API for cross-document (Multi Page Application) transitions. I’ll also showcase some awesome React, Astro, and Svelte implementation examples from the dev community.

Note: Shared Element Transitions API is currently supported only in Chrome version 104+ and Canary with the document-transition flag enabled. Examples will be accompanied by a video, so you can easily follow along with the article if you don’t have the required browser installed.

In case you haven’t checked out my previous article on the topic, here is a quick rundown of this exciting new API so you can follow along with the article.

Shared Element Transitions API

With Shared Element Transitions API, the browser does a lot of the heavy lifting when it comes to animations, allowing us to create complex UI animations in a more streamlined way. The main part of the API is a JavaScript function that takes screenshots of the UI state before and after the DOM update and applies a crossfade animation:

const moveTransition = document.createDocumentTransition();
await moveTransition.start(() => {
  /* Take screenshot of an outgoing state */
  /* Update the DOM - move item from one container to another */
  targetContainer.append(activeItem);
  /* Take screenshot of an incoming state and crossfade the states */
});

Just by calling the start function, we get a neat and simple crossfade animation between the outgoing and incoming states.

As you can see, we can still navigate between the pages; the DOM is updated with the new content, and the URL in the browser changes. We are intercepting the browser's default navigation behavior and handling the page loading and DOM updates ourselves while we remain on the same page.

By just passing the DOM update function as a callback to the SET API start function, we get a neat crossfade transition between pages right out of the box!
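For context, the interception mentioned above might look roughly like the sketch below, built on the experimental Navigation API (Chrome-only at the time of writing). Here, fetchPage and updatePage are hypothetical helpers, and note that older Chrome builds spelled intercept() as transitionWhile():

navigation.addEventListener('navigate', (event) => {
  if (!event.canIntercept) return; // let the browser handle downloads, external links, etc.
  event.intercept({
    async handler() {
      const html = await fetchPage(event.destination.url); // fetch the target page markup
      const transition = document.createDocumentTransition();
      await transition.start(() => updatePage(html)); // swap the DOM inside the transition
    },
  });
});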

With just a few lines of CSS and JavaScript, we've created this beautiful transition animation. All we had to do was identify the shared element (the item image) on a clicked link using a page-transition-tag and signal the browser to keep track of its dimensions and position.

On backward navigation, we only get a crossfade animation on the shared element because the selector we used, document.querySelector(`a[href="${url.pathname}"] .card__image`), runs on the current page; when we navigate back to the items list page, the tag doesn't get applied, and the browser cannot match the shared element.

If we want to have the same animation on the shared element when navigating back to the item list page, we have to apply the tag to the correct image element in the grid after we fetch the contents of a target page.
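A rough sketch of that idea follows; newDocument (the parsed markup of the fetched list page) and itemUrl (the URL of the detail page we're leaving) are stand-ins, and product-image is the tag used on the detail page:

const backTransition = document.createDocumentTransition();
await backTransition.start(() => {
  // Swap in the fetched list page content
  document.body.replaceChildren(...newDocument.body.children);
  // Tag the grid image matching the item we came from, so the browser can pair the states
  const gridImage = document.querySelector(`a[href="${itemUrl}"] .card__image`);
  if (gridImage) {
    gridImage.style.pageTransitionTag = 'product-image';
  }
});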

Customizing Page-Transition Animation With CSS

Let’s use CSS animation properties to fine-tune the crossfade and item image animation. We want the crossfade animation to be quick and more subtle, and the more elaborate image animation to be more noticeable and have a nice custom easing function:

/* Speed up crossfade animations */
::page-transition-outgoing-image(*),
::page-transition-incoming-image(*) {
    animation-timing-function: ease-in-out;
    animation-duration: 0.25s;
}

/* Fine-tune shared element position and dimension animation */
::page-transition-container(product-image) {
    animation-timing-function: cubic-bezier(0.22, 1, 0.36, 1);
    animation-duration: 0.5s;
}

We also need to keep in mind that some users might prefer browsing the site without the complex animations with a lot of movement, so we want to either turn them off or provide more appropriate animation:

@media (prefers-reduced-motion) {
  ::page-transition-container(*),
  ::page-transition-outgoing-image(*),
  ::page-transition-incoming-image(*) {
    /* Or add appropriate animation alternatives */
    animation: none !important; 
  }
}

Crossfade animations now run faster, and the sizing and position animation runs a bit slower and with a different timing function.

In this example, I’ve only showcased code snippets relevant to creating page transition and SET API. If you are curious about the complete source code or want to check the demo in detail, feel free to check out the project repository and inspect the demo page.

Upcoming Cross-document Transitions

Proper Shared Element Transitions API support for MPAs is still a work in progress, but we can get a general idea of how it’s supposed to work from a rough draft by WICG.

In same-document transitions, we would use the pageTransition.start(/* … */) function to let the browser keep track of the DOM updates. As for cross-document transitions, we need to run the transition request function on the outgoing page before it's unloaded and run the transition on the incoming page once it's ready.

The following code snippets are copied from the WICG draft:

// In the outgoing page
document.addEventListener("pagehide", (event) => {
  if (!event.isSameOriginDocumentSwap) return;
  if (looksRight(event.nextPageURL)) {
    // This signals that the outgoing elements should be captured.
    event.pleaseLetTheNextPageDoATransitionPlease();
  }
});
// In the incoming page
document.addEventListener("beforepageshow", (event) => {
  if (
    event.previousPageWantsToDoATransition &&
    looksRight(event.previousPageURL)
  ) {
    const transitionReadyPromise = event.yeahLetsDoAPageTransition();
  }
});

Shared Element Transitions API for cross-document transitions would also need to be heavily restricted for security reasons.

Framework Implementation Examples

During the past few weeks, I saw some jaw-dropping examples of using Shared Element Transitions API for page navigation, added with progressive enhancement to various frameworks like React and Svelte.

Adding page transitions with SET API in frameworks can be tricky. In this example, we’ve had control over the DOM update functions, but this is not usually the case with front-end frameworks. Hopefully, as this API gets proper browser support and traction in the dev community, frameworks and router libraries will follow suit and provide better ways to integrate Shared Element Transitions API in navigation.

So, I would like to highlight some awesome examples of framework implementations from the community, especially those that provide reusable functions and hooks.

React / Preact

Jake Archibald created a great video playlist example using Preact, TypeScript, and a custom page transition hook. This example uses a custom router implementation to apply class names to the html element to customize the animation and toggle different types of animation depending on the navigation direction.

Astro

Maxi Ferreira implemented page transitions similarly as in our example with Navigation API but with Astro and explained the process in great detail on top of building a stunning movie database app.

He also worked with Ben Myers on this awesome guitar shop example with cool animations on both the guitar image and item background, which expands into a full description background container. This is also a good example of how to create elaborate but seamless and tasteful animations that add to the user experience.

Svelte

Moving onto Svelte, Geoff Rich built this neat fruit nutritional data app and explained the whole process in great detail in his article. SvelteKit has a built-in navigating store, and Geoff created a handy util function for intercepting page transitions and applying the Shared Element Transitions API depending on its browser support.

Conclusion

Shared Element Transitions API allows us not only to implement complex UI animations on a component level but also on a page level. Same-document transitions in Single Page Applications can be implemented today with progressive enhancement, and we can achieve impressive app-like page transitions with just a few lines of JavaScript and CSS. And all that without a JavaScript animation library! More popular and more complex cross-document transitions for Multi Page Applications are still a work in progress, and I can see it being a massive game-changer once it’s released and gains wider browser support.

Judging from the impressive examples we’ve seen online, some of which are featured in this article, we can safely say that the community is more than excited about this API. If you’ve built something awesome using Shared Element Transitions API, feel free to reach out on Twitter or LinkedIn and share your work.

Many thanks to Nikola Vranesic for reviewing the article for technical accuracy.

]]>
hello@smashingmagazine.com (Adrian Bece)
<![CDATA[A Roadmap For Building A Business Chatbot]]> https://smashingmagazine.com/2022/10/roadmap-building-business-chatbot/ https://smashingmagazine.com/2022/10/roadmap-building-business-chatbot/ Fri, 07 Oct 2022 08:00:00 GMT The widespread adoption of chatbots was imminent with the stellar rise and consolidation of instant messaging. However, the accelerated pace at which chatbots have evolved from accepting scripted responses to holding natural-sounding conversations has been unprecedented. According to Google Trends, the interest in AI Chatbots has increased ten-fold over the last five years!

As chatbots get smarter, more value-driven, and more user-friendly, customer-led demand for chatbot-driven interaction at every touchpoint has grown. As a result, businesses are scrambling to keep up with this requirement and investing aggressively in chatbot development; so much so that the market for chatbots is expected to reach a valuation of USD 102.29 billion in 2026, with an impressive CAGR of 34.75%.

On that note, it makes absolute sense to hop aboard this chatbot train. In fact, it is believed that about 80% of CEOs plan on revamping customer engagement with conversational chatbots. So, if you are looking for a way to get started, here is a step-by-step guide for chatbot creation!

What Is A Chatbot? And Why Does It Matter?

In its simplest sense, a “chatbot” is a portmanteau of human “chatter” as conducted by a “bot.” It is a software application or computer program that can simulate human conversations through speech or text.

Such a service can rake in the following advantages:

  • Despite the narrow view of chatbots as mere customer service agents, they are highly versatile. Regardless of the industry, businesses can leverage chatbots for sales and marketing activities, HR and personnel management, IT service helpdesks, knowledge management, and more!
  • Chatbots can help with collecting and qualifying leads, booking product demos, and engaging audiences which can increase sales by a whopping 67%!
  • Almost 88% of consumers reported a positive or neutral experience with a chatbot, thereby paving the way for customer satisfaction and retention.
  • 69% of consumers attempt to resolve issues by themselves, but only a third of companies offer this facility. Chatbots fill this gap by offering self-service options 24/7, without depending on human resources!
  • About 67% of buyers expect an immediate response to their marketing, sales, or customer service inquiry — “immediate” being 10 minutes or less. With chatbots in the picture, businesses can set up live communication channels and cater to this need nearly 3x faster!
  • Apart from lending scalability to business operations, it can reduce costs by 30%. The banking, retail, and healthcare industries are expected to save 2.5 billion hours and USD 11 billion through the implementation of chatbots by 2023.
  • They will not only keep businesses relevant with the current times but also future-proof them by laying the foundation for conversational marketing, automation, and so on.
  • Speaking of automation, chatbots can single-handedly manage 68.9% of end-to-end customer interactions and 80% of standard, repetitive tasks, thereby reducing the personnel load by 65%. More importantly, they can deliver these results without errors or bias.
  • In addition to increasing customer satisfaction levels through maximum engagement, omnichannel chatbots can reduce churn by plugging in leakage in a multi-touchpoint environment.
  • The personification of chatbots can humanize brands and help them foster emotional and meaningful customer relationships.

Given the whole suite of advantages listed above, the role of chatbots boils down to empowering businesses by making them human, accessible, responsive, and reliable. For some businesses, it can also function as a competitive differentiator that sets them apart from others. And as a culmination of these qualities, your organization can achieve the highest level of customer approval and satisfaction. Who doesn’t want that?

Roadmap For Building A Business Chatbot

Now that we’ve established that a chatbot can be a valuable addition to your business allow us to lead the way. We have formulated a detailed step-by-step guide on how to build a business chatbot — from identifying when it is the right fit, understanding the different types of chatbots, and defining goals, to launching and improving the chatbot. The following is your almanac to building a business chatbot:

Identifying Whether A Chatbot Is A Right Fit

While chatbots offer a plethora of advantages, it is not advisable for all businesses to hop on this trend. After all, the process of building a business chatbot from scratch is not easy on the pocket. Plus, it is a time-consuming and resource-intensive process.

Therefore, it would be wise for business leaders and C-suite executives to involve the crucial stakeholders and ask probing questions, such as those illustrated below, to audit the business processes and identify the need for a chatbot as a solution:

  • Is the workforce heavily engaged in routine, repetitive tasks?
  • Do customers often consult on similar topics?
  • Is the business looking to reduce the customer service load and corresponding costs?
  • Does the business have a multilingual customer base spread across time zones?
  • Does the business want to streamline sales and marketing activities?
  • Is the business anticipating peak internal and/or external interactions during specific seasons?
  • Is the business looking for ways to delight customers and stand out from the competitors?

If the answer is a resounding yes to the above questions, then it is time to give it serious thought. Apart from the intangible and non-monetary benefits, a cost-to-benefit analysis and Return on Investment (ROI) calculation can be performed to justify the impending financial implications.

Understanding The Different Types Of Chatbots

As cliché as it may sound, not all chatbots are created alike. Depending on various factors (some of which we discussed in the previous section), businesses can settle for something as simple as a menu-based chatbot. Alternatively, businesses with the resources and bandwidth could create something as elaborate as a conversational chatbot with sentiment analysis and Natural Language Processing (NLP) capabilities. Frankly, that's your decision to make. However, to help you in this direction, here's a quick overview of some of the commonly available options:

  • Menu-Based Chatbots
    Being one of the simplest forms of chatbots, these are essentially decision tree hierarchies presented in a chatbot form. Users can select the appropriate options that will eventually lead to the answer. They are often employed to answer FAQs.
  • Rule-Based or Linguistic Chatbots
    These chatbots construct conversational flows along the if-then-else logic. Developers often embed business rules in the form of algorithms, and accordingly, the chatbots will navigate the conversation. However, do bear in mind that the research stage of this chatbot development would have to be exhaustive as one has to account for every permutation and combination of questions that may be asked.
  • Keyword Recognition-Based Chatbots
    Unlike menu-based chatbots that participate passively, keyword recognition-based chatbots seek customized trigger words to respond appropriately. These chatbots often employ NLP, a subset of Artificial Intelligence (AI), to hybridize menu-based chatbots with keyword recognition.
  • Contextual Chatbots
    These chatbots are a powerhouse of possibilities. They combine a blend of Machine Learning and Artificial Intelligence to understand context, learn from previous iterations, and improve with use and time. They also retain user preferences to make the experience more personalized and customer-centric.
  • Hybrid Chatbots
    Hybrid chatbots feature cherry-picked models, architectures, and frameworks from any or some of the chatbot types discussed above, to cater to specific business requirements.
  • Voice Bots
    As smart speakers gain more traction amongst end users and digital assistants like Siri and Alexa become more popular, businesses are harnessing their capabilities to dive into voice bot development. The vernacular approach is found to be more in demand, as evidenced by a PwC survey that highlighted how 71% of consumers prefer voice searches over typing.

Knowing these basics will help one understand what is right for the business. Once that is out of the way, you can define the chatbot goals, as discussed in the subsequent section. (Or the following sections may shed light on how to make this decision. It works both ways!)

Defining The Chatbot Goals

Chatbots are as versatile as they are diverse. One could use them in lead generation activities, closing deals, upselling or cross-selling during sales, offering technical support, and more! As such, businesses must define their goal right at conception to stay focused on the outcomes.

To understand the primary objective of the chatbot, ask the beneficiary team or department the following questions:

  • What problem will the chatbot solve?
  • How will the chatbot solve the problem?

Outline the answer according to the SMART (Specific, Measurable, Achievable, Relevant, and Time-Bound) format, and one can stay laser-focused on the results and workflows during development. As an illustration, say that one wants to develop a chatbot to help with customer service requests. The SMART goal in this regard could be that the chatbot will automate 30% of customer queries regarding product details and specifications within the first three months of implementation.

Upon defining the roles and responsibilities of the chatbot, you can then move on to fleshing out additional details using the following steps.

Selecting The Chatbot Channels And Languages

Once the basics of the chatbot are outlined, it is time to make a few strategic decisions, namely the channel and the language. Though chatbots are commonly found on websites and landing pages, they can also be implemented across instant messaging platforms like WhatsApp or Messenger. As such, businesses must identify the viable channels they wish to target.

One will have to gather user data to make a well-rounded decision in this regard. For instance, determine the following:

  • What channels do the employees or customers prefer while availing of chatbot services?
  • How do the Key Performance Indicators (KPIs), such as response rates, Net Promoter Score (NPS), and so on, reflect across these environments?

Based on these findings, shortlist about three to five channels for a truly multichannel experience. Follow a similar approach while deciding on the language support offered by the chatbot. After determining the channels and languages, you can move on to assimilating such a solution within your business infrastructure.

Addressing The Integrations

Chatbots do not operate in a vacuum; they have to function in harmony with other tools and systems employed by your business. Making such provisions right at the design and development stage will lend immense flexibility and scalability to the chatbot and make it future-proof to some degree. Given this fact, one will have to work out integrations between the chatbot environment and disparate systems, such as the Customer Relationship Management (CRM) platform, calendar, cloud storage, maps, payment systems, and more. Again, one will have to take a call on the impact and importance of certain integrations and prioritize them over the others.

Hiring Talent

Businesses taking on the mammoth task of in-house chatbot development will have to put together a robust team that can lead the mission to success. Start by treating the process as any other digital transformation project. Prepare a requirement report containing all the features, specifications, and outcomes expected from the chatbot; one may have already done that by following the preceding steps.

After completing the homework, one will need to add the following members to the chatbot development team:

  • Project Manager: to oversee the chatbot development process, manage resources, budget, and timelines, and handle risks.
  • Flow Designer: to orchestrate the chatbot conversation flow.
  • User Researcher: to understand the needs and preferences of the target audience.
  • Copywriter: to work with the flow designer and create responses that are appropriate, branded, and consistent.
  • Developer: to carry out all the under-the-hood chatbot building by creating databases, building APIs, establishing protocols, and so on.
  • AI/ML trainer: to teach the AI/ML engine to understand user inputs better and make smarter decisions.
  • Data analyst: to extract meaningful, data-driven insights, whether related to chatbot performance or user.

Put together your A-team by handpicking experts from various fields, or we have another shortcut approach that you can try — outsourcing!

Outsourcing Chatbot Development

Building a chatbot development team and maintaining them can be costly, especially when complexities get involved. Not to mention, it is also a hassle to recruit and retain talent, sustain engagement and productivity, and keep everyone motivated towards the goal. In this scenario, outsourcing appears to be a viable alternative.

That being said, choosing the right chatbot development agency is key to the project’s success. Here’s how one can find the right fit:

  • Find an agency that can operate as the developer, strategist, partner, and tech enabler. In other words, they should have the business’s best interests in mind.
  • Explore their main services, target industries, typical clients, area of expertise, channels, languages, etc.
  • Seek recommendations from the networks or get in touch with agencies that may have developed chatbots for the business’s competition.
  • Make it a point to conduct thorough background research on the shortlisted agencies by checking out customer testimonials and feedback.
  • Request portfolios or past projects to establish credibility and have them vetted too.
  • Screen in agencies that understand business’s custom requirements and possess the skill and competencies to realize a unique chatbot.
  • Discuss the budget by comparing chatbot development packages. Account for any additional charges relating to integration, maintenance, post-development support, and so on.
  • Ask questions related to chatbot and source code ownership.

Sharing Project Requirements

Once a chatbot development team has been put together, or the expertise of an agency has been engaged, it is time to get down to business. Whatever chatbot-related details that one has collected, such as expectations, desired outcomes, and project deliverables, will have to be shared with the developer/development team. This information will act as a baseline for them and allow them to ideate and innovate without losing focus on the primary goal. At this point, the team might also refine the ideas or negotiate on certain terms so that your chatbot is realistically possible.

Upon discussing all these nitty-gritty details, prepare a roadmap with well-defined Key Performance Indicators (KPIs), milestones, deliverables, timelines, and so on.

Now that you’ve laid down the complete foundation, you can start chatbot development!

Developing The Chatbot

Based on all the inputs, the development team will work on creating the chatbot as per the business requirements. One may be required to actively participate in the development process, so be prepared to step up!

The Dry Run

Rather than delivering a fresh, out-of-the-box chatbot solution, the development team will first deliver a Proof of Concept (POC) or a Minimum Viable Product (MVP). This prototype, of sorts, will help to test the chatbot’s performance in real-world conditions.

A prototype gives the opportunity to identify and fix issues before they turn catastrophic. As such, the typical chatbot performance assessment will evaluate across the following spheres:

  • Personality: Does the chatbot communicate per the brand’s voice and tone?
  • Onboarding: How do new users respond to the chatbot, and how fast do they adopt it?
  • Understanding: How effectively can the chatbot understand customer requests?
  • Answering: What are the elements of a typical response? Are they relevant and contextual?
  • Navigation: How easily can chatbots go through an end-to-end conversation? What is its impact on engagement?
  • Error management: What is the fallback rate? How efficient is the chatbot in handling the resulting errors and recovering from them?
  • Security: How secure is the conversation? Is the chatbot compliant with any data security and privacy regulations?
  • Intelligence: Can the chatbot retain any information, and does it use it to gain context about the user?
  • Response times: How quickly can the chatbot resolve queries?

Of course, this list is purely indicative, and you will have to modify it according to your industry, chatbot type, roles and responsibilities, and other variables.

Launch

After the beta testing is a success, the development team will start creating the full version of the chatbot. All the necessary changes will be implemented, additions will be incorporated, and integrations will be tested. Once the chatbot performs to expectations, it is time to launch it into the real world!

Bup bup bup! Hold your horses because that’s not all! You have one final consideration to make for the continuous improvement of your chatbot. We discuss that in the final section.

Testing, Measuring, Tracking

Simply launching a chatbot is not enough. After all, chatbots are not “build it and forget it” things.

Businesses need to vigilantly monitor their performance to pave the way for continuous growth. First, the business will have to define certain KPIs and corresponding parameters that serve as benchmarks to analyze the chatbot's performance. Next, businesses will have to take note of every anomaly or discrepancy and find a justification for it. Then, corrections are performed as required to bring performance back to optimal values. Finally, the business will have to detect any underlying patterns.

In the meantime, you will also find opportunities to scale and expand your chatbot's capabilities based on market conditions, ongoing trends, customer feedback, and metrics like satisfaction rates. Such a holistic approach will allow your chatbot to improve with every iteration!

Tips And Tricks To Master Chatbot Development

Now that we have indulged in some heavyweight reading about chatbot development, let’s polish this knowledge with a few important tips and tricks to make the process fun:

  • Cash-strapped startups or small businesses looking for a DIY approach can find chatbot builder platforms online.
  • Grant a unique name and personality to the chatbot and maintain it consistently on all fronts.
  • Humans want to connect with humans. So, put in the effort to humanize the chatbot and make it friendly and approachable.
  • Train the chatbot to communicate in simple language so that it is easily understood by various users.
  • Delegate complex and repetitive tasks to chatbots but also grant users the opportunity to switch to a human agent.
  • Evaluate and optimize the bot regularly, but avoid overwhelming audiences by unveiling all features at once. Follow a graded approach.

Closing Thoughts

Considering that nearly 3 out of 4 customers expect to encounter a chatbot while visiting a business website, chatbots have become more of a necessity than a “nice-to-have” feature. Fortunately, the business already has a head-start in meeting this expectation, given that it has reached the end of this manual.

As a treat for your perseverance, here’s a quick recap of the detailed 10-step process:

  1. Start by identifying whether or not a chatbot is a right fit for your business model.
  2. Understand the different types of chatbots and identify the ones you need.
  3. After settling on a type, give your chatbot a purpose by defining its goals.
  4. Once the end goal is in view, iron out the details surrounding the language, channels, and so on.
  5. Work out the different integrations that will be required and find out ways to accommodate them.
  6. Recruit a team of experienced professionals or outsource the entire job (you do you)!
  7. Regardless of your choice above, document and share defined project requirements so you can get a chatbot as per your expectations.
  8. Get started with the chatbot development, and once ready, send the prototype on a dry run.
  9. Finally, when you have worked out the kinks, gear up for D-day as you launch the chatbot.
  10. Round up the chatbot development process with continuous testing, measuring, and tracking its progress so that it continues delivering value to your business.

Sure, the process seems overwhelming, but it is well worth the effort. All it takes is a little initiative to get the ball rolling, and once such an ambitious project gains momentum, it would put your business on the fast track to customer-friendliness.

So, get started with building the business chatbot now!

]]>
hello@smashingmagazine.com (Devansh Bansal)
<![CDATA[Node.js Authentication With Twilio Verify]]> https://smashingmagazine.com/2022/10/nodejs-authentication-twilio-verify/ https://smashingmagazine.com/2022/10/nodejs-authentication-twilio-verify/ Fri, 07 Oct 2022 06:00:00 GMT Building authentication into an application is a tedious task. However, making sure this authentication is bulletproof is even harder. As developers, it’s beyond our control what the users do with their passwords, how they protect them, who they give them to, or how they generate them, for that matter. All we can do is get close enough to ensure that the authentication request was made by our user and not someone else. OTPs certainly help with that, and services like Twilio Verify help us to generate secured OTPs quickly without having to bother about the logic.

What’s Wrong With Passwords?

There are several problems faced by developers when using password-based authentication alone since it has the following issues:

  1. Users might forget passwords and write them down (making them steal-able);
  2. Users might reuse passwords across services (making all their accounts vulnerable to one data breach);
  3. Users might use easy passwords for remembrance purposes, making them relatively easy to hack.

Enter OTPs

A one-time password (OTP) is a password or PIN valid for only one login session or transaction. Since it can only be used once, I'm sure you can already see how the usage of OTPs makes up for the shortcomings of traditional passwords.

OTPs add an extra layer of security to applications, which the traditional password authentication system cannot provide. OTPs are randomly generated and are only valid for a short period of time, avoiding several deficiencies that are associated with traditional password-based authentication.

OTPs can be used to substitute traditional passwords or reinforce the passwords using the two-factor authentication (2FA) approach. Basically, OTPs can be used wherever you need to ensure a user’s identity by relying on a personal communication medium owned by the user, such as phone, mail, and so on.

This article is for developers who want to:

  1. Learn how to build a full-stack Express.js application;
  2. Implement authentication with Passport.js;
  3. Use Twilio Verify for phone-based user verification.

To achieve these objectives, we'll build a full-stack application using Node.js, Express.js, and EJS, with authentication handled by Passport.js and protected routes that require OTPs for access.

Note: I’d like to mention that we’ll be using some 3rd-party (built by other people) packages in our application. This is a common practice, as there is no need to re-invent the wheel. Could we create our own node server? Yes, of course. However, that time could be better spent on building logic specifically for our application.

Table Of Contents
  1. Basic overview of Authentication in web applications;
  2. Building an Express server;
  3. Integrating MongoDB into our Express application;
  4. Building the views of our application using EJS templating engine;
  5. Basic authentication using Passport.js;
  6. Using Twilio Verify to protect routes.

Requirements
  • Node.js
  • MongoDB
  • A text editor (e.g. VS Code)
  • A web browser (e.g. Chrome, Firefox)
  • An understanding of HTML, CSS, JavaScript, Express.js

Although we will be building the whole application from scratch, here’s the GitHub Repository for the project.

Basic Overview Of Authentication In Web Applications

What Is Authentication?

Authentication is the whole process of identifying a user and verifying that a user has an account on our application.

Authentication is not to be confused with authorization. Although they work hand in hand, there’s no authorization without authentication.

That being said, let’s see what authorization is about.

What Is Authorization?

Authorization at its most basic, is all about user permissions — what a user is allowed to do in the application. In other words:

  1. Authentication: Who are you?
  2. Authorization: What can you do?

Authentication comes before Authorization. There is no Authorization without Authentication.

The most common way of authenticating a user is via username and password.

Setting Up Our Application

To set up our application, we create our project directory:

mkdir authWithTwilioVerify

Building An Express Server

We’ll be using Express.js to build our server.

Why Do We Need Express?

Building a server in Node could be tedious, but frameworks make things easier for us. Express is the most popular Node web framework. It enables us to:

  • Write handlers for requests with different HTTP verbs at different URL paths (routes);
  • Integrate with view rendering engines in order to generate responses by inserting data into templates;
  • Set common web application settings — like the port used for connecting, and the location of templates used for rendering the response;
  • Add additional request processing middleware at any point within the request handling pipeline.

In addition to all of these, developers have created compatible middleware packages to address almost any web development problem.
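To illustrate the first point, a complete Express app that handles one route can be this small (a throwaway sketch, separate from the app we're about to build):

const express = require('express');
const app = express();

// Handle GET requests to the root URL path
app.get('/', (req, res) => {
    res.send('Hello from Express!');
});

app.listen(3000, () => console.log('listening on port 3000'));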

In our authWithTwilioVerify directory, we initialize a package.json that holds information concerning our project.

cd authWithTwilioVerify
npm init -y

In keeping with the Model-View-Controller (MVC) architecture, we have to create the following folders in our authWithTwilioVerify directory:

mkdir public controllers views routes config models

Many developers have different reasons for using the MVC architecture, but for me personally, it’s because:

  1. It encourages separation of concerns;
  2. It helps in writing clean code;
  3. It provides a structure to my codebase, and since other developers use it, understanding the codebase won't be an issue.

Here's what each of the directories is for:

  • The controllers directory houses the controllers;
  • The models directory holds our database models;
  • The public directory holds our static assets, e.g., CSS files, images, etc.;
  • The views directory contains the pages that will be rendered in the browser;
  • The routes directory holds the different routes of our application;
  • The config directory holds information that is peculiar to our application.

We need to install the following packages to build our app:

  • nodemon automatically restarts our server when we make changes;
  • express gives us a nice interface to handle routes;
  • express-session allows us to handle sessions easily in our express application;
  • connect-flash allows us to display messages to our users.

npm install nodemon -D

Add the script below to the package.json file so we can start our server using nodemon:

"scripts": {
    "dev": "nodemon index"
    },
npm install express express-session connect-flash --save

Create an index.js file and add the necessary packages for our app.

We have to require the installed packages in our index.js file so that our application runs well, and then we configure them as follows:

const path = require('path')
const express = require('express');
const session = require('express-session')
const flash = require('connect-flash')

const port = process.env.PORT || 3000
const app = express();

app.use('/static', express.static(path.join(__dirname, 'public')))
app.use(session({ 
    secret: "please log me in",
    resave: true,
    saveUninitialized: true
    }
));

app.use(express.json())
app.use(express.urlencoded({ extended: true }))

// Connect flash
app.use(flash());

// Global variables
app.use(function(req, res, next) {
    res.locals.success_msg = req.flash('success_msg');
    res.locals.error_msg = req.flash('error_msg');
    res.locals.error = req.flash('error');
    res.locals.user = req.user
    next();
});

//define error handler
app.use(function(err, req, res, next) {
    res.render('error', {
        error : err
    })
})

//listen on port
app.listen(port, () => {
    console.log(`app is running on port ${port}`)
});

Let’s break down the segment of code above.

Apart from the require statements, we make use of the app.use() function, which enables us to register application-level middleware.

Middleware functions are functions that have access to the request object, response object, and the next middleware function in the application’s request and response cycle.

Most packages that have access to our application’s state (request and response objects) and can alter those states are usually used as middleware. Basically, middleware adds functionality to our express application.

It’s like handing the application state over to the middleware function, saying: here’s the state, do what you want with it, and call next() to pass control to the next middleware.
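
To make this concrete, here’s a minimal sketch of a custom application-level middleware (purely illustrative, not part of our app) that logs every request before passing control on:

//logs the method and URL of each incoming request
app.use(function(req, res, next) {
    console.log(`${req.method} ${req.url}`)
    next()
})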

Finally, we tell our application server to listen for requests on port 3000.

Then in the terminal run:

npm run dev

If you see app is running on port 3000 in the terminal, that means our application is running properly.

Integrating MongoDB Into Our Express Application

MongoDB stores data as documents. These documents are stored in MongoDB in JSON (JavaScript Object Notation) format. Since we’re using Node.js, it’s pretty easy to convert data stored in MongoDB to JavaScript objects and manipulate them.

To install MongoDB on your machine, visit the MongoDB documentation.

In order to integrate MongoDB into our express application, we’ll be using Mongoose, an ODM (Object Data Mapper).

Basically, Mongoose makes it easier for us to use MongoDB in our application by creating a wrapper around Native MongoDB functions.

npm install mongoose --save

In index.js, we require mongoose:

const mongoose = require('mongoose')

const app = express()

//connect to mongodb
mongoose.connect('mongodb://localhost:27017/authWithTwilio', 
{ 
    useNewUrlParser: true, 
    useUnifiedTopology: true 
})
.then(() => {
    console.log(`connected to mongodb`)
})
.catch(e => console.log(e))

The mongoose.connect() function allows us to set up a connection to our MongoDB database using the connection string.

The format for the connection string is mongodb://localhost:27017/{database_name}.

mongodb://localhost:27017/ is MongoDB’s default host, and the database_name is whatever we wish to call our database.

Mongoose connects to the database called database_name. If it doesn’t exist, it creates a database with database_name and connects to it.

mongoose.connect() returns a promise, so it’s always a good practice to log a message to the console in the then() and catch() methods to let us know whether the connection was successful or not.

We create our user model in our models directory:

cd models
touch user.js

In user.js, we require mongoose and create our user schema:

const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
    name : {
        type: String,
        required: true
    },
    username : {
        type: String,
        required: true
    },
    password : {
        type: String,
        required: true
    },
    phonenumber : {
        type: String,
        required: true
    },
    email : {
        type: String,
        required: true
    },
    verified: Boolean
})

module.exports = mongoose.model('user', userSchema)

A schema provides a structure for our data. It shows how data should be structured in the database. Following the code segment above, we specify that a user object in the database should always have name, username, password, phonenumber, and email. Since those fields are required, if the data pushed into the database lacks any of these required fields, mongoose throws an error.

Though you could create schemaless data in MongoDB, it is not advisable to do so — trust me, your data would be a mess. Besides, schemas are great. They allow you to dictate the structure and form of objects in your database — who wouldn’t want such powers?
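
As an illustration (this snippet isn’t part of our app), trying to create a user without the required fields makes mongoose reject the operation with a ValidationError instead of saving incomplete data:

const User = require('./models/user')

//username, password, phonenumber and email are missing
User.create({ name: 'Ada' })
    .catch(err => console.log(err.name)) //logs "ValidationError"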

Hashing Passwords

Warning: never store users’ passwords as plain text in your database.
Always hash the passwords before pushing them to the database.

The reason we need to hash user passwords is this: in case someone somehow gains access to our database, we have some assurance that the user passwords are safe — because all this person would see would be a hash. This provides some level of security assurance, but a sophisticated hacker may still be able to crack this hash if they have the right tools. Hence the need for OTPs, but let’s focus on hashing user passwords for now.

bcryptjs provides a way to hash users’ passwords and to compare a given password against the stored hash.

npm install bcryptjs

In models/user.js, we require bcryptjs:

//after requiring mongoose
const bcrypt = require('bcryptjs')

//before module.exports
//hash the password before saving it to the database
userSchema.pre('save', async function() {
    const salt = await bcrypt.genSalt(10)
    this.password = await bcrypt.hash(this.password, salt)
})

//compare a candidate password against the stored hash
userSchema.methods.validPassword = async function(password) {
    return bcrypt.compare(password, this.password)
}

The code above does a couple of things. Let’s see them.

The userSchema.pre('save', callback) is a mongoose hook that allows us to manipulate data before saving it to the database. In the callback function, we generate a salt with bcrypt.genSalt() and then hash the password with bcrypt.hash(), replacing the plain-text this.password (the password on the userSchema) with the resulting hash. If anything goes wrong during hashing, the returned promise rejects, and the save is aborted.

Next, mongoose provides a way for us to append methods to schemas using schema.methods.method_name. In our case, we’re creating a method that allows us to validate user passwords. By assigning a function to userSchema.methods.validPassword, we can easily use bcryptjs’ bcrypt.compare() method to check whether a password is correct or not.

bcrypt.compare() takes two arguments: the password that is passed in when calling the function and this.password, the hash stored on the userSchema. It resolves to true or false.

I prefer this method of validating users’ passwords because it behaves like a property on the user object. One could easily call user.validPassword(password) and get true or false as a response.
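
For example, a quick usage sketch (inside an async function, with made-up credentials):

const user = await User.findOne({ email: 'ada@example.com' })
const isValid = await user.validPassword('some-password-guess')
console.log(isValid) //true or false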

Hopefully, you can see the usefulness of mongoose. Besides creating a schema that gives structure to our database objects, it also provides nice methods for manipulating those objects — something that would have otherwise been somewhat tough using native MongoDB alone.

Express is to Node, as Mongoose is to MongoDB.
Building The Views Of Our Application Using EJS Templating Engine

Before we start building the views of our application, let’s take a look at the front-end architecture of our application.

Front-end Architecture

EJS is a templating engine that works with Express directly. There’s no need for a different front-end framework. EJS makes the passing of data very easy. It also makes it easier to keep track of what’s going on since there is no switching from back-end to front-end.

We’ll have a views directory, which will contain the files to be rendered in the browser. All we have to do is call the res.render() method from our controller. For example, if we wish to render the login page, it’s as simple as res.render('login'). We could also pass data to the views by adding an additional argument — which is an object to the render() method, like res.render('dashboard', { user }). Then, in our view, we could display the data with the evaluation syntax <%= %>. Everything with this tag is evaluated — for instance, <%= user.username %> displays the value of the username property of the user object. Aside from the evaluation syntax, EJS also provides a control syntax (<% %>), which allows us to write program control statements such as conditionals, loops, and so forth.

Basically, EJS allows us to embed JavaScript in our HTML.

npm install ejs express-ejs-layouts --save

In index.js, we require express-ejs-layouts:

//after requiring connect-flash
const expressLayouts = require('express-ejs-layouts')

//after the mongoose.connect logic
app.use(expressLayouts);
app.set('view engine', 'ejs');

Then:

cd views
touch layout.ejs

In views/layout.ejs,

<!DOCTYPE html>
<html lang="en">
    <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
        <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.6.3/css/all.css" integrity="sha384-UHRtZLI+pbxtHCWp1t77Bi1L4ZtiqrqD80Kn4Z8NTSRyMA2Fd33n5dQ8lWUE00s/" crossorigin="anonymous">
        <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.min.css">
        <link rel="stylesheet" href="/static/css/app.css">
        <link rel="stylesheet" href="/static/css/intlTelInput.css">
    <title>Node js authentication</title>
    </head>
    <body>

    <div class="ui container">
        <%- body %>
    </div>
    <script
        src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
        integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"
        crossorigin="anonymous"
    ></script>
    <script src="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.min.js"></script>
    </body>
</html>

The layout.ejs file serves as an index.html file, where we can include all our scripts and stylesheets. Then, in the div with the classes ui container, we render the body — which is the rest of our application views.

We’ll be using Semantic UI as our CSS framework.

Building The Partials

Partials are where we store re-usable code so that we don’t have to rewrite it every single time; all we do is include a partial wherever it is needed.

You could think of partials like components in front-end frameworks: they encourage DRY code, and also code re-usability. Think of partials as an earlier version of components.

For example, we want partials for our menu, so that we do not have to write code for it every single time we need the menu on our page.

cd views
mkdir partials

We’ll create two files in the /views/partials folder:

cd partials
touch menu.ejs message.ejs

In menu.ejs,

<div class="ui secondary  menu">
    <a class="active item" href="/">
        Home
    </a>
    <% if(locals.user) { %>
        <a class="ui item" href="/users/dashboard">
        dashboard
        </a>
        <div class="right menu">
        <a class='ui item'>
            <%= user.username %>
        </a>
        <a class="ui item" href="/users/logout">
            Logout
        </a>
        </div>
    <% } else {%>
        <div class="right menu">
        <a class="ui item" href="/users/signup">
            Sign Up
        </a>
        <a class="ui item" href="/users/login">
            Login
        </a>
        </div>
    <% } %>
    </div>

In message.ejs,

<% if(typeof errors != 'undefined'){ %> <% errors.forEach(function(error) { %>
    <div class="ui warning message">
        <i class="close icon"></i>
        <div class="header">
            User registration unsuccessful
        </div>
        <%= error.msg %>
    </div>
<% }); %> <% } %> <% if(success_msg != ''){ %>
<div class="ui success message">
    <i class="close icon"></i>
    <div class="header">
        Your user registration was successful.
    </div>
    <%= success_msg %>
</div>
<% } %> <% if(error_msg != ''){ %>
<div class="ui warning message">
    <i class="close icon"></i>
    <div class="header">

    </div>
    <%= error_msg %>
</div>
<% } %> <% if(error != ''){ %>
<div class="ui warning message">
    <i class="close icon"></i>
    <div class="header">

    </div>
    <%= error %>
</div>
<% } %>

Building The Dashboard Page

In our views folder, we create a dashboard.ejs file:

<%- include('./partials/menu') %>
<h1>
    DashBoard
</h1>

Here, we include the menu partial so we have the menu on the page.

Building The Error Page

In our views folder, we create an error.ejs file:

<h1>Error Page</h1>
<p><%= error %></p>

Building The Home Page

In our views folder, we create a home.ejs file:

<%- include('./partials/menu') %>
<h1>
    Welcome to the Home Page
</h1>

Building The Login Page

In our views folder, we create a login.ejs file:

<div class="ui very padded text container segment">
    <%- include ('./partials/message') %>
    <h3>
        Login Form
    </h3>

    <form class="ui form" action="/users/login" method="POST">
    <div class="field">
        <label>Email</label>
        <input type="email" name="email" placeholder="Email address">
    </div>
    <div class="field">
        <label>Password</label>
        <input type="password" name="password" placeholder="Password">
    </div>
    <button class="ui button" type="submit">Login</button>
    </form>
</div>

Building The Verify Page

In our views folder, we create a verify.ejs file:

<%- include ('./partials/message') %>
<h1>Verify page</h1>
<p>please verify your account</p>
<form class="ui form" action="/users/verify" method="POST">
    <div class="field">
        <label>verification code</label>
        <input type="text" type="number" name="verifyCode" placeholder="code">
    </div>
    <button class="ui button" type="submit">Verify</button>
</form>
<br>
<a class="ui button" href="/users/resend">Resend Code</a>

Here, we provide a form for users to enter the verification code that will be sent to them.

Building The Sign Up Page

We need to get the user’s mobile number, and we all know that country codes differ from country to country. Therefore, we’ll use the intl-tel-input plugin (https://intl-tel-input.com/) to help us with the country codes and validation of phone numbers.

npm install intl-tel-input

  1. In our public folder, we create a css directory, a js directory, and an img directory:

     cd public
     mkdir css js img

  2. We copy the intlTelInput.css file from the node_modules\intl-tel-input\build\css\ folder into our public/css directory.
  3. We copy both intlTelInput.js and utils.js from the node_modules\intl-tel-input\build\js\ folder into our public/js directory.
  4. We copy both flags.png and flags@2x.png from the node_modules\intl-tel-input\build\img folder into our public/img directory.

We create an app.css in our public/css folder:

cd public/css
touch app.css

In app.css, add the styles below:

.iti__flag {background-image: url("/static/img/flags.png");}

@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
    .iti__flag {background-image: url("/static/img/flags@2x.png");}
}
.hide {
    display: none
}
.error {
    color: red;
    outline: 1px solid red;
}
.success{
    color: green;
}

Finally, we create a signup.ejs file in our views folder:

<div class="ui very padded text container segment">
    <%- include ('./partials/message') %>
    <h3>
        Signup Form
    </h3>

    <form class="ui form" action="/users/signup" method="POST">
    <div class="field">
        <label>Name</label>
        <input type="text" name="name" placeholder="name">
    </div>
    <div class="field">
        <label>Username</label>
        <input type="text" name="username" placeholder="username">
    </div>
    <div class="field">
        <label>Password</label>
        <input type="password" name="password" placeholder="Password">
    </div>
    <div class="field">
        <label>Phone number</label>
        <input type="tel" id='phone'>
        <span id="valid-msg" class="hide success">✓ Valid</span>
        <span id="error-msg" class="hide error"></span>
    </div>
    <div class="field">
        <label>Email</label>
        <input type="email" name="email" placeholder="Email address">
    </div>

    <button class="ui button" type="submit">Sign up</button>
    </form>
</div>
<script src="/static/js/intlTelInput.js"></script>
<script>
    const input = document.querySelector("#phone")
    const errorMsg = document.querySelector("#error-msg")
    const validMsg = document.querySelector("#valid-msg")

    const errorMap = ["Invalid number", "Invalid country code", "Too short", "Too long", "Invalid number"];
    const iti = window.intlTelInput(input, {
        separateDialCode: true,
        autoPlaceholder: "aggressive",
        hiddenInput: "phonenumber",
        utilsScript: "/static/js/utils.js?1590403638580" // just for formatting/placeholders etc
    });
    var reset = function() {
        input.classList.remove("error");
        errorMsg.innerHTML = "";
        errorMsg.classList.add("hide");
        validMsg.classList.add("hide");
    };
    // on blur: validate
    input.addEventListener('blur', function() {
        reset();
        if (input.value.trim()) {
        if (iti.isValidNumber()) {
            validMsg.classList.remove("hide");
        } else {
            input.classList.add("error");

            var errorCode = iti.getValidationError();
            errorMsg.innerHTML = errorMap[errorCode];
            errorMsg.classList.remove("hide");
        }
        }
    });
    // on keyup / change flag: reset
    input.addEventListener('change', reset);
    input.addEventListener('keyup', reset);

    document.querySelector('.ui.form').addEventListener('submit', (e) => {
        if(!iti.isValidNumber()){
        e.preventDefault()
        }
    })
</script> 

Basic Authentication With Passport

Building authentication into an application can be really complex and time-consuming, so we need a package to help us with that.

Remember: do not re-invent the wheel, except if your application has a specific need.

passport is a package that helps out with authentication in our express application.

passport has many strategies we could use, but we’ll be using the local-strategy — which basically does username and password authentication.

One advantage of using passport is that, since it has many strategies, we can easily extend our application to use its other strategies.

npm install passport passport-local

In index.js, we add the following code:

//after requiring express
const passport = require('passport')

//after requiring mongoose
const { localAuth } = require('./config/passportLogic')

//after const app = express()
localAuth(passport)

//after app.use(express.urlencoded({ extended: true }))
app.use(passport.initialize());
app.use(passport.session());

We’re adding some application-level middleware to our index.js file — which tells the application to use the passport.initialize() and the passport.session() middleware.

passport.initialize() initializes passport, while the passport.session() middleware lets passport know that we’re using sessions for authentication.

Do not worry much about the localAuth() function. It takes the passport object as an argument, and we’ll create it just below.

Next, we create the needed files in our config folder (which we already created earlier):

cd config
touch passportLogic.js middleware.js

In passportLogic.js,

//file contains passport logic for local login
const LocalStrategy = require('passport-local').Strategy;
const mongoose = require('mongoose')
const User = require('../models/user')
const localAuth = (passport) => {
    passport.use(
        new LocalStrategy(
        { usernameField: 'email' }, async(email, password, done) => {
            try {
                const user = await User.findOne({ email: email }) 

                if (!user) {
                    return done(null, false, { message: 'Incorrect email' });
                }
                //validate password
                const valid = await user.validPassword(password)
                if (!valid) {
                    return done(null, false, { message: 'Incorrect password.' });
                }
                return done(null, user);
            } catch (error) {
                return done(error)
            }
        }
    ));
    passport.serializeUser(function(user, done) {
        done(null, user.id);
    });

    passport.deserializeUser(function(id, done) {
        User.findById(id, function(err, user) {
            done(err, user);
        });
    });
}
module.exports = {
    localAuth
}

Let’s understand what is going on in the code above.

Apart from the require statements, we create the localAuth() function, which will be exported from the file. In the function, we call the passport.use() function that uses the LocalStrategy() for username and password based authentication.

We specify that our usernameField should be email. Then, we find a user that has that particular email — if none exists, then we return an error in the done() function. However, if a user exists, we check if the password is valid using the validPassword method on the User object. If it’s invalid, we return an error. Finally, if everything is successful, we return the user in done(null, user).

passport.serializeUser() and passport.deserializeUser() help support login sessions. Passport will serialize and deserialize user instances to and from the session.

In middleware.js,

//check if a user is logged in
const isLoggedIn = async(req, res, next) => {
    if(req.user){
        return next()
    } else {
        req.flash(
            'error_msg',
            'You must be logged in to do that'
        )
        res.redirect('/users/login')
    }
}
const notLoggedIn = async(req, res, next) => {
    if(!req.user) {
        return next()
    } else{
        res.redirect('back')
    }
}


module.exports = {
    isLoggedIn,
    notLoggedIn
}

Our middleware file contains two route-level middleware functions, which will be used later in our routes.

Route-level middleware is used by our routes, mostly for route protection and validation (such as authorization), while application-level middleware is used by the whole application.

isLoggedIn and notLoggedIn are route-level middleware that check whether a user is logged in. We use them to restrict routes that should only be accessible to logged-in users (and, with notLoggedIn, to keep logged-in users away from pages like login and signup).

Building The Sign-Up Controllers

cd controllers
touch signUpController.js loginController.js

In signUpController.js, we:

  1. Check for users’ credentials;
  2. Check if a user with that detail (email or phone number) exists in our database;
  3. Create an error if the user exists;
  4. Finally, if such a user does not exist, we create a new user with the given details and redirect to the login page.

const mongoose = require('mongoose')
const User = require('../models/user')

//sign up Logic
const getSignup = async(req, res, next) => {
    res.render('signup')
}
const createUser = async (req, res, next) => {
    try {
        const { name, username, password, phonenumber, email} = await req.body
        const errors = []
        const reRenderSignup = (req, res, next) => {
            console.log(errors)
            res.render('signup', {
                errors,
                username,
                name,
                phonenumber,
                email
            })
        }
        if( !name || !username || !password || !phonenumber || !email ) {
            errors.push({ msg: 'please fill out all fields appropriately' })
            reRenderSignup(req, res, next)
        } else {
            const existingUser = await User.findOne().or([{ email: email}, { phonenumber : phonenumber }])
            if(existingUser) {
            errors.push({ msg: 'User already exists, try changing your email or phone number' })
            reRenderSignup(req, res, next)
            } else {
                const user = await User.create(
                    req.body
                )
                req.flash(
                    'success_msg',
                    'You are now registered and can log in'
                );
                res.redirect('/users/login')
            }

        }
    } catch (error) {
        next(error)
    }
}
module.exports = {
    createUser,
    getSignup
}

In loginController.js,

  1. We use the passport.authenticate() method with the local scope (email and password) to check if the user exists;
  2. If the user doesn’t exist, we give out an error message and redirect the user to the same route;
  3. If the user exists, we log the user in using the req.logIn method, send them a verification code using the sendVerification() function, then redirect them to the verify route.

const mongoose = require('mongoose')
const passport = require('passport')
const User = require('../models/user')
const { sendVerification } = require('../config/twilioLogic')
const getLogin = async(req, res) => {
    res.render('login')
}
const authUser = async(req, res, next) => {
    try {
        passport.authenticate('local', function(err, user, info) {
            if (err) { 
                return next(err) 
            }
            if (!user) { 
                req.flash(
                    'error_msg',
                    info.message
                )
                return res.redirect('/users/login')
            }
            req.logIn(user, function(err) {
                if (err) { 
                    return next(err)
                }
                sendVerification(req, res, req.user.phonenumber)
                res.redirect('/users/verify');
            });
        })(req, res, next);
    } catch (error) {
        next(error)
    }

}
module.exports = {
    getLogin,
    authUser
}

Right now, sendVerification() doesn’t work yet. That’s because we haven’t written the function, and we need Twilio for that. Let’s install Twilio and get started.

Using Twilio Verify To Protect Routes

In order to use Twilio Verify, you:

  1. Head over to https://www.twilio.com/;
  2. Create an account with Twilio;
  3. Login to your dashboard;
  4. Select create a new project;
  5. Follow the steps to create a new project.

To install the Twilio SDK for Node.js:

npm install twilio

Next, we need to install dotenv to help us with environment variables.

npm install dotenv

We create a file in the root of our project and name it .env. This file is where we keep our credentials, so we don’t push it to git. In order to do that, we create a .gitignore file in the root of our project, and add the following lines to the file:

node_modules
.env

This tells git to ignore both the node_modules folder and the .env file.

To get our Twilio account credentials, we log in to our Twilio console and copy our ACCOUNT SID and AUTH TOKEN. Then, we click on get trial number, and Twilio generates a trial number for us; we click accept number. Now, from the console, we copy our trial number.

In .env,

TWILIO_ACCOUNT_SID = <YOUR_ACCOUNT_SID>
TWILIO_AUTH_TOKEN = <YOUR_AUTH_TOKEN>
TWILIO_PHONE_NUMBER = <YOUR_TWILIO_NUMBER>

Don’t forget to replace <YOUR_ACCOUNT_SID>, <YOUR_AUTH_TOKEN>, and <YOUR_TWILIO_NUMBER> with your actual credentials.

We create a file named twilioLogic.js in the config directory:

cd config
touch twilioLogic.js

In twilioLogic.js,

require('dotenv').config()
const twilio = require('twilio')
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN)
//create verification service
const createService = async(req, res) => {
    client.verify.services.create({ friendlyName: 'phoneVerification' })
        .then(service => console.log(service.sid))
}

createService();

In the code snippet above, we create a new verify service.

Run:

node config/twilioLogic.js

The string that gets logged to our screen is our TWILIO_VERIFICATION_SID — we copy that string.

In .env, add the line TWILIO_VERIFICATION_SID = <YOUR_TWILIO_VERIFICATION_SID>.

In config/twilioLogic.js, we remove the createService() line, since we need to create the verify service only once. Then, we add the following lines of code:

//after createService function creation

//send verification code token
const sendVerification = async(req, res, number) => {
    client.verify.services(process.env.TWILIO_VERIFICATION_SID)
        .verifications
        .create({to: `${number}`, channel: 'sms'})
        .then(verification =>
            console.log(verification.status)
        );
}

//check verification token
const checkVerification = async(req, res, number, code) => {
    return new Promise((resolve, reject) => {
        client.verify.services(process.env.TWILIO_VERIFICATION_SID)
            .verificationChecks
            .create({to: `${number}`, code: `${code}`})
            .then(verification_check => {
                resolve(verification_check.status)
            });
    })
}
module.exports = {
    sendVerification,
    checkVerification
}

sendVerification is an asynchronous function that sends a verification OTP to the provided number using the sms channel.

checkVerification is also an asynchronous function; it returns a promise that resolves with the status of the verification check, i.e. whether the OTP provided by the user is the same OTP that was sent to them.

In config/middleware.js, add the following:

//after notLoggedIn function declaration

//prevents an unverified user from accessing '/dashboard'
const isVerified = async(req, res, next) => {
    if(req.session.verified){
        return next()
    } else {
        req.flash(
            'error_msg',
            'You must be verified to do that'
        )
        res.redirect('/users/login')
    }
}

//prevent verified User from accessing '/verify'
const notVerified = async(req, res, next) => {
    if(!req.session.verified){
        return next()
    } else {
        res.redirect('back')
    }
}


module.exports = {
    //in addition to isLoggedIn and notLoggedIn
    isVerified,
    notVerified
}

We’ve created two more route-level middleware functions, which will be used later in our routes.

isVerified and notVerified check whether a user is verified. We use these middleware functions to block access to routes that should only be accessible to verified users.

cd controllers
touch verifyController.js

In verifyController.js,

const mongoose = require('mongoose')
const passport = require('passport')
const User = require('../models/user')
const { sendVerification, checkVerification } = require('../config/twilioLogic')
const loadVerify = async(req, res) => {
    res.render('verify')
}
const resendCode = async(req, res) => {
    sendVerification(req, res, req.user.phonenumber)
    res.redirect('/users/verify')
}
const verifyUser = async(req, res) => {
    //check verification code from user input
    const verifyStatus = await checkVerification(req, res, req.user.phonenumber, req.body.verifyCode)

    if(verifyStatus === 'approved') {
        req.session.verified = true
        res.redirect('/users/dashboard')
    } else {
        req.session.verified = false
        req.flash(
            'error_msg',
            'wrong verification code'
        )
        res.redirect('/users/verify')
    }

}
module.exports = {
    loadVerify,
    verifyUser,
    resendCode
}

resendCode() re-sends the verification code to the user.

verifyUser uses the checkVerification function created in the previous section. If the status is approved, we set the verified value on req.session to true.

req.session just provides a nice way to access the current session. This is done by express-session, which adds the session object to our request object.

Hence the reason I said that most application-level middleware affects our application’s state (request and response objects).
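
As a tiny illustration (not part of our app), any route handler can read from and write to that session object:

app.get('/demo', (req, res) => {
    //express-session persists this value across requests from the same client
    req.session.views = (req.session.views || 0) + 1
    res.send(`you have viewed this page ${req.session.views} times`)
})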
Building The User Routes

Basically, our application is going to have the following routes:

  1. /users/login: for user login;
  2. /users/signup: for user registration;
  3. /users/logout: for log out;
  4. /users/resend: to resend a verification code;
  5. /users/verify: for input of the verification code;
  6. /users/dashboard: the route that is protected using Twilio Verify.

cd routes
touch user.js

In routes/user.js, we require the needed packages:

const express = require('express')
const router = express.Router()
const { createUser, getSignup } = require('../controllers/signUpController')
const { authUser, getLogin } = require('../controllers/loginController')
const { loadVerify, verifyUser, resendCode } = require('../controllers/verifyController')
const { isLoggedIn, isVerified, notVerified, notLoggedIn } = require('../config/middleware')

//login route
router.route('/login')
    .all(notLoggedIn)
    .get(getLogin)
    .post(authUser)

//signup route
router.route('/signup')
    .all(notLoggedIn)
    .get(getSignup)
    .post(createUser)
//logout
router.route('/logout')
    .get(async (req, res) => {
        req.logout();
        res.redirect('/');
    })
router.route('/resend')
    .all(isLoggedIn, notVerified)
    .get(resendCode)
//verify route
router.route('/verify')
    .all(isLoggedIn, notVerified)
    .get(loadVerify)
    .post(verifyUser)
//dashboard
router.route('/dashboard')
    .all(isLoggedIn, isVerified)
    .get(async (req, res) => {
        res.render('dashboard')
    })

//export router
module.exports = router

We’re creating our routes in the piece of code above. Let’s see what’s going on here:

router.route() specifies the route. If we specify router.route('/login'), we target the login route. .all([middleware]) allows us to specify that all requests to that route should use those middleware functions.

The router.route('/login').all([middleware]).get(getController).post(postController) syntax is an alternative to the one most developers are used to.

It does the same thing as router.get('/login', [middleware], getController) and router.post('/login', [middleware], postController).

The syntax used in our code is nice because it makes our code very DRY — and it’s easier to keep up with what’s going on in our file.
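
One detail the snippets above don’t show explicitly is mounting the router. For the /users/... paths used in the controllers and views to resolve, the router has to be registered in index.js under the /users prefix. A minimal sketch (the root route handler is an assumption, added here just so the menu’s Home link has something to render):

//in index.js, before the error handler
const userRouter = require('./routes/user')
app.use('/users', userRouter)

//assumed handler for the home page
app.get('/', async (req, res) => {
    res.render('home')
})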

Now, if we run our application by typing the command below in our terminal:

npm run dev 

Our full-stack express application should be up and running.

Conclusion

Here is what we have done in this tutorial:

  1. Build out an express application;
  2. Add passport for authentication with sessions;
  3. Use Twilio Verify for route protection.

I surely hope that after this tutorial, you are ready to rethink your password-based authentication and add that extra layer of security to your application.

What you could do next:

  1. Try to explore passport, using JWT for authentication;
  2. Integrate what you’ve learned here into another application;
  3. Explore more Twilio products. They provide services that make development easier (Verify is just one of the many services).

]]>
hello@smashingmagazine.com (Alexander Godwin)
<![CDATA[Easy Fluid Typography With clamp() Using Sass Functions]]> https://smashingmagazine.com/2022/10/fluid-typography-clamp-sass-functions/ https://smashingmagazine.com/2022/10/fluid-typography-clamp-sass-functions/ Wed, 05 Oct 2022 12:00:00 GMT Fluid typography is getting a lot more popular, especially since the clamp() math function is available in every evergreen browser. But if we’re honest, it’s still a lot of mathematics to achieve this. You can use tools such as utopia.fyi, which are fantastic. But in large projects, it can get messy pretty fast. I’m a big fan of readable and maintainable code and always want to see what my code is doing at a glance. I’m sure there are many more of you like that, so instead of adding a full clamp() function inside of our code, maybe we can make this a bit more readable with Sass.

Why Should We Use Fluid Typography?

Usually, when designing for different screen sizes, we use media queries to determine the font size of our typographic elements. Although this usually gives enough control for the more conventional devices, it doesn’t cover all of the screen sizes.

By using fluid typography, we can make the typography scale more logically between all sorts of different devices.

This is now possible in all evergreen browsers because of the clamp() function in CSS. It is perfect for the job and reduces our media query writing, thus saving us a bit of file size along the way.

How Exactly Does This clamp() Function Work For Typography?

In short, the clamp function looks like this:

clamp([min-bound], [preferred-value], [max-bound]);

This takes into account three numbers: a minimum bound, a preferred value, and a maximum bound. By using rem values, we can increase the accessibility a bit, but it’s still not 100% foolproof, especially for external browser tools.

If you want a more in-depth explanation of the math, I suggest you read this post from Adrian Bece: “Modern Fluid Typography Using CSS Clamp”.

However, there is a bit of a problem. When you read those clamp functions inside your CSS, it’s still hard to see exactly what is happening. Just imagine a file full of font sizes that look like this:

clamp(1.44rem, 3.44vw + 0.75rem, 2.81rem)

But with a little help from the sass function, we can make these font sizes much more readable.

What Do We Want To Achieve With This Simple Sass Function?

In short, we want to achieve something like this: we have a minimum font size, and from the moment our viewport is larger than 400px, we want it to scale up to our biggest font size until the maximum breakpoint is reached.

The minimum and maximum font sizes are covered quite easily. If we want a minimum font size of 16px (or 1rem) and a maximum font size of 32px (or 2rem), we already have the two parts of our clamp function:

clamp(1rem, [?], 2rem)
Creating A Basic Automated Fluid Function

This is where things get tricky, and I suggest you follow the article by Adrian Bece, who gives a great in-depth explanation of the math behind this.

In short, the equation is the following:

(maximum font-size - minimum font-size) / (maximum breakpoint - minimum breakpoint)
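
To make the numbers concrete, here is the equation worked through with the values we’ll use later in the article (a 16px to 32px font size between 320px and 960px viewports):

slope = (32px - 16px) / (960px - 320px) = 0.025 (multiplied by 100, that gives us 2.5vw)
intercept = 16px - 0.025 * 320px = 8px (0.5rem)
result: clamp(1rem, 2.5vw + 0.5rem, 2rem)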

Let’s get ready to do some mathematics for this to happen in Sass, so let’s create our fluid-typography.scss function file and start by adding sass:math and the function with the values we’ll need:

@use "sass:math";

@function fluid($min-size, $max-size, $min-breakpoint, $max-breakpoint, $unit: vw) {

}

Now, let’s add the calculation for the slope inside of our function with some sass:math:

@function fluid($min-size, $max-size, $min-breakpoint, $max-breakpoint, $unit: vw) {
 $slope: math.div($max-size - $min-size, $max-breakpoint - $min-breakpoint);
}

To get a value we can work with, we’ll need to multiply our slope by 100:

$slope-to-unit: $slope * 100;

All that is left is for us to find our intercept to build the equation. We can do this with the following calculation:

$intercept: $min-size - $slope * $min-breakpoint;

And finally, return our function:

@return clamp(#{$min-size}, #{$slope-to-unit}#{$unit} + #{$intercept}, #{$max-size});

If we call the created Sass function in our SCSS, we should now get fluid typography:

h1 {
   font-size: #{fluid(1rem, 2rem, 25rem, 62.5rem)}
}

A Note About Units

In most cases, we will be using a viewport width when it comes to fluid typography, so this makes a good default. However, there are some cases, especially when using the clamp() function for vertical spacing, where you want to use a viewport height instead of width. When this is desired, we can change the outputted unit and use a minimum and maximum breakpoint for the height:

h1 {
   font-size: #{fluid(1rem, 2rem, 25rem, 62.5rem, vh)}
}
Updating The Function To Make The Calculations Feel More Natural

We got what we need, but let’s be honest, most of the time, we are implementing a design, and it doesn’t feel natural to pass our viewports as rems. So, let’s update this function to use pixels as a viewport measurement. While we’re at it, let’s update the font sizes so we can use pixel values for everything. We will still convert them to rem units since those are better for accessibility.

First, we’ll need an extra function to calculate our rem values based on a pixel input.

Note: This won’t work if you change your base rem value.

@function px-to-rem($px) {
    $rems: math.div($px, 16px) * 1rem;
    @return $rems;
}

Now we can update our fluid function to output rem values even though it gets pixels as input. This is the updated version:

@function fluid($min-size, $max-size, $min-breakpoint, $max-breakpoint, $unit: vw) {
    $slope: math.div($max-size - $min-size, $max-breakpoint - $min-breakpoint);
    $slope-to-unit: $slope * 100;
    $intercept-rem: px-to-rem($min-size - $slope * $min-breakpoint);
    $min-size-rem: px-to-rem($min-size);
    $max-size-rem: px-to-rem($max-size);
    @return clamp(#{$min-size-rem}, #{$slope-to-unit}#{$unit} + #{$intercept-rem}, #{$max-size-rem});
}

Now we can use the following input:

font-size: #{fluid(16px, 32px, 320px, 960px)}

This will result in the following:

font-size: clamp(1rem, 2.5vw + 0.5rem, 2rem);

At first glance, this seems perfect, but mostly that’s because I’ve been using very simple values. For example, when clamping to a maximum value of 31px instead of 32px, our rem values won’t be so rounded, and our output will get a bit messy.

Input:

font-size: #{fluid(16px, 31px, 320px, 960px)}

Output:

font-size: clamp(1rem, 2.34375vw + 0.53125rem, 1.9375rem);

If you’re like me and find this a bit messy as well, we could round our values a little bit to increase readability and save some bytes in our final CSS file. Also, it might get a bit tedious if we always have to pass the viewport values, so why not add some defaults to our function?

Rounding Our Values And Adding Some Defaults

Let’s start by adding a rounding function to our Sass file. This will take any input and round it to a specific amount of decimals:

@function round($number, $decimals: 0) {
    $n: 1;
    @if $decimals > 0 {
        @for $i from 1 through $decimals {
            $n: $n * 10;
        }
    }
    @return math.div(math.round($number * $n), $n);
}
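
As a quick sanity check of the rounding (the values are the ones from the messy output above):

@debug round(2.34375, 2); // 2.34
@debug round(0.53125, 2); // 0.53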

Now we can update our output values with rounded numbers. Update the function accordingly. I would suggest setting at least two decimals for the output values for the most consistent results:

@function fluid($min-size, $max-size, $min-breakpoint, $max-breakpoint, $unit: vw) {
    $slope: math.div($max-size - $min-size, $max-breakpoint - $min-breakpoint);
    $slope-to-unit: round($slope * 100, 2);
    $intercept-rem: round(px-to-rem($min-size - $slope * $min-breakpoint), 2);
    $min-size-rem: round(px-to-rem($min-size), 2);
    $max-size-rem: round(px-to-rem($max-size), 2);
    @return clamp(#{$min-size-rem}, #{$slope-to-unit}#{$unit} + #{$intercept-rem}, #{$max-size-rem});
}

Now the same example as before will give us a much cleaner result.

Input:

font-size: #{fluid(16px, 31px, 320px, 960px)};

Output:

font-size: clamp(1rem, 2.34vw + 0.53rem, 1.94rem);

Adding A Default Breakpoint

If you don’t feel like repeating yourself, you can always set a default breakpoint to your function. Try updating the function like this:

$default-min-bp: 320px;
$default-max-bp: 960px;

@function fluid($min-size, $max-size, $min-breakpoint: $default-min-bp, $max-breakpoint: $default-max-bp, $unit: vw) {
    // ...
}

Now, we don’t need to repeat these viewports all the time. We can still pass a custom breakpoint when needed, but a simple input such as:

font-size: #{fluid(16px, 31px)};

Will still result in:

font-size: clamp(1rem, 2.34vw + 0.53rem, 1.94rem);

Here is the full function:

@use 'sass:math';

$default-min-bp: 320px;
$default-max-bp: 960px;

@function round($number, $decimals: 0) {
    $n: 1;
    @if $decimals > 0 {
        @for $i from 1 through $decimals {
            $n: $n * 10;
        }
    }
    @return math.div(math.round($number * $n), $n);
}

@function px-to-rem($px) {
    $rems: math.div($px, 16px) * 1rem;
    @return $rems;
}

@function fluid($min-size, $max-size, $min-breakpoint: $default-min-bp, $max-breakpoint: $default-max-bp, $unit: vw) {
    $slope: math.div($max-size - $min-size, $max-breakpoint - $min-breakpoint);
    $slope-to-unit: round($slope * 100, 2);
    $intercept-rem: round(px-to-rem($min-size - $slope * $min-breakpoint), 2);
    $min-size-rem: round(px-to-rem($min-size), 2);
    $max-size-rem: round(px-to-rem($max-size), 2);
    @return clamp(#{$min-size-rem}, #{$slope-to-unit}#{$unit} + #{$intercept-rem}, #{$max-size-rem});
}
A Final Note: Be A Happy Clamper For All Users Out There

If you followed this little tutorial and were amazed by it, you might want to use this clamp() method for everything, but there is an important side note when it comes to accessibility.

Note: When you use vw units or limit how large text can get with clamp(), there is a chance a user may be unable to scale the text to 200% of its original size.

If that happens, it is a WCAG failure. As Adrian Bece mentioned, it’s not 100% foolproof. Adrian Roselli has written some examples on this, which you might find interesting.

We can use this method today because of the great browser support. By being smart about how we use it, I’m sure it can be a beautiful addition to your upcoming project or an upgrade to a previous one.

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[Delightful UI Animations With Shared Element Transitions API (Part 1)]]> https://smashingmagazine.com/2022/10/ui-animations-shared-element-transitions-api-part1/ https://smashingmagazine.com/2022/10/ui-animations-shared-element-transitions-api-part1/ Mon, 03 Oct 2022 13:00:00 GMT Animations are an essential part of web design and development. They can draw attention, guide users on their journey, provide satisfying and meaningful feedback to interaction, add character and flair to make the website stand out, and so much more!

Before we begin, let’s take a quick look at the following video and imagine how much CSS and JavaScript it would take to create an animation like this. Notice that the cart counter is also animated, and the animation runs right after the previous one completes.

Although this animation looks alright, it’s just a minor improvement. Currently, the API doesn’t really know that the image (shared element) that is being moved from the container to the overlay is the same element in their respective states. We need to instruct the browser to pay special attention to the image element when switching between states, so let’s do that!

Creating A Shared Element Animation

With the page-transition-tag CSS property, we can easily tell the browser to watch for the element in both the outgoing and incoming images, keep track of the element’s size and position as they change between the two states, and apply the appropriate animation.

We also need to apply contain: paint or contain: layout to the shared element. This wasn’t required for the crossfade animations, as it’s only required for elements that will receive the page-transition-tag. If you want to learn more about CSS containment, Rachel Andrew wrote a very detailed article explaining it.

.gallery__image--active {
  page-transition-tag: active-image;
}

.gallery__image {
  contain: paint;
}

Another important caveat is that page-transition-tag has to be unique, and we can apply it to only one element for the duration of the animation. This is why we apply it to the active image element right before the image is moved to the overlay and remove it when the image overlay is closed and the image is returned to its original position:

async function toggleImageView(index) {
  const image = document.getElementById(`js-gallery-image-${index}`);

  // Apply a CSS class that contains the page-transition-tag before animation starts.
  image.classList.add("gallery__image--active");

  const imageParentElement = image.parentElement;

  const moveTransition = document.createDocumentTransition();
  await moveTransition.start(() => moveImageToModal(image));

  overlayWrapper.onclick = async function () {
    const moveTransition = document.createDocumentTransition();
    await moveTransition.start(() => moveImageToGrid(imageParentElement));

    // Remove the class which contains the page-transition-tag after the animation ends.
    image.classList.remove("gallery__image--active");
  };
}

Alternatively, we could have used JavaScript to toggle the page-transition-tag property inline on the element. However, it’s better to toggle a CSS class so that we can use media queries to apply the tag conditionally:

// Applies page-transition-tag to the image.
image.style.pageTransitionTag = "active-image";

// Removes page-transition-tag from the image.
image.style.pageTransitionTag = "none";

And that’s pretty much it! Let’s take a look at our example with the shared element applied:

Customizing Animation Duration And Easing Function

We’ve created this complex transition with just a few lines of CSS and JavaScript, which turned out great. However, we expect to have more control over the animation properties like duration, easing function, delay, and so on to create even more elaborate animations or compose them for even greater effect.

The Shared Element Transitions API makes use of CSS animation properties, and we can use them to fully customize our state animation. But which CSS selectors should we use for the outgoing and incoming states that the API generates for us?

The Shared Element Transitions API introduces new pseudo-elements that are added to the DOM when its animations run. Jake Archibald explains the pseudo-element tree in his Chrome Developers article. By default (in the case of a crossfade animation), we get the following tree of pseudo-elements:

::page-transition
└─ ::page-transition-container(root)
   └─ ::page-transition-image-wrapper(root)
      ├─ ::page-transition-outgoing-image(root)
      └─ ::page-transition-incoming-image(root)

These pseudo-elements may seem a bit confusing at first, so I’m including WICG’s concise explanation for these pseudo-elements and their general purpose:

  • ::page-transition sits in a top-layer, over everything else on the page.
  • ::page-transition-outgoing-image(root) is a screenshot of the old state, and ::page-transition-incoming-image(root) is a live representation of the new state. Both render as CSS replaced content.
  • ::page-transition-container animates size and position between the two states.
  • ::page-transition-image-wrapper provides blending isolation, so the two images can correctly cross-fade.
  • ::page-transition-outgoing-image and ::page-transition-incoming-image are the visual states to cross-fade.

For example, when we apply the page-transition-tag: active-image, its pseudo-elements are added to the tree:

::page-transition
├─ ::page-transition-container(root)
│  └─ ::page-transition-image-wrapper(root)
│     ├─ ::page-transition-outgoing-image(root)
│     └─ ::page-transition-incoming-image(root)
└─ ::page-transition-container(active-image)
   └─ ::page-transition-image-wrapper(active-image)
      ├─ ::page-transition-outgoing-image(active-image)
      └─ ::page-transition-incoming-image(active-image)

In our example, we want to modify both the crossfade (root) animation and the shared element animation. We can use the universal selector * with the pseudo-element to change animation properties for all available transition elements, and we can target the pseudo-elements for a specific animation using its page-transition-tag value.

In this example, we are applying a 400ms duration and an ease-in-out easing function to all animated elements, and then overriding the easing function of the active-image transition with a custom cubic-bezier value:

::page-transition-container(*) {
  animation-duration: 400ms;
  animation-timing-function: ease-in-out;
}

::page-transition-container(active-image) {
  animation-timing-function: cubic-bezier(0.215, 0.61, 0.355, 1);
}

Accessible Animations

It’s important to be aware of accessibility requirements when working with animations. Some people prefer browsing the web with reduced motion, so we must either remove an animation or provide a more suitable alternative. This can be easily done with a widely supported prefers-reduced-motion media query.

The following code snippet turns off animations for all elements using the Shared Element Transitions API. This is a shotgun solution, and we need to ensure that the DOM updates smoothly and remains usable even with the animations turned off:

@media (prefers-reduced-motion) {
    /* Turn off all animations */
    ::page-transition-container(*),
    ::page-transition-outgoing-image(*),
    ::page-transition-incoming-image(*) {
        animation: none !important;
    }

    /* Or, better yet, create accessible alternatives for these animations  */
}

Let’s also define some custom exit and entry keyframes for our gallery images, combining blur, brightness, and opacity:

@keyframes fadeOut {
    from {
        filter: blur(0px) brightness(1) opacity(1);
    }
    to {
        filter: blur(6px) brightness(8) opacity(0);
    }
}

@keyframes fadeIn {
    from {
        filter: blur(6px) brightness(8) opacity(0);
    }
    to {
        filter: blur(0px) brightness(1) opacity(1);
    }
}

Now, all we have to do is assign the exit animation to the outgoing image pseudo-element and the entry animation to the incoming image pseudo-element. We can set the page-transition-tag directly on the HTML image element, as it’s the only element that will perform this animation:

/* We are applying contain property on all browsers (regardless of property support) to avoid differences in rendering and introducing bugs */
.gallery img {
    contain: paint;
}

@supports (page-transition-tag: supports-tag) {
    .gallery img {
        page-transition-tag: gallery-image;
    }

    ::page-transition-outgoing-image(gallery-image) {
        animation: fadeOut 0.4s ease-in both;
    }

    ::page-transition-incoming-image(gallery-image) {
        animation: fadeIn 0.4s ease-out 0.15s both;
    }
}

Even the seemingly simple crossfade animations can look cool, don’t you think? I think this particular animation fits really nicely with the dark theme we have in the example.

The same setup also covers the add-to-cart animation from the video at the beginning of the article: we give the moving dot and the cart counter their own tags (cart-dot and cart-counter) and customize their animations:

/* We are applying contain property on all browsers (regardless of property support) to avoid differences in rendering and introducing bugs */
.product__dot {
  contain: paint;
}

.shopping-bag__counter span {
  contain: paint;
}

@supports (page-transition-tag: supports-tag) {
  ::page-transition-container(cart-dot) {
    animation-duration: 0.7s;
    animation-timing-function: ease-in;
  }

  ::page-transition-outgoing-image(cart-counter) {
    animation: toDown 0.3s cubic-bezier(0.4, 0, 1, 1) both;
  }

  ::page-transition-incoming-image(cart-counter) {
    animation: fromUp 0.3s cubic-bezier(0, 0, 0.2, 1) 0.3s both;
  }
}

@keyframes toDown {
  from {
    transform: translateY(0);
    opacity: 1;
  }
  to {
    transform: translateY(4px);
    opacity: 0;
  }
}

@keyframes fromUp {
  from {
    transform: translateY(-3px);
    opacity: 0;
  }
  to {
    transform: translateY(0);
    opacity: 1;
  }
}

And that is it! It amazes me every time how elaborate these animations can turn out with so few lines of additional code, all thanks to Shared Element Transitions API. Notice that the header element with the cart icon is fixed, so it sticks to the top, and our standard animation setup works like a charm, regardless!

See the Pen Add to cart animation - completed (2) [forked] by Adrian Bece.

Conclusion

When done correctly, animations can breathe life into any project and offer a more delightful and memorable experience to users. With the upcoming Shared Element Transitions API, creating complex UI state transition animations has never been easier, but we still need to be careful how we use and implement animations.

This simplicity can give way to bad practices, such as not using animations correctly, creating slow or repetitive animations, creating needlessly complex animations, and so on. It’s important to learn best practices for animations on the web so we can effectively utilize this API to create truly amazing and accessible experiences, or even consult with a designer if we are unsure how to proceed.

In the next article, we’ll explore the API’s potential when it comes to transitions between different pages in Single Page Apps (SPAs) and the upcoming cross-document same-origin transitions, which are yet to be implemented.

I am excited to see what the dev community will build using this awesome new feature. Feel free to reach out on Twitter or LinkedIn if you have any questions or if you built something amazing using this API.

Go ahead and build something awesome!

Many thanks to Jake Archibald for reviewing this article for technical accuracy.

]]>
hello@smashingmagazine.com (Adrian Bece)
<![CDATA[October Vibes For Your Desktop (2022 Wallpapers Edition)]]> https://smashingmagazine.com/2022/09/desktop-wallpaper-calendars-october-2022/ https://smashingmagazine.com/2022/09/desktop-wallpaper-calendars-october-2022/ Fri, 30 Sep 2022 13:30:00 GMT When we look closely, inspiration can lie everywhere. In the leaves shining in the most beautiful colors in many parts of the world at this time of year, in a cup of coffee and a conversation with a friend, or when taking a walk on a windy October day. Whatever your secret to finding new inspiration might be, our monthly wallpapers series is bound to give you a little inspiration boost, too.

For this October edition, artists and designers from across the globe once again challenged their creative skills and designed wallpapers to spark your imagination and make the month a bit more colorful than it already is. Like every month since we embarked on this wallpapers adventure more than eleven years ago.

The wallpapers in this collection all come in versions with and without a calendar for October 2022 — so no matter if you want to keep an eye on your deadlines or plan to use your favorite design even after the month has ended, we’ve got you covered. Speaking of favorites: As a little bonus goodie, you’ll also find some oldies but goodies from past October editions at the end of this post. A big thank-you to everyone who shared their designs with us — this post wouldn’t exist without you!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.
Dreamy Autumn Girl

“Our designers were inspired by the coziness of autumn and the mood that it evokes — the only desire that appears is to put on a warm cozy sweater, take a cup of warm tea, and just enjoy the view outside the window. If you want more free calendars on other themes, check out our listicle.” — Designed by MasterBundles from Ukraine.

Spooky Season

“Trick or treating, Tim Burton movies, Edgar Allan Poe poems — once these terms rise up to the top of Google searches, we know that the spooky season is here. We witch you a happy Halloween!” — Designed by PopArt Studio from Serbia.

Boo!

Designed by Mad Fish Digital from Portland, OR.

Fall Colors

“Fall is about orange, brown, and earthy colors. People still enjoy walking through the parks, even if it’s a little bit colder, just to take in the fall palette of colors.” — Designed by Andrew from the United States.

King Of The Pirates

Designed by Ricardo Gimenes from Sweden.

Tarzan In The Jungle

“We start this October with Tarzan in his jungle. Luckily Chita helps us!” — Designed by Veronica Valenzuela from Spain.

Happy Halloween

Designed by Ricardo Gimenes from Sweden.

Design Your Thinking

“Thinking helps us challenge our own assumptions, discover new things about ourselves and our perspective, stay mentally sharp, and even be more optimistic. Using divergent thinking strategies can help you examine a problem from every angle and identify the true root of the issue. Deep thinking allows you to try on perspectives that you may not have considered before.” — Designed by Hitesh Puri from Delhi, India.

Welcome Maa Durga!

“Welcome the power — Shakti. Welcome the love. Welcome her blessings. Welcome Maa Durga!” — Designed by Rahul Bhattacharya from India.

Old Tree

“No surprise, with October, Halloween time is back. In the north, days are becoming shorter. The night atmosphere takes place and a slightly scary feeling surrounds everything. It’s not only a matter of death. I had taken a picture of this old tree. Who knows if there is really no one in there?” — Designed by Philippe Brouard from France.

Oldies But Goodies

Hidden in our wallpapers archives, we rediscovered some almost-forgotten treasures from past October editions. May we present… (Please note that these designs don’t come with a calendar.)

Autumn Vibes

“Autumn has come, the time of long walks in the rain, weekends spent with loved ones, with hot drinks, and a lot of tenderness. Enjoy.” — Designed by LibraFire from Serbia.

The Night Drive

Designed by Vlad Gerasimov from Georgia.

The Return Of The Living Dead

Designed by Ricardo Gimenes from Sweden.

Goddess Makosh

“At the end of the kolodar, as everything begins to ripen, the village sets out to harvesting. Together with the farmers goes Makosh, the Goddess of fields and crops, ensuring a prosperous harvest. What she gave her life and health for all year round is now mature and rich, thus, as a sign of gratitude, the girls bring her bread and wine. The beautiful game of the goddess makes the hard harvest easier, while the song of the farmer permeates the field.” — Designed by PopArt Studio from Serbia.

Bird Migration Portal

“October is a significant month for me because it is when my favorite type of bird travels south. For that reason I have chosen to write about the swallow. When I was young, I had a bird’s nest not so far from my room window. I watched the birds almost every day; because those swallows always left their nests in October. As a child, I dreamt that they all flew together to a nicer place, where they were not so cold.” — Designed by Eline Claeys from Belgium.

Game Night And Hot Chocolate

“To me, October is all about cozy evenings with hot chocolate, freshly baked cookies, and a game night with friends or family.” — Designed by Lieselot Geirnaert from Belgium.

Magical October

“‘I’m so glad I live in a world where there are Octobers.’ (L. M. Montgomery, Anne of Green Gables)” — Designed by Lívi Lénárt from Hungary.

Hello Autumn

“Did you know that squirrels don’t just eat nuts? They really like to eat fruit, too. Since apples are the seasonal fruit of October, I decided to combine both things into a beautiful image.” — Designed by Erin Troch from Belgium.

First Scarf And The Beach

“When I was little, my parents always took me and my sister for a walk at the beach in Nieuwpoort, we didn't really do those beach walks in the summer but always when the sky started to turn grey and the days became colder. My sister and I always took out our warmest scarfs and played in the sand while my parents walked behind us. I really loved those Saturday or Sunday mornings where we were all together. I think October (when it’s not raining) is the perfect month to go to the beach for ‘uitwaaien’ (to blow out), to walk in the wind and take a break and clear your head, relieve the stress or forget one’s problems.” — Designed by Gwen Bogaert from Belgium.

Haunted House

“Love all the Halloween costumes and decorations!” — Designed by Tazi from Australia.

Autumn Gate

“The days are colder, but the colors are warmer, and with every step we go further, new earthly architecture reveals itself, making the best of winters’ dawn.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Ghostbusters

Designed by Ricardo Gimenes from Sweden.

Spooky Town

Designed by Xenia Latii from Germany.

Strange October Journey

“October makes the leaves fall to cover the land with lovely auburn colors and brings out all types of weird with them.” — Designed by Mi Ni Studio from Serbia.

Autumn Deer

Designed by Amy Hamilton from Canada.

Dope Code

“October is the month when the weather in Poland starts to get colder, and it gets very rainy, too. You can’t always spend your free time outside, so it’s the perfect opportunity to get some hot coffee and work on your next cool web project!” — Designed by Robert Brodziak from Poland.

Tea And Cookies

“As it gets colder outside, all I want to do is stay inside with a big pot of tea, eat cookies and read or watch a movie, wrapped in a blanket. Is it just me?” — Designed by Miruna Sfia from Romania.

Discovering The Universe!

“Autumn is the best moment for discovering the universe. I am looking for a new galaxy or maybe… a UFO!” — Designed by Verónica Valenzuela from Spain.

Transitions

“To me, October is a transitional month. We gradually slide from summer to autumn. That’s why I chose to use a lot of gradients. I also wanted to work with simple shapes, because I think of October as the ‘back to nature/back to basics month’.” — Designed by Jelle Denturck from Belgium.

A Very Pug-o-ween

“The best part of October is undoubtedly Halloween. And the best part of Halloween is dog owners who never pass up an o-paw-tunity to dress up their pups as something a-dog-able. Why design pugs specifically in costumes? Because no matter how you look at it, pugs are cute in whatever costume you put them in for trick or treating. There’s something about their wrinkly snorting snoots that makes us giggle, and we hope our backgrounds make you smile all month. Happy Pug-o-ween from the punsters at Trillion!” — Designed by Trillion from Summit, NJ.

Whoops

“A vector illustration of a dragon tipping over a wheelbarrow of pumpkins in a field with an illustrative fox character.” — Designed by Cerberus Creative from the United States.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Unconscious Biases That Get In The Way Of Inclusive Design]]> https://smashingmagazine.com/2022/09/unconscious-biases-inclusive-design/ https://smashingmagazine.com/2022/09/unconscious-biases-inclusive-design/ Thu, 29 Sep 2022 21:00:00 GMT As designers, we want to design optimal experiences for the diverse range of people a product will serve. To achieve this, we take steps in our research and design decisions to minimize the risk of alienating product-relevant social identities, including but not limited to disability, race/ethnicity, gender, skin color, age, sexual orientation, and language.

According to psychologists, we all have unconscious biases. So, designs are often biased, just like we are. This article is for anyone involved in the product design and development process — writers, researchers, designers, developers, testers, managers, and stakeholders. We’ll explore how our biases impact design outcomes and what we can do to design more inclusive experiences.

Once we recognize our unconscious biases, we can take steps to reduce their influence on our decision-making, both as individuals and as collective development and design teams. In this article, we will discuss six unconscious biases that commonly result in delivering user experiences that fall short of being inclusive.

The six most common unconscious biases we’ll cover are:

  • Confirmation bias,
  • Optimism bias,
  • Omission bias,
  • False consensus bias,
  • Perceptual bias (stereotyping),
  • Status quo bias.

Confirmation Bias

This is probably one of the most well-known biases, yet we tend to underestimate how much it impacts our own behavior. Confirmation bias is the tendency to unconsciously look for and give more weight to data, feedback, and users’ behavior that affirms our existing assumptions.

What Is The Impact?

When we approach our work with a confirming and validating mindset, we are more likely to skew our research plan and ignore or minimize any findings that contradict our beliefs. These flaws undermine the purpose of doing research — the goal of inclusive design — and can result in building the wrong thing or the right thing the wrong way. It can also create overconfidence in our assumptions and incline us not to conduct any research at all.

Abercrombie & Fitch dominated the teen clothing market in the 1990s and early 2000s, promoting a very exclusive, all-American, cool-kid image. In the early 2010s, when consumer preferences shifted, the company failed to listen to consumers and maintain its exclusive brand image. After three years of declining sales and pressure from investors, CEO Mike Jefferies resigned. The new CEO, Fran Horowitz, rebranded the company saying, “We are a much more inclusive company, we are closer to the customer, we’re responding to the customer wants and not what we want them to want.”

What Can We Do?

  • Be curious.
    Approach conversations with users with a curiosity mindset and ask non-leading and open-ended questions. Having someone else take notes can serve as an accountability partner as you may hear things differently and can discuss them to clear up discrepancies. And, as much as possible, document exact quotes instead of inferences.
  • Be responsive.
    View each design idea as a hypothesis with a willingness to change direction in response to research findings. Until we conduct primary research with users, our design concepts are merely our best guess based on our own experiences and limited knowledge about our users. We start with that hypothesis as a prototype, then test it with a diverse cross-section of our audience before coding. As quoted by Renee Reid at a UX Research Conference, we should “investigate not validate” our design concepts.

Optimism Bias

While optimism has been linked to many health benefits, optimism bias can be detrimental. Our tendency to minimize the potential of negative outcomes and underestimate risks when it comes to our own actions is referred to as optimism bias. Teams will optimistically think that overlooking socially responsible design will not adversely affect our users’ experience or the bottom line.

What Is The Impact?

As a result of optimistic bias, we may skip user research, ignore accessibility, disregard inclusive language, and launch products that don’t account for the diverse people who use the product.

It turns out that people want and expect products to be designed inclusively. A 2021 survey found that 65% of consumers worldwide purchase from brands that promote diversity and inclusion. And a study by Microsoft found that 49% of Gen-Z consumers in the US stopped purchasing from a brand that did not represent their values.

What Can We Do?

  • Recognize the powerful influence of negativity bias for those on the receiving end of our optimistic bias.
    Psychologists’ research has consistently affirmed that people expect to have good experiences and are more unhappy about bad experiences than they are happy about good ones. So, one bad interaction has a much greater impact on our users’ perceptions of their experiences than multiple positive interactions.
  • Prioritize impact over output.
    Nobel Prize-winning psychologist Daniel Kahneman suggests running a project premortem. He has extensively researched optimism bias and ways to reduce its influence on our decision-making. Premortem is a loss aversion technique that encourages us to brainstorm potential oversights and identify preventive measures early in our processes.
Omission Bias

Similar to optimism bias, omission bias pertains to our expectations of outcomes. Omission bias occurs when we judge harmful outcomes worse when caused by action than when caused by inaction. This bias can lead us to believe that intentionally deceptive design is a greater offense than failing to implement inclusive design practices.

What Is The Impact?

When we allow our omission bias to prevail, we feel reassured by an illusion of innocence. However, delivering products to market without considering diverse user expectations has the risk of creating harmful user experiences.

This bias is a possible catalyst for skipping user research or leaving inclusive UX work in the product backlog. Some companies profit off this bias by providing accessibility overlays as a post-production solution. These third-party tools attempt to detect accessibility issues in the code and fix the problem for users on the website in real time. Unfortunately, accessibility overlays have been widely documented as problematic and can worsen access.

What Can We Do?

  • Remember that inaction is not without consequence and no less damaging to our users than deliberately harmful actions.
    When our product or service creates barriers or exclusion for our users, whether intentional or unintentional, the effect of the experience feels the same.
  • Plan for inclusive research and design by factoring the necessary time, people, and money into the product roadmap.
    Studies have found that the business cost of going back to fix a design can be 100 times as high as it would have been if the work was addressed during the development stage.

False Consensus Bias

The next two biases, false consensus and perceptual biases, are influential in how we think about others. False consensus bias is when we assume that other people think and behave the same as we do. Jakob Nielsen is known for the clever phrase, “you are not the user,” which is derived from this bias. Our false consensus bias can lead us to think, “well, I’m a user too,” when making design decisions. However, we all have a varied mix of identities — our age, ethnicity, abilities, gender, and so on — which are attributed to our unique needs and expectations.

What Is The Impact?

We design for a broad range of people, most of whom are not like us.

That is illuminated when we consider intersectionality. Law professor Kimberlé Crenshaw coined the term intersectionality “to describe how race, class, gender, and other individual characteristics ‘intersect’ with one another and overlap.”

In early 2022, Olay’s senior design strategist Kate Patterson redesigned the packaging for their facial moisturizer. The new Easy Open Lid not only has side handles allowing a better grip for dexterity challenges but also has the product type in Braille and larger lettering with higher contrast for vision impairments. The product was released as a limited edition, and the company has a feedback form on its website to get feedback from users to make improvements for a second edition.

What Can We Do?

  • Avoid relying on personal preferences.
    Start with conventions and design guidelines, but don’t rely on them solely. Design guidelines are generic, so they don’t, and can’t, address all contextual situations. Optimal user experiences are the result of context-sensitive design.
  • Let go of the notion of the average user and engage with users in interviews, accessibility and usability testing, and other empirical research methods.
    Conducting primary user research is immensely insightful as it allows us to learn how intersecting identities can vary users’ expectations, behavior, and contextual use cases.
Perceptual Bias (Stereotyping)

Continuing with biases that distort how we think of others, perceptual biases include the halo effect, recency bias, primacy effect, and stereotyping. Regarding biases that get in the way of inclusive design, we’ll address stereotyping, which is when we have overgeneralized beliefs about people based on group attributes.

What Is The Impact?

How we gather and interpret research can be greatly influenced by stereotyping. Surveys, for example, typically don’t reveal a person’s motivations or intent. This leaves room for our speculations of “why” when interpreting survey responses, which creates many opportunities for relying on stereotyping.

The Mr. Clean Magic Eraser Sponge advertisement, “This Mother’s Day, get back to the job that really matters,” reinforced antiquated gender roles. A Dolce & Gabbana campaign included an Asian woman wearing one of their dresses and trying to use chopsticks to eat Italian food while a voiceover mocked her and made sexual innuendos. Designing based on stereotypes and tropes is likely to insult and alienate some of our user groups.

What Can We Do?

  • Include a broad spectrum of our users in our participant pool.
    The more we understand the needs and expectations of our users that are different from us (different ages, ethnicities, abilities, gender identities, and so on), the more we reduce the need to depend on generalizations and offensive constructs about various social identities.
  • Conduct assumption mapping, which is an activity of documenting our questions and assumptions about users and noting the degree of certainty and risk for each.
    Assumption mapping can help us uncover how much we’re relying on oversimplified generalizations about people, reveal which segments of the audience our design might not account for, and help us prioritize areas to focus our research on.

Status Quo Bias

Lastly, let’s look at a decision-making bias. Status quo bias refers to our tendency to prefer how things are and to resist change. We perceive current practices as ideal and negatively view what’s unfamiliar, even when changes would result in better outcomes.

What Is The Impact?

When we rely on default thinking and societal norms, we run the risk of perpetuating systemic social biases and alienating segments of our users. Failing to get input and critique from people across a diverse spectrum can result in missed opportunities to design broadly-valued solutions.

It took Johnson & Johnson 100 years to redesign their skin-tone colored adhesive bandages. The product was released in 1920 with a Eurocentric design that was optimal for light skin tones, and it wasn’t until 2020 that Band-Aid added more shades “to embrace the beauty of diverse skin.”

What Can We Do?

  • Leaders can build non-homogenous teams and foster a workplace where it’s safe to question the status quo.
    Having team members with diverse lived experiences creates a wealth of variance and opportunities for divergent perspectives. Teams that are encouraged to challenge the default and propose alternatives have significant potential to minimize the risks of embedding biases in our UX processes.
  • As individuals, we can employ our System 2 thinking.
    Psychologist Daniel Kahneman popularized two modes of thinking in his book Thinking, Fast and Slow to encourage us to move beyond our visceral thoughts to slower, effortful, and analytical thinking. In this mode, we can question our default System 1 thinking, which is automatic and impulsive, awaken our curiosity about novel ways to approach design challenges, and find opportunities to learn about and engage with people outside our typical circles.
Summary

Designing for many means designing for demographic groups whose needs and expectations differ from ours. Our unconscious biases typically keep us in our comfort zones and stem from systemic social constructs that have historically been an anti-pattern for inclusivity.

Unconscious biases, when unrecognized and unchallenged, seep into our design practices and can insidiously pollute our research and design decisions.

We start to counter our unconscious biases by acknowledging that we have biases. You do. We all do. Next, we can take steps to be more mindful of how our designs impact the people who interact with our products so that we design inclusive experiences.

Additional Resources

  • Learning to Recognize Exclusion
    An article by Lesley-Ann Noel and Marcelo Paiva on what it means to exclude, why we do it, and tips for moving out of our comfort zones.
  • Biased by Design
    A website with information about other biases that influence the design and links to additional resources.
  • Coded Bias
    A Netflix documentary investigating bias in algorithms after M.I.T. Media Lab researcher Joy Buolamwini uncovered flaws in facial recognition technology.
  • Thinking, Fast and Slow
    A book by Daniel Kahneman about how thinking more slowly can help us reduce biased decision-making.
  • Design for Cognitive Bias
    A book by David Dylan Thomas that discusses how biases influence decision-making and techniques for noticing our own biases so we can design more consciously.
]]>
hello@smashingmagazine.com (Trina Moore Pervall)
<![CDATA[Building Your Security Strategy (Case Study)]]> https://smashingmagazine.com/2022/09/ten-principles-consider-building-security-strategy-case-study/ https://smashingmagazine.com/2022/09/ten-principles-consider-building-security-strategy-case-study/ Thu, 29 Sep 2022 12:00:00 GMT This article is a sponsored by Wix

What should you focus on when designing your security strategy? This question becomes more and more tricky as your organization grows and matures. At an initial stage, you might be able to make do with a periodic penetration test. But you will soon find that as you scale up to hundreds and thousands of services, some of the procedures have to change. The focus shifts from project-based assessments to building and maintaining a lasting mindset and framework with security at the core, so you can minimize risk across your environment.

In this article, we’ll share some guiding principles and ideas for incorporating security by design into your own development process, taken from our work at Wix serving 220M+ users.

First And Foremost: Security By Design

Also known as security by default, security by design (SbD) is a concept in which we aim to “limit the opportunities” for making security-related mistakes. Consider a case where a developer builds a service to query a database. If the developer is required (or allowed) to build queries “from scratch” by writing SQL directly into their code, they can very well end up introducing SQL Injection (SQLi) vulnerabilities. However, with a security by default approach, the developer can get a safe Object-Relational Mapping (ORM), letting the code focus on logic while the DB interactions are left to the ORM libraries. By ensuring the ORM library is safe once, we are able to block SQLi everywhere (or at least everywhere the library is used). This approach might restrict some developer liberties, but except for specific cases, the security benefits tend to outweigh the cons.
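To make the difference concrete, here is a minimal sketch (not Wix’s actual stack; it assumes a Node.js service using the Knex query builder, and the users table is hypothetical). The unsafe variant splices user input into the SQL string itself, while the builder binds it as a parameter:

const knex = require("knex")({
  client: "pg",
  connection: process.env.DATABASE_URL,
});

// Unsafe: attacker-controlled `name` becomes part of the SQL itself (SQLi).
// const rows = await knex.raw(`SELECT * FROM users WHERE name = '${name}'`);

// Safe by default: the builder binds `name` as a parameter, never as SQL.
function findUserByName(name) {
  return knex("users").where({ name }).first();
}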

That previous example is rather well known, and if you use a mature application development framework, you’re probably using an ORM anyway. But the same logic can be applied to other types of vulnerabilities and issues. Input validation? Do this by default using your app framework, according to the declared var type. What about Cross-Site Resource Forgery (CSRF)? Solve it for everyone in your API gateway server. Authorization confusion? Create a central identity resolution logic to be consumed by all other services.
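As an illustration of centralizing such a control, here is a hedged Express sketch of a gateway-level CSRF check based on Origin-header validation (one common defense; token-based schemes are equally valid). The allowlist and port are hypothetical:

const express = require("express");
const app = express();

// Hypothetical allowlist of origins our own frontends are served from.
const TRUSTED_ORIGINS = new Set(["https://www.example.com"]);

// One central check in the gateway: every state-changing request must carry
// a trusted Origin header, so no downstream service has to remember to
// implement CSRF protection on its own.
app.use((req, res, next) => {
  const stateChanging = !["GET", "HEAD", "OPTIONS"].includes(req.method);
  if (stateChanging && !TRUSTED_ORIGINS.has(req.get("Origin") || "")) {
    return res.status(403).json({ error: "Cross-site request rejected" });
  }
  next();
});

app.listen(3000);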

By following this methodology, we’re able to allow our developers the freedom to move quickly and efficiently, without needing to introduce security as a “blocker” in later stages before new features go live.

1. Establish Secure Defaults For Your Services

Take the time to ensure that your services are served by default with secure settings. For example, users should not need to actively choose to make their data private. Instead, the default should be “private” and users can have the option to make it public if they choose to. This of course depends on product decisions as well, but the concept stands. Let’s look at an example. When you build a site on our platform, you can easily set up a content “Collection”, which is like a simplified database. By default, editing permissions to this collection are restricted to admin users only, and the user has the option to expose it to other user types using the Roles & Permissions feature. The default is secure.

2. Apply The Principle Of Least Privilege (PoLP)

Put simply, users shouldn’t have permission for stuff they don’t need. A permission granted is a permission used, or if not needed, then abused. Let’s look at a simple example: When using Wix, which is a secure system with support for multiple users, a website owner can use Roles & Permissions to add a contributor, say with a Blog Writer role, to their site. As derived from the name, you would expect this user to have permissions to write blogs. However, would this new contributor have permissions, for example, to edit payments? When you put it like this, it sounds almost ridiculous. But the “least permission” concept (PoLP) is often misunderstood. You need to apply it not only to users, but also to employees, and even to systems. This way even if you are vulnerable to something like CSRF and your employees are exploited, the damage is still limited.

In a rich microservice environment, thinking about least permission might become challenging. Which permission should Microservice A have? Should it be allowed to access Microservice B? The most straightforward way to tackle this question is simply starting with zero permissions. A newly launched service should have access to nothing. The developer, then, would have an easy, simple way to extend their service permission, according to need. For example, a “self service” solution for allowing developers to grant permissions for services to access non-sensitive databases makes sense. In such an environment, you can also look at sensitive permissions (say for a database holding PII data), and require a further control for granting permissions to them (for example, an OK from the data owner).
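A minimal sketch of what “starting with zero permissions” can look like in code (the service names and grants are hypothetical): a service has exactly the grants explicitly recorded for it, and everything else is denied.

// Default-deny permission registry: the absence of a grant means no access.
const grants = new Map([
  ["invoice-service", new Set(["read:orders"])], // added via a self-service flow
]);

function isAllowed(serviceId, permission) {
  return grants.get(serviceId)?.has(permission) ?? false;
}

console.log(isAllowed("invoice-service", "read:orders"));    // true
console.log(isAllowed("invoice-service", "write:payments")); // false
console.log(isAllowed("brand-new-service", "read:orders"));  // false: zero by default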

3. Embrace The Principle Of Defense In Depth (DiD)

As beautifully put by a colleague, security is like an onion — it’s made of many layers built on top of layers, and it can make you cry. In other words, when building a secure system, you need to account for different types of risk and threats, and subsequently you need to build different types of protections on top of others.

Again, let’s look at a simple example of a login system. The first security gateway you can think of in this context is the “user-password” combination. But as we all know, passwords can leak, so one should always add a second layer of defense: two-factor authentication (2FA), also known as multi-factor authentication (MFA). Wix encourages users to enable this feature for their account security. And by now, MFA is pretty standard — but is it enough? Can we assume that someone who successfully logged into the system is now trusted?

Unfortunately, not always. We looked until now at one type of attack (password stealing), and we provided another layer to protect against it, but there are certainly other attacks. For example, if we don’t protect ourselves, a Cross Site Scripting (XSS) attack can be used to hijack a user’s sessions (for example by stealing the cookies), which is as good as a login bypass. So we need to consider added layers of defense: cookie flags to prevent JS access (HTTP only), session timeouts, binding a session to a device, etc. And of course, we need to make sure we don’t expose XSS issues.
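As a concrete illustration of those cookie-level layers, here is a hedged Express sketch (the route, cookie name, and timeout are made up for the example):

const express = require("express");
const app = express();

app.post("/login", (req, res) => {
  // ...after verifying credentials and MFA, issue a hardened session cookie:
  res.cookie("session", "opaque-session-id", {
    httpOnly: true,         // not readable from JavaScript, blunting XSS cookie theft
    secure: true,           // only ever sent over HTTPS
    sameSite: "lax",        // withheld on most cross-site requests (a CSRF layer)
    maxAge: 30 * 60 * 1000, // hard session timeout: 30 minutes
  });
  res.sendStatus(204);
});

app.listen(3000);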

You can look at this concept in another way. When writing a feature, you should almost protect it “from scratch”, thinking all defenses might have been broken. That doesn’t mean writing every line of code again; it just means being aware that certain assumptions cannot be made. For example, you can’t assume that just because your service does not have an externally reachable endpoint, it has never been accessed by malicious entities. An attacker exploiting Server-Side Request Forgery (SSRF) issues can hit your endpoint any minute. Is it protected against such issues?

At Wix, we assume a “breach mindset” at all times, meaning each developer assumes the controls leading up to the application they’re working on have already been breached. That means checking permissions, input validations and even logic — we never assume previous services are sensible.

4. Minimize Attack Surface Area

What’s the safest way to secure a server? Disconnect it from the electricity socket. Jokes aside, while we don’t want to turn our services off just to ensure they’re not abused, we certainly don’t want to leave them on if they serve no real function. If something is not needed or being used, it should not be online.

The most straightforward way to understand this concept is by looking at non-production environments (QA, staging, etc). While such environments are often needed internally during the development process, they have no business being exposed such that external users can access them. Being publicly available means they can serve as a target for an attack, as they are not “production ready” services (after all, they are in the testing phase). The probability for them to become vulnerable increases.

But this concept doesn’t apply only to whole environments. If your code contains unused or unnecessary methods, remove them before pushing to production. Otherwise, they become pains instead of assets.

5. Fail Securely

If something fails, it should do so securely. If that’s confusing, you’re not alone. Many developers overlook this principle or misunderstand it. Imagining every possible edge case on which your logic can fail is almost impossible, but it is something you need to plan for, and more often than not it’s another question of adopting the right mindset. If you assume there will be failures, then you’re more likely to include all possibilities.

For instance, a security check should have two possible outcomes: allow or deny. The credentials inputted are either correct, or they’re not. But what if the check fails entirely, say, because of an unexpected outage of electricity in the database server? Your code keeps running, but you get a “DB not found” error. Did you consider that?

In this particular instance, the answer is probably “yes”, you thought of it, either because your framework forced you to consider it (such as Java’s “checked exceptions”) or simply because it actually happens often enough that your code failed in the past. But what if it is something more subtle? What if, for example, your SQL query fails due to non-unicode characters that suddenly appeared as input? What if your S3 bucket suddenly had its permissions changed and now you can’t read from it anymore? What if the DNS server you’re using is down and suddenly instead of an NPM repo you’re hitting a compromised host?

These examples might seem ludicrous to you, and it would be even more ludicrous to expect you to write code to handle them. What you should do, however, is expect things to behave in an unexpected manner, and make sure that if such things occur, you “fail securely”, like by just returning an error and stopping the execution flow.

It would make no sense to continue the login flow if the DB server is down, and it will make no sense to continue the media processing if you can’t store that image on that bucket. Break the flow, log the error, alert to the relevant channel — but don’t drop your security controls in the process.
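In code, “failing securely” often boils down to treating any unexpected failure of a security check as a denial. A minimal sketch (the data-access layer is hypothetical):

// An authorization check with exactly two outcomes: allow or deny.
// Any unexpected failure collapses into deny.
async function canEditOrder(userId, orderId, db) {
  try {
    const order = await db.orders.findById(orderId); // hypothetical data layer
    return order !== null && order.ownerId === userId;
  } catch (err) {
    // DB outage? Malformed input? Unknown, so deny, log, and alert
    // instead of letting the execution flow continue unchecked.
    console.error("Authorization check failed, denying by default", err);
    return false;
  }
}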

6. Manage Your Third-Party Risk

Most modern applications use third-party services and/or import third-party code to enhance their offering. But how can we ensure secure integrations with third parties? We think about this principle a lot at Wix, as we offer third-party integrations to our user sites in many ways. For example, users can install apps from our App Market or add third-party software to their websites using our full-stack development platform called Velo.

Third-party code can be infiltrated, just like your own, but has the added complication that you have no control over it. NPM libraries, for instance, are some of the most used in the world. But recently a few well-known cases involved them being compromised, leaving every site that used them exposed.

The most important thing is to be aware that this might happen. Keep track of all your open-source code in a software bill of materials (SBOM), and create processes for regularly reviewing it. If you can, run regular checks of all your third-party suppliers’ security practices. For example, at Wix we run a strict Third-Party Risk Management Program (TPRM) to vet third parties and assess security while working with them.

7. Remember Separation Of Duties (SoD)

Separation of duties really boils down to making sure tasks are split into (and limited to) appropriate user types, though this principle could also apply to subsystems.

The administrator of an eCommerce site, for example, should not be able to make purchases. And a user of the same site should not be promoted to administrator, as this might allow them to alter orders or give themselves free products.

The thinking behind this principle is simply that if one person is compromised or acting fraudulently, their actions shouldn’t compromise the whole environment.

8. Avoid Security By Obscurity

If you write a backdoor, it will be found. If you hard-code secrets in your code, they will be exposed. It’s not a question of “if”, but “when” — there is no way to keep things hidden forever. Hackers spend time and effort on building reconnaissance tools to target exactly these types of vulnerabilities (many such tools can be found with a quick Google search), and more often than not when you point at a target, you get a result.

The bottom line is simple: you cannot rely on hidden features to remain hidden. Instead, there should be enough security controls in place to keep your application safe when these features are found.

For example, it is common to generate access links based on randomly generated UUIDs. Consider a scenario where an anonymous user makes a purchase on your store, and you want to serve the invoice online. You cannot protect the invoice with permissions, as the user is anonymous, but it is sensitive data. So you would generate a “secret” UUID, build it into the link, and treat the “knowledge” of the link as “proof” of identity ownership.

But how long can this assumption remain true? Over time, such links (with UUID in them) might get indexed by search engines. They might end up on the Wayback Machine. They might be collected by a third-party service running on the end user’s browser (say a BI extension of some sort), then collected into some online DB, and one day accessed by a third party.

Adding a short time limit to such links (based on UUIDs) is a good compromise. We don’t rely on the link staying secret for long (so there’s no security by obscurity), just for a few hours. When the link gets discovered, it’s already no longer valid.
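A sketch of such a time-limited link, using an HMAC signature so the expiry can’t be tampered with (the URL, secret source, and two-hour lifetime are illustrative):

const crypto = require("crypto");

const SECRET = process.env.LINK_SECRET; // hypothetical signing secret

// Issue an invoice link that stops working after `ttlMs` (default: 2 hours).
function signInvoiceLink(invoiceId, ttlMs = 2 * 60 * 60 * 1000) {
  const expires = Date.now() + ttlMs;
  const sig = crypto
    .createHmac("sha256", SECRET)
    .update(`${invoiceId}.${expires}`)
    .digest("hex");
  return `https://example.com/invoices/${invoiceId}?expires=${expires}&sig=${sig}`;
}

// Reject the link if it has expired or if the signature does not match.
function verifyInvoiceLink(invoiceId, expires, sig) {
  if (Date.now() >= Number(expires)) return false; // leaked links die quickly
  const expected = crypto
    .createHmac("sha256", SECRET)
    .update(`${invoiceId}.${expires}`)
    .digest("hex");
  if (sig.length !== expected.length) return false; // timingSafeEqual needs equal lengths
  return crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}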

9. Keep Security Simple

Also known as KISS, or keep it simple, stupid. As developers, we need to keep users in mind at all times. If a service is too complicated to use, then its users might not know how to use it, and bypass it or use it incorrectly.

Take 2FA for example. We all know it’s more secure, but the process also involves a degree of manual setup. Making it as simple as possible to follow means more users will follow it, and not compromise their own accounts with weaker protections.

Adding new security functionality always makes a system more complex, so it can have an unintended negative impact on security. So keep it simple. Always weigh the value of new functionality against its complexity, and keep security architecture as simple as possible.

10. Fix Security Issues, Then Check Your Work

Thoroughly fixing security issues is important for all aspects of a business. At Wix, for example, we partner with ethical hackers through our Bug Bounty Program to help us find issues and vulnerabilities in our system, and practice fixing them. We also employ internal security and penetration testing, and the security team is constantly reviewing the production services, looking for potential bugs.

But fixing a bug is just the start. You also need to understand the vulnerability thoroughly before you fix it, and often get whoever spotted it to check your fix too. And then, when a bug is fixed, carry out regression tests to make sure it’s not reintroduced by code rollbacks. This process is crucial to make sure you’re actually advancing your application security posture.

Conclusion

By implementing security by design at Wix, we were able to build a robust and secure platform — and we hope that sharing our approach will help you do the same. We applied these principles not just to security features, but to all components of our system. We recommend considering this, whether you build from scratch or choose to rely on a secure platform like ours.

More importantly, following security by design instilled a security mindset into our company as a whole, from developers to marketing and sales. Cybersecurity should be top priority in everyone’s minds, as attacks increase and hackers find new ways of accessing sensitive information.

Taking a defensive position right from the start will put you at an advantage. Because when thinking about cybersecurity, it’s not if a breach happens. It’s when.

  • For more information on security by design, visit the Open Web Application Security Project. This non-profit community is dedicated to securing the web, and produces a range of free open-source tools, training and other resources to help improve software security.
  • To learn more about secure practices at Wix, check out wix.com/trust-center/security.
]]>
hello@smashingmagazine.com (Wix Security Team)
<![CDATA[Phone Numbers For Web Designers]]> https://smashingmagazine.com/2022/09/phone-numbers-web-designers/ https://smashingmagazine.com/2022/09/phone-numbers-web-designers/ Wed, 28 Sep 2022 11:00:00 GMT It is exciting how websites are being optimized. Localization, A/B testing, and cross-domain campaign tracking contribute to your bottom line. But why stop there? The customer experience is not determined by your website alone. Take the next step and start to include your telephony in the optimization span. And it is a relatively easy step to take as you are already familiar with the mechanisms. Simply follow these seven considerations.

First Things First: The Basics

Before determining which number type to use and when and how to present them on your website, it helps to know which number types are available, to begin with:

  • Local numbers,
  • National numbers,
  • Mobile numbers,
  • Freephone numbers,
  • Premium rate numbers,
  • International freephone numbers (UIFN).

Each of these numbers can be valid to use, depending on your strategy. It is important to line up the localization, appearance (tone of voice), and other factors of your website and the phone number type you choose. And — like your website — keep testing and optimizing your choice.

Let’s dive into the details of the seven considerations to make.

Localization

A lot has been written about localization. Why it is important and how to achieve it with your website. All this attention is leading to great results. However, a website and the product are not the only points of contact with the customer and do not fully cover the customer experience domain. So, there is much to be gained here.

The localization of your website and phone number choice needs to be in sync. If your website is tailored per country, the phone number should also be country-specific. It would be weird to have a site for a specific country but not a phone number. And the beauty is that you have already determined the level of localization required for your website. You can simply match the localization needed to the available phone number types.

If your website localization is country-based, then get one of these numbers:

  • National number,
  • Freephone number,
  • Premium Rate number.

All of these are suitable for country-wide operating businesses. We’ll get back to how to choose which one fits your case best later in this article.

If your website targets specific areas smaller than a country:

Get local numbers in the same areas you are targeting with your website. It strengthens your website localization strategy, and you continue to earn trust with the local phone numbers. If you have optimized (an instance of) your website specifically for London, it only makes sense to extend that strategy and present a Local London Phone number.

There are two number types that require additional attention:

  1. A mobile phone number is technically a number that is valid country-wide. However, it has its value for a very specific type of business: mostly local operating, independent service providers.
  2. An international freephone number (officially a UIFN number) is a single number that can be activated in multiple countries. If your website strategy is explicitly to express one voice for all, this number type fits that strategy; one single international phone number that can be activated in multiple countries. And it can have its advantages in other areas as well. We’ll dive into those a bit later in this article.
Appearance

Every type of number expresses an identity. This should match the identity your target market expects from you. Again, consistency is key. Make sure to align the tone of voice and the image you are projecting with your website with the appearance of the phone number(s) you choose.

If you are trying to generate a familiar feel on your website, a local number is your best option. You are calling someone close by, your neighbor. It gives the feeling you know them and that they are trustworthy.

If you want to provide a more corporate or formal impression, a national number is your choice. Bigger companies need a lot of phone numbers, and in many cases, they have offices in different cities. National Numbers have been created to overcome the issue of local numbers being snagged away from consumers. And as stated earlier, they can be used in multiple cities, which enables a company to be reachable in multiple cities via the same phone number. Not for nothing, National phone numbers are also called corporate numbers.

Only use a mobile number if you need to exude mobility and it is fine to come across as an independent service provider, like an independent courier.

Freephone numbers are by far the most effective phone number types for sales lines and support lines for high-end services and products. If you want to welcome your callers with open arms, this is the number type to opt for, without a doubt.

If the phone call is the medium via which you provide your services, premium rate numbers can provide financial compensation for the services provided. In some cases, these numbers are also used as support lines with the goal of building a threshold for the customer to call and some compensation for the cost of the time spent. Note that this will negatively impact your customer experience. In most countries, it is not even allowed to offer a premium rate number for the support line on services under contract or products under warranty.

An international freephone number is counterproductive in localization but has other advantages. This number type has been defined by the ITU as an international alternative for the regular in-country freephone number and has the calling code +800. Having the same number available in multiple countries has its advantages: You only have to print one number on documentation to be used in multiple countries. And if you have international traveling callers, they only have to memorize one number.

Caller And Operational Cost

Each number type has its own caller and operational cost profile.

The most cost-effective numbers for both callers and you are local, national, and mobile numbers. These number types are mostly called from the caller bundle and have the lowest operational cost.

The purpose of a freephone number is to shift the caller cost from caller to operational. Therefore, the operational cost is relatively high.

A premium rate number is a payment method; therefore, caller cost is high and provides an operational source of income.

The cost model for an international freephone number is similar to the model of a normal freephone number. The cost is shifted to the operation.

Note: Since this is a globally defined phone number type, it is not regulated by the various in-country regulators to whom the caller operators have to answer.

Most fixed line operators do respect the 0-caller tariff. However, some mobile operators use this loophole to charge their customers for calls to these numbers.

Reachability

Not all number types can be called from everywhere. Obviously, you need to make sure your phone number is reachable by your target audience.

Local, national, mobile and international freephone numbers are usually internationally reachable.

Normal freephone and premium rate numbers are not. As discussed before, these numbers do have their added value for many organizations. If you use these types of numbers, it is important to make sure you get a number in every target market or at least an alternative number for your local customer who just happened to travel outside of your country.

A/B And Campaign Testing

With these guidelines, you can make educated choices and proceed with confidence. But do you stop tweaking your website at this point? No, you don’t! This is where you start with optimization via methods like A/B Testing.

So why not include the phone number options in the scope of testing? All tools are available. All you have to do is include the phone numbers as an A/B parameter. And by adding the call statistics to the test evaluation, you can get to a more educated and accurate conclusion. Now, instead of the website, you are optimizing the website-phone number combination.
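As a sketch of what this could look like on the client (the element, the variant assignment, and the numbers are hypothetical, and both numbers are deliberately of the same type):

// Tie a distinct phone number to each A/B variant so call volume and call
// outcomes can be joined with the rest of the experiment data.
const NUMBERS_BY_VARIANT = {
  A: "+31 20 123 4567",
  B: "+31 20 765 4321",
};

function renderPhoneNumber(variant) {
  const display = NUMBERS_BY_VARIANT[variant];
  const link = document.querySelector("#call-us"); // hypothetical link element
  link.href = "tel:" + display.replace(/[\s-]/g, ""); // international format, no spaces
  link.textContent = display;
}

renderPhoneNumber(Math.random() < 0.5 ? "A" : "B");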

That also brings us to the next optimization. When evaluating an ad campaign or mailing, the evaluation usually stops with the clicks. But using different phone numbers (the same type of phone numbers to keep the evaluation clean) on both legs makes it very easy to add the call and call result statistics to the evaluation, enabling you to make even more educated decisions.

Conclusion

A/B testing can be used to evaluate and tweak your phone number choices. And by using different phone numbers (of the same type), you can make your Campaign evaluations more accurate.

Website And Phone Number Integration

Online communication and telephony are often regarded as two distinct domains, but they shouldn’t be. They are both customer contact points, and each can benefit greatly from the other.

Traditionally, just the phone number of the central office was presented. Once the realization set in that localization was also relevant for phone numbers, at least a block with multiple phone numbers was shown.

At the moment (hopefully even more after this article), the phone number shown is an integral part of the localization.

Best practice, however, is taking it a step further. Whatever you do, the goal should be for your customer and you to reach it as quickly and efficiently as possible. This is valid for your website, your phone support, and both combined. The best results can be obtained when information gathered on the website is not wasted but put to the benefit of the following phone call. By simply presenting a phone number based on the information gathered, you skip the necessity of an (extensive) phone menu and have call screening in place. The image shows a chat setup, but obviously, the same result can be achieved with other setups as well.

And in many cases, that information can be used to present relevant self-service alternatives to the visitor. That could mean even higher efficiency for both your customers and you. Do note that it is essential to offer the options to the visitor — do not hide the possibility of calling. That will lead to frustration, negatively impact customer satisfaction, and cost you leads and customers.

Phone Number Presentation

The last consideration is the presentation of the phone number on your website. Obviously, the presentation depends highly on your website design, but here are a couple of pointers from the phone number perspective:

Link

Always link your phone numbers! Anything you do should contribute to making the life of your audience easier. Most devices are smart and connected, so link your phone number and enable your audience to place the call via a click.

Linking a phone number is easy with the tel: scheme in a regular HTML anchor, but what is important is always to use the international format. If you link the local format, visitors from another country will not be able to call the number. In the link, do not place spaces or dashes, just the phone number, for example, tel:+31201234567.
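For example, a linked number could look like this (reusing the placeholder number from above):

<!-- International format in the href, no spaces or dashes;
     the visible text can stay in a human-friendly local format. -->
<a href="tel:+31201234567">+31 (0)20 123 45 67</a>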

Flags

It does help to present the flag or ISO code of the country of the number presented. It confirms the localization to the caller. The caller recognizes the flag and feels confident to call the number. If it is someone from another country, at least they are aware they will call internationally. This way, you’ll prevent possible surprises for the caller afterward.

Furthermore, it gives you the opportunity to offer alternatives. If you have alternative phone numbers, it is possible to present the flag (combined with the number) in a dropdown. This way, in case the localization of the website is off, any visitor can find their relevant phone number. Note: When having alternatives, do not show all options, but show one (the one that should be relevant according to your site’s localization) and show there are alternatives. That way, you keep it simple.

Caller Tariffs

Important: When presenting a premium rate phone number, always present the caller’s cost as well.

Besides being the right thing to do, it is also obligatory in most countries. Often, it is even mandatory to present the cost with the same font type, size, and color as the phone number, to avoid any room for misinterpretation.

On the other hand, when presenting a freephone number, it is good to make that explicit as well, since you want to avoid any chance that your visitor does not recognize the number is free to call. What is important in this case is to use terminology that is understood by your audience. Other names for a “freephone number” are, for instance, a “green number” or “toll-free number”; it has many different names in many other languages. Check with your target audience before naming your number.

The other number types usually fall within everybody’s calling bundle, and there is not really a reason to state the number type. The only thing important for your audience is the country of the phone number. Those numbers are internationally callable, which could impact the caller’s cost.

Takeaway

It could help to see phone numbers like URLs. They have — on an abstract level — the same dynamics and statistics:

  • Visits vs. calls,
  • Session duration vs. call duration,
  • Conversion result vs. conversion result.

The customer journey is not limited to a website alone. Simply by combining the world of website design and telephony, far better results can be obtained for your organization. And thanks to the similarities and mutual benefits, it is an easy step to take.

]]>
hello@smashingmagazine.com (Onno Westra)
<![CDATA[Five Data-Loading Patterns To Boost Web Performance]]> https://smashingmagazine.com/2022/09/data-loading-patterns-improve-frontend-performance/ https://smashingmagazine.com/2022/09/data-loading-patterns-improve-frontend-performance/ Tue, 27 Sep 2022 14:00:00 GMT When it comes to performance, you shouldn’t be stingy. There are millions of sites, and you are in close competition with every one of those Google search query results. Research shows that users will abandon sites that take longer than three seconds to load. Three seconds is a very short amount of time. While many sites nowadays load in less than one second, there is no one size fits all solution, and the first request can either be the do or die of your application.

Modern frontend applications are getting bigger and bigger. It is no wonder that the industry is getting more concerned with optimizations. Frameworks can produce unreasonably large builds that either make or break your application. Every unnecessary bit of JavaScript code you bundle and serve will be more code the client has to load and process. The rule of thumb is the less, the better.

Data loading patterns are an essential part of your application, as they determine which parts of your application are directly usable by visitors. Don’t be the site that slows to a crawl because it loads a 5 MB image on its homepage. To understand the issue better, you need to know about the resource loading waterfall.

Loading Spinner Hell And The Resource Loading Waterfall

The resource loading waterfall is a cascade of files downloaded from the network server to the client to load your website from start to finish. It essentially describes the lifetime of each file you download to load your page from the network.

You can see this by opening your browser and looking in the Network tab.

What do you see there? There are two essential components that you should see:

  1. The chart shows the timeline for each file requested and loaded. You can see which files go first and follow each consecutive request until a particular file takes a long time to load. You can inspect it and see whether or not you can optimize it.
  2. At the bottom of the page, you can check how many kB of resources your client consumes. It is important to note how much data the client needs to download. On your first try, you can use it as a benchmark for optimizations later.

No one likes a white blank screen, especially your users. Lagging resource loading waterfalls need a basic placeholder before you can start building the layout on the client side. Usually, you would use either a spinner or a skeleton loader. As the data loads one by one, the page will show a loader until all the components are ready.

While adding loaders as placeholders is an improvement, having it on too long can cause a “spinner hell.” Essentially, your app is stuck on loading, and while it is better than a blank HTML page, it could get annoying, and visitors would choose to exit your site.

But isn’t waiting for the data the point?

Well, yes, but you can load it faster.

Assuming you want to load a social media layout, you might add a loading spinner or a skeleton loader to ensure that you don’t load an incomplete site. The skeleton loader will usually wait for:

  • The data from the backend API;
  • The layout to be built according to that data.

You make an asynchronous call to an API, after which you get the URL for the asset on the CDN. Only then can you start building the layout on the client side. That’s a lot of work to show your face, name, status, and Instagram posts on the first try.

The Five Data-Loading Patterns You Need To Know

Developing software is becoming easier as frameworks like React, Vue, or Angular become the go-to solution for even the simplest applications. But reaching for these bulky frameworks, filled with magical functions you don’t even use, isn’t what you should be going for.

You’re here to optimize. Remember, the less, the better.

But what if you can’t do less? How will you serve blazingly fast code, then? Well, it’s good that you’re about to learn five data-loading patterns that you can use to get your site to load quickly or, as you would say, blazingly fast.

Client-Side Rendering, Server-Side Rendering, And Jamstack

Modern JavaScript frameworks often use client-side rendering (CSR) to render webpages. The browser receives a JavaScript bundle and minimal static HTML in a payload; the JavaScript then renders the DOM and attaches the listeners and event triggers that make the page reactive. When a CSR app renders inside the DOM, the page is blocked until all components render successfully. To populate the page with data, you have to make another API call to the server and retrieve whatever you want to load.

Server-side rendering (SSR) is when an application serves plain HTML to the client. SSR can be divided into two types: SSR with hydration and SSR without hydration. SSR is an older technique used by frameworks such as WordPress, Ruby on Rails, and ASP.NET. Its main goal is to give the user static HTML with the prerequisite data already in place. Unlike CSR, SSR doesn’t need another API call to the backend because the server generates an HTML template and loads any data into it.
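As a minimal sketch of SSR without hydration, assuming an Express server and a hypothetical getStudents() data helper, the server can inject the data into an HTML string itself:

const express = require('express');
const app = express();

app.get('/', async (req, res) => {
  const students = await getStudents(); // hypothetical data source

  // The data is baked into the HTML on the server, so the client
  // doesn't need a follow-up API call to render the page.
  res.send(`
    <html>
      <body>
        <ul>
          ${students.map((s) => `<li>${s.name}</li>`).join('')}
        </ul>
      </body>
    </html>
  `);
});

app.listen(3000);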

Newer solutions like Next.js use hydration, where the static HTML is hydrated on the client side using JavaScript. Think of it like instant coffee: the coffee powder is the HTML, and the water is the JavaScript. What happens when you mix instant coffee powder with water? You get — wait for it — coffee.
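Under the hood, the “water” boils down to a call like the one below. This is a minimal sketch using React’s own client API rather than anything Next.js-specific; the App component and the #root element are assumptions:

import { hydrateRoot } from 'react-dom/client';
import App from './App'; // assumed application component

// The server already sent the static HTML inside #root; hydrateRoot
// attaches React's event listeners to that existing markup instead of
// re-rendering it from scratch.
hydrateRoot(document.getElementById('root'), <App />);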

But what is Jamstack? Jamstack is similar to SSR in that the client receives plain HTML, but while an SSR app generates that HTML on the server per request, a Jamstack app serves pre-generated HTML directly from a CDN. Because of this, Jamstack apps usually load faster, but it’s harder for developers to build dynamic content. Jamstack apps work well when the HTML can be pre-generated, but when you rely on heavy amounts of client-side JavaScript, it becomes increasingly harder to justify Jamstack over client-side rendering (CSR).

Both SSR and Jamstack have their own differences. What they do have in common is they don’t burden the client with rendering the entire page from scratch using JavaScript.

When optimizing your site’s SEO, using SSR or Jamstack is recommended because, compared to CSR, both return HTML files that search bots can easily traverse. Search bots can still crawl and execute JavaScript for CSR apps, but rendering every JavaScript file in a CSR app is time-consuming and can make your site’s SEO less effective.

SSR and Jamstack are very popular, and more projects are moving to SSR frameworks like Next.js and Nuxt.js from their vanilla CSR counterparts, React and Vue, mainly because SSR frameworks provide better flexibility when it comes to SEO. Next.js has a whole section on SEO optimizations in its documentation.

An SSR application will generally have a templating engine that injects variables into the HTML before it is handed to the client. For example, in Next.js, you can render a student list by writing:

// Assumes a shared Layout component, e.g. from your components folder.
import Layout from '../components/layout';

export default function Home({ studentList }) {
  return (
    <Layout home>
        <ul>
          {studentList.map(({ id, name, age }) => (
            <li key={id}>
              {name}
              <br />
              {age}
            </li>
          ))}
        </ul>
    </Layout>
  );
}
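The studentList prop has to come from somewhere. In Next.js, a sketch of the accompanying data-fetching function might look like the following; the API URL is a hypothetical placeholder:

// Runs on the server for every request; the returned props are injected
// into the Home component above.
export async function getServerSideProps() {
  const res = await fetch('https://example.com/api/students'); // hypothetical endpoint
  const studentList = await res.json();

  return { props: { studentList } };
}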

Jamstack is popular with documentation sites, which usually compile code to HTML files and host them on a CDN. Jamstack content is usually written in Markdown before being compiled to HTML, for example:

---
author: Agustinus Theodorus
title: 'Title'
description: Description
---
Hello World
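As a sketch of the compile step a Jamstack generator performs on such a file, assuming the gray-matter and marked npm packages are available:

import matter from 'gray-matter';
import { marked } from 'marked';

const source = `---
author: Agustinus Theodorus
title: 'Title'
---
Hello World`;

// gray-matter splits the frontmatter metadata from the Markdown body.
const { data, content } = matter(source);

// marked compiles the Markdown body into the HTML that ends up on the CDN.
console.log(data.title); // 'Title'
console.log(marked.parse(content)); // '<p>Hello World</p>\n'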

Active Memory Caching

When you need quick access to data you already fetched once, you need caching. Caching stores data a user recently retrieved so it can be served again without another round trip. You can implement caching in two ways: using a super-fast key-value store like Redis to save keys and values for you, or using the simple browser cache to store your data locally.

A cache stores only part of your data and is not meant to be permanent storage; using the cache as permanent storage is an anti-pattern. Caching is highly recommended for production applications, and most applications adopt caching as they gradually mature.

But when should you choose between a Redis cache (server cache) and a browser cache (local cache)? Both can be used simultaneously.
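As a sketch of the local option, the browser’s localStorage can act as a simple cache in front of an API. The key and endpoint below are hypothetical, and real code would also need an invalidation strategy:

// A naive cache-aside sketch: check localStorage first, fall back to
// the network, then store the result for next time.
async function getCachedStudents() {
  const cached = localStorage.getItem('students'); // hypothetical key

  if (cached) {
    return JSON.parse(cached); // cache hit: no network round trip
  }

  const res = await fetch('/api/students'); // hypothetical endpoint
  const students = await res.json();

  localStorage.setItem('students', JSON.stringify(students));
  return students;
}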