Chapter 9. The Standard Library and External Type Definitions

TypeScript’s lead architect Anders Hejlsberg once said that the team envisions “TypeScript to be the Switzerland of JavaScript”, meaning that it doesn’t favor or work towards compatibility with a single framework, but rather tries to cater to all JavaScript frameworks and flavors. In the past, TypeScript worked on a decorator implementation to convince Google not to pursue the JavaScript dialect AtScript for Angular, which was essentially TypeScript plus decorators. The TypeScript decorator implementation went on to act as the template for the respective ECMAScript proposal. TypeScript also understands the JSX syntax extension, allowing frameworks like React or Preact to use TypeScript without limitations.

But even if TypeScript tries to cater to all JavaScript developers and makes a huge effort to integrate new and useful features for a plethora of frameworks, there are still things it can’t or won’t do. Maybe because a certain feature is too niche, or maybe because a decision would have huge implications for too many developers.

This is why TypeScript has been designed to be extensible by default. A lot of TypeScript’s features like namespaces, modules, and interfaces allow for declaration merging, which gives you the possibility to add type definitions to your liking.
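As a small taste of declaration merging before we dive in, two interface declarations with the same name merge into a single shape. The Settings interface here is a made-up example:

```typescript
// Two declarations of the same interface name...
interface Settings {
  theme: string;
}

interface Settings {
  fontSize: number;
}

// ...merge into one shape: { theme: string; fontSize: number }
const settings: Settings = {
  theme: "dark",
  fontSize: 14,
};
```

The same mechanism powers the patches to built-in types we will apply throughout this chapter.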

In this chapter, we look at how TypeScript deals with standard JavaScript functionality like modules, arrays, and objects. We will see some of their limitations, analyze the reasoning behind them, and provide reasonable workarounds. You will see that TypeScript has been designed to be very flexible for various flavors of JavaScript, starting with sensible defaults and giving you the opportunity to extend them when you see fit.

9.1 Iterating over Objects with Object.keys

Problem

When you try to access object properties by iterating over the object’s keys, TypeScript throws red squiggly lines at you, telling you that an expression of type string can’t be used to index your type.

Solution

Use a for-in loop instead of Object.keys and lock your type using generic type parameters.

Discussion

There is rarely a head-scratcher in TypeScript as prominent as trying to access an object property by iterating over its keys. This pattern is so common in JavaScript, yet TypeScript seems to keep you from using it at all costs. We are talking about this simple line that iterates over an object’s properties:

Object.keys(person).map(k => person[k])

It leads to TypeScript throwing red squigglies at you and developers flipping tables: Element implicitly has an any type because expression of type string can’t be used to index type Person. This is one of the situations where experienced JavaScript developers have the feeling that TypeScript is working against them. But as with all decisions in TypeScript, there is a good reason why TypeScript behaves like this.

Let’s find out why. Take a look at this function:

type Person = {
  name: string;
  age: number;
};

function printPerson(p: Person) {
  Object.keys(p).forEach((k) => {
    console.log(k, p[k]);
//                ^
// Element implicitly has an 'any' type because expression
// of type 'string' can't be used to index type 'Person'.
  });
}

All we want is to print a Person’s fields by accessing them through its keys. TypeScript won’t allow this. Object.keys(p) returns a string[], which is too wide to index into the well-defined object shape Person.

But why is that so? Isn’t it obvious that we only access keys that are available? That’s the whole point of using Object.keys! It is, but we are also able to pass objects that are sub-types of Person, which can have more properties than defined in Person.

const me = {
  name: "Stefan",
  age: 40,
  website: "https://fettblog.eu",
};

printPerson(me); // All good!

printPerson should still work correctly. It prints more properties, but it doesn’t break. It’s still the keys of p, so every property should be accessible. But what if you don’t access only p?

Let’s assume Object.keys gives you (keyof Person)[]. You can easily write something like this:

function printPerson(p: Person) {
  const you: Person = {
    name: "Reader",
    age: NaN,
  };

  Object.keys(p).forEach((k) => {
    console.log(k, you[k]);
  });
}

const me = {
  name: "Stefan",
  age: 40,
  website: "https://fettblog.eu",
};

printPerson(me);

If Object.keys(p) returns an array of type (keyof Person)[], you will be able to index into other objects of type Person, too, and the keys might not match up. In our example, we just print undefined. But what if you try to do something with those values? This will break at runtime.
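To see the danger at runtime, here is a sketch of what that hypothetical typing would permit. We simulate the unchecked access with an any cast, which is exactly the access a (keyof Person)[] return type would allow without any cast:

```typescript
type Person = {
  name: string;
  age: number;
};

const me = { name: "Stefan", age: 40, website: "https://fettblog.eu" };
const you: Person = { name: "Reader", age: NaN };

// The keys of me include "website", which you does not have
const values = Object.keys(me).map((k) => (you as any)[k]);
// values is ["Reader", NaN, undefined]

// Doing anything with the undefined value breaks at runtime
let threw = false;
try {
  (you as any)["website"].toUpperCase();
} catch {
  threw = true; // TypeError: Cannot read properties of undefined
}
```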

TypeScript prevents you from running into scenarios like this. While we might think Object.keys(p) gives us (keyof Person)[], in reality the object can hold so much more.

One thing that we could use to mitigate this problem is to use type guards.

function isKey<T>(x: T, k: PropertyKey): k is keyof T {
  return k in x;
}

function printPerson(p: Person) {
  Object.keys(p).forEach((k) => {
    if (isKey(p, k)) console.log(k, p[k]); // All fine!
  });
}

But this adds an extra step that, frankly, shouldn’t be necessary.

There’s another way to iterate over objects, using for-in loops.

function printPerson(p: Person) {
  for (let k in p) {
    console.log(k, p[k]);
//                 ^
// Element implicitly has an 'any' type because expression
// of type 'string' can't be used to index type 'Person'.
  }
}

TypeScript will throw the same error for the same reason because you still can do things like this:

function printPerson(p: Person) {
  const you: Person = {
    name: "Reader",
    age: NaN,
  };

  for (let k in p) {
    console.log(k, you[k]);
  }
}

const me = {
  name: "Stefan",
  age: 40,
  website: "https://fettblog.eu",
};

printPerson(me);

And it will break at runtime. However, writing it like this gives you a little edge over the Object.keys version. TypeScript can be much more exact in this scenario if you add a generic type parameter:

function printPerson<T extends Person>(p: T) {
  for (let k in p) {
    console.log(k, p[k]); // This works
  }
}

Instead of requiring p to be Person (and thus be compatible with all sub-types of Person), we add a new generic type parameter T that is a sub-type of Person. This means that all types that have been compatible with this function signature are still compatible, but the moment we use p, we are dealing with an explicit sub-type, not the broader super-type Person.

We substitute T for something that is compatible with Person, but where TypeScript knows that it’s concrete enough to prevent you from errors.

The code above works. k is of type keyof T. That’s why we can access p, which is of type T. And this technique still prevents us from accessing types that lack specific properties.

function printPerson<T extends Person>(p: T) {
  const you: Person = {
    name: "Reader",
    age: NaN,
  };
  for (let k in p) {
    console.log(k, you[k]);
//                 ^
//  Type 'Extract<keyof T, string>' cannot be used to index type 'Person'
  }
}

We can’t access a Person with keyof T; they might be different. Beautiful! But since T is a sub-type of Person, we can still assign properties if we know the exact property names.

p.age = you.age

And that’s exactly what we want.

TypeScript being very conservative about its types here might seem odd at first, but it helps you in scenarios you wouldn’t think of otherwise. This is presumably the part where JavaScript developers scream at the compiler and feel like they’re “fighting” it, but maybe TypeScript just saved you without you knowing it. For situations where this gets annoying, TypeScript at least gives you ways to work around it.

9.2 Explicitly Highlighting Unsafe Operations with Type Assertions and unknown

Problem

Parsing arbitrary data via JSON operations can go wrong if the data is not correct. TypeScript’s defaults don’t provide any safeguards for these unsafe operations.

Solution

Explicitly highlight unsafe operations by using type assertions instead of type annotations, and make sure they are enforced by patching the original types with unknown.

Discussion

In Recipe 3.9 we spoke about how to effectively use type assertions. Type assertions are a curious thing: they are an explicit instruction to the type system that some value should be treated as a different type, and within a set of guard rails (you can’t, for example, assert that a number is actually a string), TypeScript will treat this particular value as the new type.

With TypeScript’s rich and extensive type system, sometimes type assertions are inevitable. Sometimes they are even desirable, as shown in Recipe 3.9, where we use the fetch API to get JSON data from a backend. One way is to call fetch and assign the result to an annotated type.

type Person = {
  name: string;
  age: number;
};

const ppl: Person[] = await fetch("/api/people").then((res) => res.json());

res.json() results in any.footnote:[Back when the API definition was created, unknown didn’t exist. Also, TypeScript has a strong focus on developer productivity, and with res.json() being a widely used method, a stricter type would’ve broken countless applications.] And everything that is any can be assigned to any other type through a type annotation. There is no guarantee that the result is actually Person[].

The other way is to use a type assertion instead of a type annotation.

const ppl = await fetch("/api/people").then((res) => res.json()) as Person[];

For the type system, this is the same thing, but we can easily scan for situations where there might be problems. If we don’t validate our incoming values against types (with e.g. Zod, see Recipe 12.5), then having a type assertion here is an effective way of highlighting unsafe operations.

Unsafe operations in the type system are situations where we tell the type system that we expect values to be of a certain type, but we don’t have any guarantee from the type system itself that this will actually be true. This happens mostly at the borders of our application, where we load data from someplace, deal with user input, or parse data with built-in methods.

Unsafe operations can be highlighted by using certain keywords that indicate an explicit change in the type system. Type assertions (as), type predicates (is), or assertion signatures (asserts) help us to find those situations. In some cases, TypeScript even forces us to either comply with its view of types or to explicitly change the rules based on our situations. But not always.

When we fetch data from some backend, it is just as easy to annotate as it is to write a type assertion. Things like that can be overlooked if we don’t force ourselves to use the correct technique.

But we can help TypeScript a little bit to help us do the right thing. The problem is the call to res.json(), which comes from the Body interface in lib.dom.d.ts.

interface Body {
  readonly body: ReadableStream<Uint8Array> | null;
  readonly bodyUsed: boolean;
  arrayBuffer(): Promise<ArrayBuffer>;
  blob(): Promise<Blob>;
  formData(): Promise<FormData>;
  json(): Promise<any>;
  text(): Promise<string>;
}

As you see, the json() call returns a Promise<any>, and any is the loosey-goosey type where TypeScript just ignores any type check at all. We would need any’s cautious brother, unknown. Thanks to declaration merging, we can override the Body type definition and define json() to be a bit more restrictive.

interface Body {
  json(): Promise<unknown>;
}

The moment we use a type annotation, TypeScript yells at us that we can’t assign unknown to Person[].

const ppl: Person[] = await fetch("/api/people").then((res) => res.json());
//    ^
// Type 'unknown' is not assignable to type 'Person[]'.ts(2322)

But TypeScript is still happy if we do a type assertion.

const ppl = await fetch("/api/people").then((res) => res.json()) as Person[];

And with that, we can force TypeScript to highlight unsafe operations.footnote:[Credits to Dan Vanderkam’s Effective TypeScript blog for the inspiration on this subject.]
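If you want to go one step further than highlighting, you can close the gap with a hand-written validation step. This sketch (isPerson and parsePeople are our own helpers, not library functions) narrows unknown to Person[] via a type predicate:

```typescript
type Person = {
  name: string;
  age: number;
};

// A hand-written type predicate validating the shape of Person
function isPerson(value: unknown): value is Person {
  return (
    typeof value === "object" &&
    value !== null &&
    "name" in value &&
    typeof (value as Person).name === "string" &&
    "age" in value &&
    typeof (value as Person).age === "number"
  );
}

// Validate an unknown payload before treating it as Person[]
function parsePeople(data: unknown): Person[] {
  if (Array.isArray(data) && data.every(isPerson)) {
    return data;
  }
  throw new Error("Invalid payload");
}

const people = parsePeople(JSON.parse('[{"name":"Stefan","age":40}]'));
```

With res.json() patched to return Promise<unknown>, a call like parsePeople(await res.json()) would hand you validated Person[] values without any assertion at all.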

9.3 Working with defineProperty

Problem

You define properties on the fly using Object.defineProperty, but TypeScript doesn’t pick up changes.

Solution

Create a wrapper function and use assertion signatures to change the object’s type.

Discussion

In JavaScript, you can define object properties on the fly with Object.defineProperty. This is useful if you want your properties to be read-only or similar. Think of a storage object that has a maximum value that shouldn’t be overwritten:

const storage = {
  currentValue: 0
};

Object.defineProperty(storage, 'maxValue', {
  value: 9001,
  writable: false
});

console.log(storage.maxValue); // 9001

storage.maxValue = 2;

console.log(storage.maxValue); // still 9001

defineProperty and property descriptors are very powerful. They allow you to do everything with properties that is usually reserved for built-in objects, so they’re common in larger codebases. TypeScript, however, has a little problem with defineProperty:

const storage = {
  currentValue: 0
};

Object.defineProperty(storage, 'maxValue', {
  value: 9001,
  writable: false
});

console.log(storage.maxValue);
//                  ^
// Property 'maxValue' does not exist on type '{ currentValue: number; }'.

If we don’t explicitly assert the object to a new type, maxValue never gets attached to the type of storage. For simple use cases, however, we can help ourselves with assertion signatures.

Note

While TypeScript might not track object changes made via Object.defineProperty, there is a chance that the team will add typings or special behavior for cases like this in the future. For example, checking whether an object has a certain property using the in keyword had no effect on types for years. This changed in 2022 with TypeScript 4.9.
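To illustrate that in narrowing, here is a sketch that reads the dynamically defined property back without any assertion, assuming TypeScript 4.9 or later (readMaxValue is a made-up helper):

```typescript
function readMaxValue(storage: object): number | undefined {
  // Since TypeScript 4.9, `"maxValue" in storage` narrows storage
  // to an object that has a maxValue property of type unknown
  if ("maxValue" in storage && typeof storage.maxValue === "number") {
    return storage.maxValue; // narrowed to number
  }
  return undefined;
}

const storage = { currentValue: 0 };

Object.defineProperty(storage, "maxValue", {
  value: 9001,
  writable: false,
});
```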

Think of an assertIsNumber function where you can make sure some value is of type number. Otherwise, it throws an error. This is similar to the assert function in Node.js:

import { AssertionError } from "node:assert";

function assertIsNumber(val: any) {
  if (typeof val !== "number") {
    throw new AssertionError({ message: "Not a number!" });
  }
}

function multiply(x, y) {
  assertIsNumber(x);
  assertIsNumber(y);
  // at this point I'm sure x and y are numbers
  // if one assert condition is not true, this position
  // is never reached
  return x * y;
}

To comply with behavior like this, we can add an assertion signature that tells TypeScript that we know more about the type after this function:

function assertIsNumber(val: any): asserts val is number {
  if (typeof val !== "number") {
    throw new AssertionError({ message: "Not a number!" });
  }
}

This works a lot like type predicates (see Recipe 3.5), but without the control flow of a condition-based structure like if or switch.

function multiply(x, y) {
  assertIsNumber(x);
  assertIsNumber(y);
  // Now also TypeScript knows that both x and y are numbers
  return x * y;
}

If you look at it closely, you can see those assertion signatures can change the type of a parameter or variable on the fly. This is just what Object.defineProperty does as well.

The following helper does not aim to be 100% accurate or complete. It might have errors, it might not tackle every edge case of the defineProperty specification. But it will give us the basic functionality. First, we define a new function called defineProperty which we use as a wrapper function for Object.defineProperty.

function defineProperty<
  Obj extends object,
  Key extends PropertyKey,
  PDesc extends PropertyDescriptor>
  (obj: Obj, prop: Key, val: PDesc) {
  Object.defineProperty(obj, prop, val);
}

We work with three generics:

  1. The object we want to modify, of type Obj, which is a subtype of object

  2. Type Key, which is a subtype of PropertyKey (built-in): string | number | symbol.

  3. PDesc, a subtype of PropertyDescriptor (built-in). This allows us to define the property with all its features (writability, enumerability, reconfigurability).

We use generics because TypeScript can narrow them down to very specific unit types. PropertyKey, for example, covers all numbers, strings, and symbols. But if we use Key extends PropertyKey, we can pinpoint prop to be of e.g. type "maxValue". This is helpful if we want to change the original type by adding more properties.
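That narrowing is easy to see in isolation. In this sketch (getKey and getKeyWide are made-up helpers), a generic parameter keeps the literal type of its argument, while a plain PropertyKey parameter widens it:

```typescript
// With a generic, TypeScript infers the unit type of the argument
function getKey<Key extends PropertyKey>(key: Key): Key {
  return key;
}

// k has type "maxValue", not string
const k = getKey("maxValue");

// Without a generic, the literal information is lost
function getKeyWide(key: PropertyKey): PropertyKey {
  return key;
}

// kWide has type string | number | symbol
const kWide = getKeyWide("maxValue");
```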

The Object.defineProperty function either changes the object or throws an error should something go wrong, which is exactly what an assertion function does. Our custom helper defineProperty thus behaves the same way.

Let’s add an assertion signature. Once defineProperty successfully executes, our object has another property. We are creating some helper types for that. The signature first:

function defineProperty<
  Obj extends object,
  Key extends PropertyKey,
  PDesc extends PropertyDescriptor>
   (obj: Obj, prop: Key, val: PDesc):
     asserts obj is Obj & DefineProperty<Key, PDesc> {
  Object.defineProperty(obj, prop, val);
}

obj then is of type Obj (narrowed down through a generic), intersected with our newly defined property.

This is the DefineProperty helper type:

type DefineProperty<
  Prop extends PropertyKey,
  Desc extends PropertyDescriptor> =
    Desc extends { writable: any, set(val: any): any } ? never :
    Desc extends { writable: any, get(): any } ? never :
    Desc extends { writable: false } ? Readonly<InferValue<Prop, Desc>> :
    Desc extends { writable: true } ? InferValue<Prop, Desc> :
    Readonly<InferValue<Prop, Desc>>;

First, we deal with the writable property of a PropertyDescriptor. It’s a set of conditions that encodes some edge cases of how the original property descriptors work:

  1. If we set writable and any property accessor (get, set), we fail. never tells us that an error was thrown.

  2. If we set writable to false, the property is read-only. We defer to the InferValue helper type.

  3. If we set writable to true, the property is not read-only. We defer as well.

  4. The last, default case is the same as writable: false, so Readonly<InferValue<Prop, Desc>>. (Readonly<T> is built-in)

This is the InferValue helper type, which deals with the value property that has been set:

type InferValue<Prop extends PropertyKey, Desc> =
  Desc extends { get(): any, value: any } ? never :
  Desc extends { value: infer T } ? Record<Prop, T> :
  Desc extends { get(): infer T } ? Record<Prop, T> : never;

Again a set of conditions:

  1. If we have both a getter and a value set, Object.defineProperty throws an error, so never.

  2. If we have set a value, let’s infer the type of this value and create an object with our defined property key, and the value type.

  3. Or we infer the type from the return type of a getter.

  4. Anything else: we bail out. TypeScript won’t let us work with the object, as it’s becoming never.

Lots of helper types, but roughly 20 lines of code to get it right:

type InferValue<Prop extends PropertyKey, Desc> =
  Desc extends { get(): any, value: any } ? never :
  Desc extends { value: infer T } ? Record<Prop, T> :
  Desc extends { get(): infer T } ? Record<Prop, T> : never;

type DefineProperty<
  Prop extends PropertyKey,
  Desc extends PropertyDescriptor> =
    Desc extends { writable: any, set(val: any): any } ? never :
    Desc extends { writable: any, get(): any } ? never :
    Desc extends { writable: false } ? Readonly<InferValue<Prop, Desc>> :
    Desc extends { writable: true } ? InferValue<Prop, Desc> :
    Readonly<InferValue<Prop, Desc>>;

function defineProperty<
  Obj extends object,
  Key extends PropertyKey,
  PDesc extends PropertyDescriptor>
  (obj: Obj, prop: Key, val: PDesc):
    asserts obj is Obj & DefineProperty<Key, PDesc> {
  Object.defineProperty(obj, prop, val)
}

Let’s see what TypeScript does with our changes:

const storage = {
  currentValue: 0
};

defineProperty(storage, 'maxValue', {
  writable: false, value: 9001
});

storage.maxValue; // it's a number
storage.maxValue = 2; // Error! It's read-only

const storageName = 'My Storage';
defineProperty(storage, 'name', {
  get() {
    return storageName
  }
});

storage.name; // it's a string!

// it's not possible to assign a value and a getter
defineProperty(storage, 'broken', {
  get() {
    return storageName
  },
  value: 4000
});

// storage is never because we used an invalid
// property descriptor
storage;

While this might not cover everything, it already does a lot for simple property definitions.

9.4 Expanding Types for Array.prototype.includes

Problem

TypeScript won’t be able to look for an element of a broad type like string or number within a very narrow tuple or array.

Solution

Create generic helper functions with type predicates, where you change the relationship between type parameters.

Discussion

We create an array called actions that contains a set of actions in string format that we want to execute. The resulting type of this actions array is string[].

The execute function takes any string as an argument. We check if this is a valid action, and if so, do something!

// actions: string[]
const actions = ["CREATE", "READ", "UPDATE", "DELETE"];

function execute(action: string) {
  if (actions.includes(action)) {
    // do something with action
  }
}

It gets a little trickier if we want to narrow down the string[] to something more concrete, a subset of all possible strings. By adding const-context via as const, we can narrow down actions to be of type readonly ["CREATE", "READ", "UPDATE", "DELETE"].

This is handy if we want to do exhaustiveness checking to make sure we have cases for all available actions. However, actions.includes does not agree with us:

// Adding const context
// actions: readonly ["CREATE", "READ", "UPDATE", "DELETE"]
const actions = ["CREATE", "READ", "UPDATE", "DELETE"] as const;

function execute(action: string) {
  if (actions.includes(action)) {
//                     ^
// Argument of type 'string' is not assignable to parameter of type
// '"CREATE" | "READ" | "UPDATE" | "DELETE"'.(2345)
  }
}

Why is that? Let’s look at the typings of Array<T> and ReadonlyArray<T> (we work with the latter one due to const-context).

interface Array<T> {
  /**
   * Determines whether an array includes a certain element,
   * returning true or false as appropriate.
   * @param searchElement The element to search for.
   * @param fromIndex The position in this array at which
   *   to begin searching for searchElement.
   */
  includes(searchElement: T, fromIndex?: number): boolean;
}

interface ReadonlyArray<T> {
  /**
   * Determines whether an array includes a certain element,
   * returning true or false as appropriate.
   * @param searchElement The element to search for.
   * @param fromIndex The position in this array at which
   *   to begin searching for searchElement.
   */
  includes(searchElement: T, fromIndex?: number): boolean;
}

The element we want to search for (searchElement) needs to be of the same type as the array itself! So if we have Array<string> (or string[] or ReadonlyArray<string>), we can only search for strings. In our case, this would mean that action needs to be of type "CREATE" | "READ" | "UPDATE" | "DELETE".

Suddenly, our program doesn’t make a lot of sense anymore. Why search for something if the type already tells us that it can only be one of four strings? If we change the type of action to "CREATE" | "READ" | "UPDATE" | "DELETE", actions.includes becomes obsolete. If we don’t change it, TypeScript throws an error at us, and rightfully so!
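To make the exhaustiveness-checking motivation concrete, here is a sketch of how the narrowed union pays off: a switch over all four actions, with a default branch assigning to never so the compiler complains the moment a new action is added but not handled. The handle function is a made-up example:

```typescript
const actions = ["CREATE", "READ", "UPDATE", "DELETE"] as const;
type Action = (typeof actions)[number]; // "CREATE" | "READ" | "UPDATE" | "DELETE"

function handle(action: Action): string {
  switch (action) {
    case "CREATE":
      return "creating";
    case "READ":
      return "reading";
    case "UPDATE":
      return "updating";
    case "DELETE":
      return "deleting";
    default:
      // If the tuple gains a new action, this assignment errors
      const _exhaustive: never = action;
      return _exhaustive;
  }
}
```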

One of the problems is that TypeScript lacks the possibility to check for contra-variant types with e.g. upper-bound generics. While we can say that a type should be a subset of a type T with constructs like extends, we can’t check whether a type is a superset of T. At least not yet!

So what can we do?

Option 1: Re-Declare ReadonlyArray

One of the options that come to mind is changing how includes in ReadonlyArray should behave. Thanks to declaration merging, we can add our own definition for ReadonlyArray that is a bit looser in its arguments, and more specific in its result. Like this:

interface ReadonlyArray<T> {
  includes(searchElement: any, fromIndex?: number): searchElement is T;
}

This allows for a broader set of searchElement values to be passed (literally any!), and if the condition is true, we tell TypeScript through a type predicate that searchElement is T (the subset we are looking for).

Turns out, this works pretty well!

const actions = ["CREATE", "READ", "UPDATE", "DELETE"] as const;

function execute(action: string) {
  if(actions.includes(action)) {
    // action: "CREATE" | "READ" | "UPDATE" | "DELETE"
  }
}

There’s a problem, though; if there weren’t, the TypeScript team would’ve changed the behavior already. The solution works, but it makes assumptions about what’s correct and what needs to be checked. If you change action to number, TypeScript would usually throw an error telling you that you can’t search for that kind of type: actions only consists of strings, so why even look for a number? This is an error you want to catch!

// type number has no relation to actions at all
function execute(action: number) {
  if(actions.includes(action)) {
    // do something
  }
}

With our change to ReadonlyArray, we lose this check because searchElement is any. While the functionality of actions.includes still works as intended, we might not see the real problem once we change function signatures along the way.

Also, and more importantly, we change the behavior of built-in types. This might change your type-checks somewhere else, and might cause problems in the long run!

Tip

If you patch types by changing the behavior of the standard library, be sure to do this module-scoped, not globally.

There is another way.

Option 2: A Helper with Type Assertions

As originally stated, one of the problems is that TypeScript lacks the possibility to check if a value belongs to a superset of a generic parameter. With a helper function, we can turn this relationship around!

function includes<T extends U, U>(coll: ReadonlyArray<T>, el: U): el is T {
  return coll.includes(el as T);
}

This includes function takes the ReadonlyArray<T> as an argument and searches for an element of type U. We require through our generic bounds that T extends U, which means that U is a superset of T (or T is a subset of U). If the method returns true, we can say for sure that el is of the narrower type T.

The only thing we need to make the implementation work is a little type assertion the moment we pass el to Array.prototype.includes. The original problem is still there! The type assertion el as T is OK, though, as we already check for possible problems in the function signature.

This means that the moment we change e.g. action to number, we get the right errors throughout our code.

function execute(action: number) {
  if(includes(actions, action)) {
//            ^
// Argument of type 'readonly ["CREATE", "READ", "UPDATE", "DELETE"]'
// is not assignable to parameter of type 'readonly number[]'.
  }
}

And this is the behavior we want. A nice touch is that TypeScript asks us to change the array, not the element we are looking for. This is due to the relationship between the generic type parameters.
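For completeness, here is the success path again, repeated in full so the sketch is self-contained:

```typescript
const actions = ["CREATE", "READ", "UPDATE", "DELETE"] as const;

function includes<T extends U, U>(coll: ReadonlyArray<T>, el: U): el is T {
  return coll.includes(el as T);
}

function execute(action: string): boolean {
  if (includes(actions, action)) {
    // action: "CREATE" | "READ" | "UPDATE" | "DELETE"
    return true;
  }
  return false;
}
```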

Tip

The same solutions also work if you run into similar troubles with Array.prototype.indexOf!
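A possible indexOf counterpart could look like the following sketch. The helper is our own, not part of the standard library; it uses the same generic relationship, but returns a number instead of a type predicate:

```typescript
function indexOf<T extends U, U>(coll: ReadonlyArray<T>, el: U): number {
  return coll.indexOf(el as T);
}

const actions = ["CREATE", "READ", "UPDATE", "DELETE"] as const;

const query: string = "UPDATE";
const pos = indexOf(actions, query); // 2

const missing = indexOf(actions, "NOPE" as string); // -1
```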

TypeScript aims to get all standard JavaScript functionality correct, but sometimes you have to make trade-offs. This case calls for a trade-off: do you allow for an argument list that’s looser than you would expect, or do you throw errors for types where you already should know more?

Type assertions, declaration merging, and other tools help us get around such limitations in situations where the type system can’t help us, at least until it improves enough to let us move even further in the type space!

9.5 Filtering Nullish Values

Problem

You want to use the Boolean constructor to filter nullish values from an array, but TypeScript still yields the same type, including null and undefined.

Solution

Overload the filter method from Array using declaration merging.

Discussion

Here’s a quick tip: sometimes you have collections that could include nullish values (undefined or null).

// const array: (number | null | undefined)[]
const array = [1, 2, 3, undefined, 4, null];

To continue working, you want to remove those nullish values from your collection. This is typically done using the filter method of Array, maybe by checking the truthiness of a value. null and undefined are falsy, so they get filtered out.

const filtered = array.filter((val) => !!val);

A very convenient way of checking the truthiness of a value is passing it to the Boolean constructor. This is short, on point, and very elegant to read.

// const array: (number | null | undefined)[]
const filtered = array.filter(Boolean);

But sadly, it doesn’t change our type. We still have null and undefined as possible types for the filtered array.

By opening up the Array interface and adding another declaration for filter, we can add this special case as an overload.

interface Array<T> {
  filter(predicate: BooleanConstructor): NonNullable<T>[]
}

interface ReadonlyArray<T> {
  filter(predicate: BooleanConstructor): NonNullable<T>[]
}

And with that, we get rid of nullish types and have more clarity on the type of our array’s contents.

// const array: number[]
const filtered = array.filter(Boolean);

Neat!

What’s the caveat? Literal tuples and arrays. BooleanConstructor does not only filter nullish values but all falsy values. To get the right elements, we not only have to return NonNullable<T>, but also introduce a type that checks for truthy values.

type Truthy<T> = T extends "" | false | 0 | 0n ? never : T;

interface Array<T> {
  filter(predicate: BooleanConstructor): Truthy<NonNullable<T>>[];
}

interface ReadonlyArray<T> {
  filter(predicate: BooleanConstructor): Truthy<NonNullable<T>>[];
}

// as const creates a readonly tuple
const array = [0, 1, 2, 3, ``, -0, 0n, false, undefined, null] as const;

// const filtered: (1 | 2 | 3)[]
const filtered = array.filter(Boolean);

const zeroOrOne: Array<0 | 1> = [0, 1, 0, 1];

// const onlyOnes: 1[]
const onlyOnes = zeroOrOne.filter(Boolean);

Note

The example includes 0n, which is 0 as a BigInt. BigInt is only available from ECMAScript 2020 onwards.

This gives us the right idea of which types to expect, but since ReadonlyArray<T> takes the tuple’s element types and not the tuple type itself, we lose information on the order of types within the tuple.

As with all extensions to existing TypeScript types, be aware that this might cause side effects. Scope them locally, and use them carefully.
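One way to side-step global patches entirely is a standalone helper with the same Truthy logic. This sketch (filterTruthy is a made-up name) assumes an ES2020 target for the 0n literal:

```typescript
type Truthy<T> = T extends "" | false | 0 | 0n ? never : T;

function filterTruthy<T>(arr: readonly T[]): Truthy<NonNullable<T>>[] {
  // The deferred conditional type can't be checked structurally here,
  // hence the assertion through unknown
  return arr.filter(Boolean) as unknown as Truthy<NonNullable<T>>[];
}

const values = [1, 2, 3, undefined, 4, null];

// filtered: number[]
const filtered = filterTruthy(values);
```

Because the helper lives in your own module, no other code's view of Array or ReadonlyArray changes.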

9.6 Extending Modules

Problem

You work with libraries that provide their own view of HTML elements, like Preact or React. But sometimes their type definitions miss the latest features. You want to patch them.

Solution

Use declaration merging on the module and interface level.

Discussion

JSX is a syntax extension to JavaScript, introducing an XML-like way of describing and nesting components. Basically, everything that can be described as a tree of elements can be expressed in JSX. JSX has been introduced by the creators of the popular React framework to make it possible to write and nest components in an HTML-like way within JavaScript, where it is actually transpiled to a series of function calls.

<button onClick={() => alert('YES')}>Click me</button>

// Transpiles to:

React.createElement("button", { onClick: () => alert('YES') }, 'Click me');

JSX has since been adopted by many frameworks, even ones with little or no connection to React. We will see a lot more of JSX in Chapter 10.

React typings for TypeScript come with lots of interfaces for all possible HTML elements out there. But sometimes, your browsers, your frameworks, or your code are a little bit ahead of what’s possible.

Let’s say you want to use the latest image features in Chrome and load your images lazily. This is a progressive enhancement, so only browsers that understand what’s going on know how to interpret it. Other browsers are robust enough not to care.

<img src="/awesome.jpg" loading="lazy" alt="What an awesome image" />

Your TypeScript JSX code? Errors.

function Image({ src, alt }) {
  // Property 'loading' does not exist.
  return <img src={src} alt={alt} loading="lazy" />;
}

To prevent this, we can extend the available interfaces with our own properties. This feature of TypeScript is called declaration merging.

Create a @types folder and put a jsx.d.ts file in it. Change your TypeScript config so your compiler options allow for extra types:

{
  "compilerOptions": {
    ...
    /* Additional folders with type declaration files to include in compilation. */
    "typeRoots": ["@types", "./node_modules/@types"],
  },
  ...
}

We re-create the exact module and interface structure:

  1. The module is called 'react'.

  2. The interface is ImgHTMLAttributes<T>, which extends HTMLAttributes<T>.

We know that from the original typings. Here, we add the properties we want to have.

import "react";

declare module "react" {
  interface ImgHTMLAttributes<T> extends HTMLAttributes<T> {
    loading?: "lazy" | "eager" | "auto";
  }
}

And while we are at it, let’s make sure we don’t forget alt texts!

import "react";

declare module "react" {
  interface ImgHTMLAttributes<T> extends HTMLAttributes<T> {
    loading?: "lazy" | "eager" | "auto";
    alt: string;
  }
}

Way better! TypeScript will take the original definition and merge your declarations. Your editor can autocomplete all available options, and TypeScript will error when you forget an alt text.

When working with Preact, things are a bit more complicated. The original HTML typings are very generous and not as specific as React’s typings. That’s why we have to be a bit more explicit when defining images:

declare namespace JSX {
  interface IntrinsicElements {
    img: HTMLAttributes & {
      alt: string;
      src: string;
      loading?: "lazy" | "eager" | "auto";
    };
  }
}

This makes sure that both alt and src are available, and adds a new attribute called loading. The technique is the same, though: declaration merging, which works at the level of namespaces, interfaces, and modules.

9.7 Augmenting Globals

Problem

You use a browser feature like ResizeObserver and see that it isn’t available in your current TypeScript configuration.

Solution

Augment the global namespace with custom type definitions.

Discussion

TypeScript stores types to all DOM APIs in lib.dom.d.ts. This file is auto-generated from Web IDL files. Web IDL stands for Web Interface Definition Language and is a format the W3C and WHATWG use to define interfaces to web APIs. It came out around 2012 and has been a standard since 2016.

When you read standards at W3C — like the one on Resize Observer — you can see parts of a definition or the full definition somewhere within the specification, like this one:

enum ResizeObserverBoxOptions {
  "border-box", "content-box", "device-pixel-content-box"
};

dictionary ResizeObserverOptions {
  ResizeObserverBoxOptions box = "content-box";
};

[Exposed=(Window)]
interface ResizeObserver {
  constructor(ResizeObserverCallback callback);
  void observe(Element target, optional ResizeObserverOptions options);
  void unobserve(Element target);
  void disconnect();
};

callback ResizeObserverCallback = void (
  sequence<ResizeObserverEntry> entries,
  ResizeObserver observer
);

[Exposed=Window]
interface ResizeObserverEntry {
  readonly attribute Element target;
  readonly attribute DOMRectReadOnly contentRect;
  readonly attribute FrozenArray<ResizeObserverSize> borderBoxSize;
  readonly attribute FrozenArray<ResizeObserverSize> contentBoxSize;
  readonly attribute FrozenArray<ResizeObserverSize> devicePixelContentBoxSize;
};

interface ResizeObserverSize {
  readonly attribute unrestricted double inlineSize;
  readonly attribute unrestricted double blockSize;
};

interface ResizeObservation {
  constructor(Element target);
  readonly attribute Element target;
  readonly attribute ResizeObserverBoxOptions observedBox;
  readonly attribute FrozenArray<ResizeObserverSize> lastReportedSizes;
};

Browsers use this as a guideline to implement respective APIs. TypeScript uses these IDL files to generate lib.dom.d.ts. The TS JS Lib generator project scrapes web standards and extracts IDL information. Then an IDL to TypeScript generator parses the IDL file and generates the correct typings.

Pages to scrape are maintained manually. The moment a specification is mature enough and supported by all major browsers, people add a new resource and see their change released with an upcoming TypeScript version. So it’s just a matter of time until we get ResizeObserver in lib.dom.d.ts.

If we can’t wait, we can add the typings ourselves. And only for the project we currently work with.

Let’s assume we generated the types for ResizeObserver. We would store the output in a file called resize-observer.d.ts. Here are the contents:

type ResizeObserverBoxOptions =
  "border-box" |
  "content-box" |
  "device-pixel-content-box";

interface ResizeObserverOptions {
  box?: ResizeObserverBoxOptions;
}

interface ResizeObservation {
  readonly lastReportedSizes: ReadonlyArray<ResizeObserverSize>;
  readonly observedBox: ResizeObserverBoxOptions;
  readonly target: Element;
}

declare var ResizeObservation: {
  prototype: ResizeObservation;
  new(target: Element): ResizeObservation;
};

interface ResizeObserver {
  disconnect(): void;
  observe(target: Element, options?: ResizeObserverOptions): void;
  unobserve(target: Element): void;
}

declare var ResizeObserver: {
  prototype: ResizeObserver;
  new(callback: ResizeObserverCallback): ResizeObserver;
};

interface ResizeObserverEntry {
  readonly borderBoxSize: ReadonlyArray<ResizeObserverSize>;
  readonly contentBoxSize: ReadonlyArray<ResizeObserverSize>;
  readonly contentRect: DOMRectReadOnly;
  readonly devicePixelContentBoxSize: ReadonlyArray<ResizeObserverSize>;
  readonly target: Element;
}

declare var ResizeObserverEntry: {
  prototype: ResizeObserverEntry;
  new(): ResizeObserverEntry;
};

interface ResizeObserverSize {
  readonly blockSize: number;
  readonly inlineSize: number;
}

declare var ResizeObserverSize: {
  prototype: ResizeObserverSize;
  new(): ResizeObserverSize;
};

interface ResizeObserverCallback {
  (entries: ResizeObserverEntry[], observer: ResizeObserver): void;
}

We declare a ton of interfaces, and some variables that implement our interfaces, like declare var ResizeObserver, which is the object that defines the prototype and constructor function:

declare var ResizeObserver: {
  prototype: ResizeObserver;
  new(callback: ResizeObserverCallback): ResizeObserver;
};

This already helps a lot. We can use the — arguably — long type declarations and put them directly in the file where we need them. ResizeObserver is found! We want to have it available everywhere, though.

Thanks to TypeScript’s declaration merging feature, we can extend namespaces and interfaces as we need. This time, we’re extending the global namespace.

The global namespace contains all objects and interfaces that are, well, globally available. Like the window object (and Window interface), as well as everything else which should be part of our JavaScript execution context. We augment the global namespace and add the ResizeObserver object to it:

declare global { // opening up the namespace
  var ResizeObserver: { // merging ResizeObserver with it
    prototype: ResizeObserver;
    new(callback: ResizeObserverCallback): ResizeObserver;
  }
}

Let’s put resize-observer.d.ts in a folder called @types. Don’t forget to add the folder both to the sources TypeScript should parse and to the list of type declaration folders in tsconfig.json:

{
  "compilerOptions": {
    //...
    "typeRoots": ["@types", "./node_modules/@types"],
    //...
  },
  "include": ["src", "@types"]
}

Since ResizeObserver might not be available in your target browser yet, make the ResizeObserver object possibly undefined. This forces you to check whether the object is available before using it:

declare global {
  var ResizeObserver: {
    prototype: ResizeObserver;
    new(callback: ResizeObserverCallback): ResizeObserver;
  } | undefined
}

In your application:

if (typeof ResizeObserver !== 'undefined') {
  const x = new ResizeObserver((entries) => {});
}

This makes working with ResizeObserver as safe as possible!

It might be that TypeScript doesn’t pick up your ambient declaration files and the global augmentation. If this happens, make sure that:

  1. You parse the @types folder via the include property in tsconfig.json.

  2. Your ambient type declaration files are recognized as such by adding them to types or typeRoots in tsconfig.json’s compilerOptions.

  3. You add export {} at the end of your ambient declaration file so TypeScript recognizes the file as a module.
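Putting the checklist together, the skeleton of such an ambient declaration file could look like this (shortened sketch; the interfaces from the listing above live in the same file):

```typescript
// @types/resize-observer.d.ts (shortened sketch)
// ... interfaces like ResizeObserver and ResizeObserverCallback from above ...

declare global {
  var ResizeObserver: {
    prototype: ResizeObserver;
    new (callback: ResizeObserverCallback): ResizeObserver;
  } | undefined;
}

// Turns the file into a module, so TypeScript reliably applies
// the global augmentation.
export {};
```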

9.8 Adding Non-JS Modules to the Module Graph

Problem

You use a bundler like Webpack to load files like CSS or images from JavaScript, but TypeScript does not recognize those files.

Solution

Globally declare modules based on file-name extensions.

Discussion

There is a movement in web development to make JavaScript the default entry point of everything, and let it handle all relevant assets via import statements. What you need for this is a build tool, a bundler, that analyzes your code and creates the right artifacts. A popular tool for this is Webpack, a JavaScript bundler that allows you to bundle everything: CSS, Markdown, SVGs, JPEGs, you name it.

// like this
import "./Button.css";

// or this
import styles from "./Button.css";

Webpack uses a concept called loaders, which look at file extensions and activate certain bundling behavior. Importing CSS files in JavaScript is not native; it’s part of Webpack (or whatever bundler you are using). However, we can teach TypeScript to understand files like this.

Note

There is a proposal in the ECMAScript standards committee to allow imports of files other than JavaScript and to assert certain built-in formats for them. This will have an effect on TypeScript eventually. You can read all about it here.

TypeScript supports ambient module declarations, even for modules that are not “physically” there, but in the environment or reachable via tooling. Examples are Node’s built-in modules, like url, http, or path, as described in TypeScript’s documentation:

declare module "path" {
  export function normalize(p: string): string;
  export function join(...paths: any[]): string;
  export var sep: string;
}

This is great for modules where we know the exact name. We can also use the same technique for wildcard patterns. Let’s declare a generic ambient module for all our CSS files:

declare module '*.css' {
  // to be done.
}

The pattern is ready. It matches all CSS files we want to import. What we expect is a list of class names that we can add to our components. Since we don’t know which classes are defined in the CSS files, let’s go with an object that accepts every string key and returns a string.

declare module '*.css' {
  interface IClassNames {
    [className: string]: string
  }
  const classNames: IClassNames;
  export default classNames;
}

That’s all we need to make our files compile again. The only downside is that we can’t use the exact class names to get auto-completion and similar benefits. A way to solve this is to generate type files automatically. There are packages on NPM that deal with that problem. Feel free to choose one of your liking.

It’s a bit easier if we want to import something like MDX into our modules. MDX lets us write Markdown which parses to regular React (or JSX) components (more on React in Chapter 10).

We expect a functional component (that we can pass props to) that returns a JSX element:

declare module '*.mdx' {
  let MDXComponent: (props: any) => JSX.Element;
  export default MDXComponent;
}

And voilà: We can load .mdx files in JavaScript and use them as components.

import About from '../articles/about.mdx';

function App() {
  return <>
    <About/>
  </>
}

If you don’t know what to expect, make your life easy. All you need to do is declare the module. Don’t provide any types. TypeScript will allow loading, but won’t give you any type-safety.

declare module '*.svg';

To make ambient modules available to your app, it is recommended to create an @types folder somewhere in your project (probably at root level). There you can put any number of .d.ts files with your module definitions. Add a reference to them in tsconfig.json and TypeScript knows what to do.

{
  ...
  "compilerOptions": {
    ...
    "typeRoots": [
      "./node_modules/@types",
      "./@types"
    ],
    ...
  }
}

It is one of TypeScript’s main features to be adaptable to all JavaScript flavors. Some things are built-in, and others need some extra patching from you.