
Let's Bring Back JavaScript's `with()` Statement

JavaScript's "with()" statement is effectively deprecated and strongly discouraged from use. But I'm not so sure that's justified.

It's hard not to appreciate the elegance of Kotlin's scope functions, which let you tap into an object and immediately execute a block of code against it. I often reach for also, run, and let, but with is up there too. Pass it an object, and you can access specific properties with no identifier:

data class Person(
	val firstName: String,
	val lastName: String,
	val wasRight: Boolean
)

val miltonFriedman = Person(
	firstName = "Milton",
	lastName = "Friedman",
	wasRight = true
)

val fullName = with(miltonFriedman) {
	"$firstName $lastName" // <-- slick!
}
    
print(fullName) // "Milton Friedman"

Up until recently, I didn't know JavaScript has something with a similar vibe – its own version of with. It's effectively deprecated, and it won't work at all in strict mode, but it's still something to marvel at, even in light of the reasons it's discouraged. And I'd love to see a world in which we bring (at least some version of) it back.

Let's spend some time reviewing what with() does, its common criticisms, and my own objections to those criticisms.

An Overview of Accessing Properties w/ with()

When you access properties on a JavaScript object, you almost always need to qualify those properties with an identifier so the engine knows where it can find a value. The one exception is global variables. If a variable by that name doesn't exist up the scope chain, it's checked as a property of window or globalThis. Like this:

const name = 'Bob';

const person = {
	name: 'Milton Friedman',
	wasRight: true
}

// Searches the "person" object:
console.log(person.name); // 'Milton Friedman'

// Searches the scope chain, then "window" or "globalThis":
console.log(name); // 'Bob'

The with statement gives you access to properties on an object without a qualifying identifier. You simply reference them as standalone variables.

const person = {
	name: 'Milton Friedman',
	wasRight: true
}

with(person) {
	console.log(name); // 'Milton Friedman'
	console.log(wasRight); // `true`
}

This works because with() wedges person into the beginning of the scope chain, meaning the target object will be searched for a value first, before moving up any further. As an aside, you can still access variables from a broader scope – you just need to rely on an explicit identifier:

window.name = "Bob";

with(person) {
	console.log(window.name); // "Bob"
}

In some ways, it offers an ergonomic benefit similar to destructuring assignment. Instead of needing to repeat an identifier, there's a little less syntactic bloat. But as an added benefit, the code executed inside with() is contained to a distinct block scope.
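
For comparison, here's what the destructuring flavor looks like with the same person object from earlier (a minimal sketch):

```javascript
const person = {
	name: 'Milton Friedman',
	wasRight: true,
};

// Same identifier-free access as with(), but the variables now
// live in the enclosing scope rather than a dedicated block.
const { name, wasRight } = person;

console.log(name, wasRight); // 'Milton Friedman' true
```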

Why is it deprecated?

If you look at the TC39 documentation, the with() statement is marked as "legacy" and discouraged from use, but it doesn't go into a lot of depth as to why. If you look elsewhere, however, a few main reasons come up. (It's very possible I'm missing some other key objections, by the way. If you have them, let me know.)

#1. Poor Readability

Without an explicit identifier, it's possible to write some confusing code that's difficult to read. Look at this function:

function doSomething(name, obj) {
  with (obj) {
    console.log(name);
  }
    
  console.log(name);
}

doSomething("Bob", { name: "Alex" });
// "Alex"
// "Bob"

At first glance, it's not clear what name is referring to – a property on obj or the parameter passed to the function. And the same variable name refers to completely different values throughout the function body. It's confusing and might trip you up. After all, depending on where that variable is used, its scope is resolved very differently.

This is a good critique, but in my opinion, not a lethal one. It's the developer's (poor) choice to write code like this, and seems like something largely solved by education.

#2. Scope Leak / Unintended Property Access

Beyond that, due to its design, you can inadvertently run into problems by accessing properties from a different scope than you intended. Let's say you have a function that processes historical events contained in "country" objects.

const israel = {
  history: ['event1', 'event2'],
};

function processHistory(country) {
  with (country) {
	// do something with `history`...
  }
}

processHistory(israel);

This'll work fine until you pass a country with no history property. In that event, history will fall back to window.history (or some other history variable that exists up the scope chain), causing unexpected issues.

In this simple example, the problem could be a non-issue if history is a required property (TypeScript could help if the object were created via object literal, but it's very easy to bypass if you're composing objects through other means), but I can see other surprises popping up in more complex scenarios. You're modifying the scope chain. At some point, weird things are bound to happen. So, I'm somewhat sympathetic to this point.
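
If you wanted to keep a function like this honest, one mitigation (my own sketch, not part of the original example) is an explicit own-property check up front, so a missing property fails loudly instead of silently resolving further up the scope chain:

```javascript
function processHistory(country) {
	// Fail loudly if the own property is absent. Without this,
	// `history` inside a with() block could silently resolve to
	// window.history (or another variable up the scope chain).
	if (!Object.hasOwn(country, 'history')) {
		throw new TypeError('country is missing a "history" property');
	}

	return country.history.map((event) => `processed: ${event}`);
}

console.log(processHistory({ history: ['event1', 'event2'] }));
// ['processed: event1', 'processed: event2']
```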

Update: Important Historical Context

When I originally wrote this post, I wasn't thinking about how significantly the parameters of this conversation have changed over the past number of years, particularly throughout the transition between ES5 and ES2015. Then, Sean May dropped some really useful context in a comment below. He illustrates how scope management in JavaScript was far more like the wild west before const, let, and modules gave us more fine-grained control over it. It really has me wondering how this all would've turned out if those tools had existed from the beginning. Skip down and read that comment if you haven't!

#3. Performance Challenges

Things get a little more interesting when the criticisms relate to performance. In my view, this is the strongest one I've seen.

When a property is accessed within a with statement, its value is searched for not only among the top-level properties of the given object, but throughout the entire prototype chain. And if it's not found there, the search then moves up from scope to scope. This search order necessarily applies to every property access. Depending on the application, that can make for some slow look-ups and performance footguns.

A quick illustration. We're using with to access properties on me, which is at the bottom of a prototype chain:

const creature = { name: 'creature', planet: 'earth' };
const mammal = Object.create(creature, { name: { value: 'mammal' } });
const human = Object.create(mammal, { name: { value: 'human' } });
const me = Object.create(human, { name: { value: 'alex' } });

function outerFunction() {
	const outer = 'outer';

	function innerFunction() {
		const inner = 'inner';

		with (me) {
			console.log(
                name, 
                planet, 
                inner, 
                outer
            );
            // 'alex' 'earth' 'inner' 'outer'
		}
	}

	innerFunction();
}

outerFunction();

Since name exists directly on the object, there's not much overhead in looking it up. Remember – the target object's top-level properties are the first to be searched. But planet is different. It's not on me, so every single object in the prototype chain is searched for a value until it's found.

And even though inner and outer are distinct variables that don't exist anywhere in the prototype chain, that entire chain is searched before those other variables are read. Pretty wasteful, at least in contrived circumstances like this, designed to illustrate a point.

In order to get the same "clean" variable handling but without these risks, destructuring assignment is often recommended. Using the inheritance example from before:

const creature = { name: 'creature', planet: 'earth' };
const mammal = Object.create(creature, { name: { value: 'mammal' } });
const human = Object.create(mammal, { name: { value: 'human' } });
const me = Object.create(human, { name: { value: 'me' } });

const { name, planet } = me;

console.log(name, planet);
// `me` `earth`

I understand why it's the suggested alternative:

  • There's less ambiguity about where the variables are coming from.
  • The compiler can make better assumptions (and optimizations) about where properties are being accessed, which is good for performance (although, the prototype chain is still searched, so that cost will still exist).
  • You still get to use the variables without an identifier.

But from a readability standpoint, I'm not totally sold on it being a worthy alternative.

Why with() is (Sometimes) Superior to Destructuring Assignment

The appeal of using with() isn't only in the "clean" variables. It's in the control structure. Due to the syntax around it, it's very easy to cognitively "bucket" a particular task inside a with() statement. That code is set aside from the rest, both in lexical scope and purpose, making it easier to reason about.

Imagine you're handling an HTTP request that passes along some information, and you somehow get access to it in a data variable. Your objective is to use particular properties to save a record to the database. Here's how you might use destructured properties:

const { imageUrl, width, height } = data;

await saveToDb({
  imageUrl,
  width,
  height,
});

It's fine, but it takes a line to pluck off those variables. Plus, they're all now block-scoped, and could clash with whatever else is going on in the method. There might be a couple of responses at this point:

"Just move the code into its own method." I don't hate that idea. It might even be good from an OOP design perspective – the parent method would be kept slimmer and more focused. But this suggestion also feels primarily like a solution to a problem introduced by choosing destructuring assignment in the first place – one that could've been eased using the semantics of with(). Depending on whatever else is going on, I might not want to create a distinct method, but would still like that distinct scope.

"Wrap it all in a one-off block." That const is block-scoped, which means it could be contained by creating a new block scope with curly braces:

const imageUrl = "different-image.jpg";

{
	const { imageUrl, width, height } = data;

	await saveToDb({
		imageUrl,
		width,
		height,
	});
}

There are points to be awarded here for cleverness, but there's no way you can convince me it's more readable. It's not a hack, but it kinda feels like one.

Same Story, Using with()

Now, here's the same thing, but this time, it's accomplished via with statement:

with (data) {
  await saveToDb({
    imageUrl,
    width,
    height,
  });
}

If the benefits aren't clear:

  • The control structure paired with with makes it very clear that something specific's gonna happen concerning data and saving something to the database.
  • Thanks to shorthand property names, I don't need to first destructure values from data before passing them to my method. Saves me a line.
  • None of the variables can bleed into other parts of the containing method.

I still think destructuring assignment is a useful feature (I use it a lot), but at least in terms of readability and semantics, it doesn't quite cut it as a drop-in alternative for with().

But what about those performance concerns?

Yeah, let's talk about those. By the nature of how with() is designed to operate, it's definitely not the most strictly optimal way to handle object properties. But I question just how serious of a concern that is in light of what's actually going on in the code, and weighed against the ergonomic and legibility gains.

Consider this example. In most of the with() cases I've seen, the objects people are using aren't terribly complex. They're often just simple key/value pairs. So, I made a pretty large one compared to most of these cases. It's a list of every U.S. state:

const states = {
	alabama: 'AL',
	alaska: 'AK',
	arizona: 'AZ',
	arkansas: 'AR',
	california: 'CA',
    //... the rest of them.
};

I then ran a quick benchmark test on logging out each of those values. One test used with():

with (states) {
    console.log(alabama, alaska, /* ...the rest */);
}

And the other used destructuring assignment:

const { alabama, alaska, /* ...the rest */ } = states;

console.log(alabama, alaska, /* ...the rest */);
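
If you'd like to reproduce something similar, a rough harness along these lines works. (This is my own sketch, not the exact benchmark setup I used; the with() variant is compiled via new Function so it stays legal even if the surrounding file runs in strict mode, where `with` is a syntax error.)

```javascript
const states = { alabama: 'AL', alaska: 'AK', arizona: 'AZ' }; // ...abbreviated

// Functions built with the Function constructor are sloppy-mode by
// default, which keeps the with() statement parseable.
const viaWith = new Function(
	'states',
	'with (states) { return alabama + alaska + arizona; }'
);

const viaDestructuring = (states) => {
	const { alabama, alaska, arizona } = states;
	return alabama + alaska + arizona;
};

function bench(label, fn, iterations = 100_000) {
	const start = performance.now();
	let result;
	for (let i = 0; i < iterations; i++) {
		result = fn(states);
	}
	console.log(`${label}: ${(performance.now() - start).toFixed(2)}ms`);
	return result;
}

bench('with()', viaWith);
bench('destructuring', viaDestructuring);
```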

As expected, with() was slower, by about 23% overall.

But if you look at those numbers a little longer and contextualize them in the real world, the difference is pretty much "meh." After all, you're dealing in tens of thousands of operations in a second.

Don't get me wrong. That could make a very meaningful difference in an environment highly sensitive to execution performance. But those scenarios are likely few, and they probably shouldn't be using JavaScript anyway. They'd be written in PHP, obviously.

On top of that, it's worth calling out that with() is far from the only JavaScript feature that can backfire in performance when used inappropriately. Just one example: the spread operator feels really nice to write, but if it's not used carefully in the context of the rest of our code, things can get gross real fast.
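
A concrete version of that spread footgun (my illustration, not from any particular codebase): building an object inside reduce() by spreading the accumulator on every pass, which copies every existing key on every iteration and quietly turns a linear job quadratic:

```javascript
const entries = Array.from({ length: 1_000 }, (_, i) => [`key_${i}`, i]);

// O(n²): each iteration spreads (copies) the entire accumulator.
const viaSpread = entries.reduce(
	(acc, [key, value]) => ({ ...acc, [key]: value }),
	{}
);

// O(n): build the object in a single pass instead.
const viaFromEntries = Object.fromEntries(entries);

console.log(viaSpread.key_999, viaFromEntries.key_999); // 999 999
```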

Still... can we make it faster?

I'd love to see a world in which a "better" version of with() comes back in glory, with a few tweaks for improved performance.

A very specific change I wouldn't mind seeing: no longer searching the prototype chain. The new & improved with() would only consider an object's own, top-level properties – the ones returned by Object.getOwnPropertyNames(). This change would reduce the amount of time it takes to resolve variables, especially when you're accessing variables that live outside the with() context altogether.

I ran a promising benchmark related to this. I made an object with 100 key/value pairs, whose prototype chain was 100 levels deep. Each level had that same set of key/value pairs. Here's the scrappy code used to make it:

function makeComplicatedObject() {
	const obj = Object.fromEntries(
		Array.from({ length: 100 }).map((_, index) => [
			`key_${index}`,
			`value_${index}`,
		])
	);

	return obj;
}

// result: 100 key/value pairs, prototype chain 100 levels deep
const deeplyNestedObject = Array.from({ length: 100 }).reduce(
	(prevObj, _current, index) => {
		const newObject = makeComplicatedObject();
		Object.setPrototypeOf(newObject, prevObj);
		return newObject;
	},
	makeComplicatedObject()
);
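
As a quick sanity check on that construction (not in the original post), you can count the chain's depth by walking Object.getPrototypeOf until you hit Object.prototype:

```javascript
// Rebuild the same structure as above, then measure it.
function makeComplicatedObject() {
	return Object.fromEntries(
		Array.from({ length: 100 }).map((_, index) => [
			`key_${index}`,
			`value_${index}`,
		])
	);
}

const deeplyNestedObject = Array.from({ length: 100 }).reduce(
	(prevObj) => {
		const newObject = makeComplicatedObject();
		Object.setPrototypeOf(newObject, prevObj);
		return newObject;
	},
	makeComplicatedObject()
);

// Walk upward until we reach the root of ordinary objects.
let depth = 0;
let current = deeplyNestedObject;
while (Object.getPrototypeOf(current) !== Object.prototype) {
	current = Object.getPrototypeOf(current);
	depth++;
}

console.log(depth); // 100
```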

I then ran a benchmark between that deeply nested object and another object with the same key/value pairs, but no huge prototype chain. Each snippet of code would simply log another local variable. Since it's inside the with(), however, it'd be forced to first wait for those objects to be crawled.

The results shouldn't be surprising. The "flat" version of the object was ~36% faster to search.

screenshot of with() benchmark results

That makes sense. It didn't make with() traverse that nasty prototype chain. And I'm willing to bet that 99.99% of the real-world code using with() doesn't need to either.

My Own "Limited" Version

I had a fun time building my own version of a more "limited" with(), by the way. It uses a simple Proxy to make with() think the object only "has" a property when it's one of the top-level keys:

function limitedWith(obj, cb) {
  const keys = Object.getOwnPropertyNames(obj);
  const scopedObj = new Proxy(obj, {
    has(_target, key) {
      return keys.includes(key);
    },
  });

  return eval(`
    with(scopedObj) {
      (${cb}.bind(this))();
    }
  `);
}

The benchmark results weren't too bad either, despite using JavaScript to solve what the native code powering the engine could undoubtedly do better:

benchmark comparing limitedWith() vs. with()

Of course, this is only one optimization on the table. I also don't hate the idea of being able to pass a target scope into with(). By default, it'll search for a variable all the way up the scope chain. But if a particular value is passed, it'd be limited to that scope:

with(someObject, { scope: 'module' }) {
	// outside of `someObject`, 
	// only the current module scope 
	// would be searched.
}

I'm sure there are good challenges to these modifications in & of themselves. If you have any feedback on them, or other ideas altogether, I'd love to hear it.

You don't know every use case.

These issues aside, "you'll never have a legitimate reason to use [insert tool]" is one bold claim, and a difficult one to defend. And it certainly applies to with(). When people discourage its use, they're very likely thinking within a certain range of possible circumstances, and making several assumptions along the way. They might include:

  • Execution speed is of utmost importance.
  • The syntax is confusing.
  • It's an inefficient way to accomplish a task.
  • This code will always be running on the main thread.
  • ... many more.

Assumptions like this, by the way, are very often accurate, and worth keeping loaded in your brain. But they're not accurate all the time. They often neglect the particular set of trade-offs an engineer is being forced to make, the purpose of the tool they're building, or other factors.

The best evidence for this is the fact that really good, reputable libraries written by really smart, discerning engineers still have with() in their codebases today. One of them I've come across is partytown, and I'm very confident other examples exist as well. It may make your head tilt to read that, but when you dig into what libraries like this are trying to achieve, it might start to level again.

Partytown, for example, is rather unique because much of it doesn't ever execute on the main thread, and so it runs under a fundamentally different set of performance constraints than most other libraries do. It's arguably a special case, but at the very least, it butts up against the claim that with() ought to be universally shunned. You've got to have a really, really good case to take a feature away, after all, especially when it's been a part of the language for so long.

TL;DR

Let's review all that word vomit above.

  • Yes, there are some unique challenges & risks in using with() (although they aren't as bad as they were prior to ES2015).
  • No, the recommended alternatives aren't good enough.
  • Those challenges & risks are often overblown anyway.
  • Still, we can probably build a more responsible version to replace it.
  • You're probably not justified in universally discouraging a feature that's been around for a very long time.

You know there are holes in this. I'm possibly making unfair assumptions of my own, or missing some key risks that deserve a place in the conversation. If that's true, drop them in the comments, find me on X, or write your own scathing blog post in response.


Alex MacArthur is a software engineer working for Dave Ramsey in Nashville-ish, TN.
Soli Deo gloria.
