Will Sentance, educator and co-founder of Codesmith, joins SE Radio’s Adi Narayan to discuss the evolution of JavaScript and modern best practices. They begin with JavaScript’s origins as a simple scripting language and its growth into the backbone of modern web development, highlighting the core theme of the “don’t break the web” constraint. The requirement that JavaScript must remain backward-compatible has shaped everything from naming decisions (e.g., flat instead of flatten) to the introduction of Symbols as a collision-safe way to extend objects.
Will explains how the TC39 group uses the open-source community as a filtration system, absorbing user land patterns (like those from Lodash or Moment) into the standard library only once demand is proven. The upcoming Temporal API is highlighted as a major win for native date/time handling. On the engine side, Will discusses the shift toward monomorphic object shapes in the V8 JavaScript engine for better just-in-time (JIT) compiler performance, and how developers can now write more engine-aware code. The conversation also touches on LLMs in coding: Will’s view is that AI tools are useful but risk atrophying developers’ under-the-hood understanding, which remains essential for debugging complex, production-scale systems.
Brought to you by IEEE Computer Society and IEEE Software magazine.
Show Notes
Related Episodes
Other References
Transcript
Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.
Adi Narayan 00:00:18 Welcome to Software Engineering Radio. I’m your host, Adi Narayan. Today’s episode is about the evolution of JavaScript and how its new features can allow developers to make their apps more performant and optimized for sophisticated browsers. Here to talk about it is Will Sentance. He’s a seasoned educator and one of the founders of Codesmith, which runs full-stack and machine learning training programs. He’s also a visiting fellow at Oxford University working on AI, and he has created a course on the Hard Parts of JavaScript, very relevant to our conversation, which you can find on FrontendMasters.com. Earlier in his career, he was one of the founders of Icecomm, a tool which allowed developers to add P2P video and audio chats to their websites. Will, welcome to Software Engineering Radio.
Will Sentance 00:01:00 Thank you for having me, Adi. It’s a pleasure to be here.
Adi Narayan 00:01:03 Is there anything that I missed out that you’d like to add to that introduction?
Will Sentance 00:01:06 Yeah, only that now I work out of South Park Commons, a community in San Francisco, where I work on embodied AI developer tooling. So, this is the transition of software engineering from the, I guess, virtual world to its potential in the physical world. And so, building the tools, the Twilios, the Stripes, the MongoDB equivalents for the physical revolution of AI that I think is coming.
Adi Narayan 00:01:31 A hundred percent, and it’s a topic that we hope to cover in a future episode for sure.
Will Sentance 00:01:35 Absolutely.
Adi Narayan 00:01:36 So let’s get started. Will, you’ve been living and breathing JavaScript so to speak for a long time and you’ve been teaching folks how to get really good at it. To get started, can you give us a brief history of JavaScript and how it came about and how it evolved?
Will Sentance 00:01:49 Absolutely. So, I think most people listening will know that JavaScript started in the 1995 era, created in a very short period of time, something like 10 days supposedly, and was never really expected to be the home of all the modern dynamic web applications that it’s become. By the early 2000s, the creative folk behind web browsers were starting to realize the limits of plugins, so things like Java applets or Flash that were at that point the default techniques for adding interactivity to a web application, and started pushing the limits of JavaScript, both with existing features and by starting to introduce new features. So, one of the pioneers of that of course is Google Maps, as I’m sure again many of your listeners will know, and they may even have a memory of first interacting with it in the web browser. A pretty amazing moment. I think my first light bulb moment was a site called Netvibes.
Will Sentance 00:02:47 Sure, almost nobody remembers it, but it was a dynamic homepage with draggable components, something that we now take for granted, but back in 2004 it was a really significant moment in what you could imagine being built within this environment, the web browser. Fast forward to today and JavaScript is now a much more mature language. That’s a combination of the TC39 committee, which has been responsive to needs from the community for nearly three decades now, adding features, but also a coherence among browsers. So browser developers, backed by pretty substantial organizations, working incredibly hard to evolve their architecture under the hood to make sure that even as a huge number of features have been added and web applications have become far heavier in the workloads they’re handling, the browser is still able to handle that and has the capacity to handle it, both in the architectural decisions around it, but also in the features available to developers in JavaScript.
Will Sentance 00:03:50 And that’s not just JavaScript, it’s also the web browser APIs that are part of the experience. I think a lot of us might sometimes get confused what is JavaScript and what are web browser APIs? Things like fetch or the access to the camera or local storage or being essentially browser features not developed by the ECMO committee. That’s the group that works on JavaScript itself, but they’ve all started to play hand in hand together much more. And so, while today many people write some abstraction level above, maybe TypeScript, maybe React at the core, JavaScript is now a fairly mature and stable language. What we can talk about is its additive nature and a pen only system because while a Python or a Java might have a version release cycle where at some point a previous version is deprecated, JavaScript within the browser needs to be backwards compatible within reason all the way back to 1995. And so that’s the constraint in which this language has always been evolving, and I think we’ll probably talk about some of the decisions that have been made that are a result of that a 100%.
Adi Narayan 00:04:53 And I guess that constraint, as it’s called, the “don’t break the web” constraint. Given that sort of constraint exists, what has that led to? What kind of improvements have the guardians of JavaScript not been able to make because of that constraint?
Will Sentance 00:05:07 Yeah, I mean there are probably many we don’t realize without looking at the committee notes or the conversations that have gone on behind the scenes, but some really surfaced to public knowledge. One was in 2018, I think a pretty notorious, or not notorious but significant, moment when JavaScript developers increasingly recognized that what we might call user land, that is, libraries developed by the public community, so things like Lodash, had many functions that you would perhaps take for granted in other languages like Python, developed by the community but not available within JavaScript itself. So particularly array methods. That was around the time when the functional paradigm, and thinking about data as something to be transformed from function to function without directly mutating it, became really demanded by the community. And so mapping and flattening type higher-order functions became something that was requested. And in 2018 it reached the point where a flatten array built-in method was proposed.
Will Sentance 00:06:11 Great. Sounds good. Until you realize that there was a mood tools library that was still being used by thousands of applications that already had a built-in flattened property available to developers to use directly on their arrays through the prototype by mutating it. And that would now be overridden if a flatten method was added at the language primitive level, the standard library level. And so, after much debate including potentially calling it smoosh, which would’ve been smoosh gate much better names. And they came known as Smoosh gate. The agreement came to create a flat array method, and when you think about wanting your method names to reflect their action, flatten is more appropriate. But in the end, flat was chosen because the legacy of previous libraries that still needed to be compatible with restricted what was really available, and that is the, again, guiding principle of JavaScript’s developers.
Will Sentance 00:07:09 So I think one of the most significant steps the language creators took was introducing therefore an entirely new primitive known as a Symbol. And I actually did a new version of the hard parts of JavaScript a few months ago and did a whole section on Symbols. They’re one of the most counterintuitive constructs that you could possibly imagine until you realize that they are designed essentially as a workaround for not adding new properties to objects where there is a chance of collision, and I don’t mean collision in the hash map sense, but just collision of a name within JavaScript itself with existing libraries or existing developers, custom properties added to an object, meaning that if you are a developer and you’ve written codes that’s going to loop through all your publicly available properties on your object and suddenly there’s a new property added by JavaScript’s core team, then suddenly you are going to be displaying a lot more than you expected on the UI.
Will Sentance 00:08:05 And so a new primitive was introduced, the symbol that gives JavaScript and then also developers through meta programming techniques, the ability to add new features to objects without the risk of clashing on existing properties. And this is the result of having a language that needs to be backwards compatible. There are no, this is 2.7, if you are using it, you need to use the existing features, but if you want to use the new features, you need to accept the entire code base will be deprecated unless you upgrade to the new version. Doesn’t exist in JavaScript. Fascinating constraint that’s led to things like symbols that allow you to add properties that are only accessible through certain entry points within the code, meaning that the change in ability, the change in behavior of JavaScript is possible but not without conscious decision by a developer to use it. Got it. No unconscious accidental interaction with it.
Adi Narayan 00:09:01 In terms of the constraint, is it right for me to summarize that right at the beginning you had a lot of user land libraries, like Lodash, like Moment, that do fairly standard things like formatting time, being able to format strings, and so forth, which people have gotten so used to that it’s impossible for TC39 to introduce these fairly standard features into the language without essentially forcing people to stop using those libraries. Is that accurate?
Will Sentance 00:09:27 I think it’s accurate. I think at this point everybody in development is excited by the new built-in Temporal feature of JavaScript that will allow a more modern interface for time, because the Moment library, while there are lightweight versions, is quite a heavy library just for access to time features that work as you might expect and that tie to the ISO standards for thinking about time. And so yeah, the changes still get made, but the day that all developers move off Moment and use the Temporal object directly, I mean, that’ll be a surprise, because for your code base, and this is what we’ll talk about later I’m sure, the individual JavaScript features that you might benefit from switching to, new vanilla features, new built-in standard library features, it’s a question less of performance and more of maintenance. The gains to maintenance are potentially significant, but the legacy dependencies on these libraries may sit across millions of lines of code. And so, switching out is no small feat.
Adi Narayan 00:10:31 Agreed. Now, what actually led TC39 to start absorbing these user land patterns into the language? Was it mainly driven by performance, security concerns, or was it a case of “it’s high time we do this, now or never”?
Will Sentance 00:10:44 Yeah, that’s a great question. So, I think we can agree that JavaScript’s standard library is still surprisingly thin. So, one thing engineers still struggle with — or did until very recently — is deep cloning for immutable objects, and I think the hack has been the JSON.parse / JSON.stringify step that maybe we’re all familiar with. But that’s a performance hit undoubtedly, and that comes again from this concern: do not break the web, this defining principle that they’ve experienced for so long. However, because the library is so thin, the community has developed great tools to fix that gap. I think the way that TC39 thinks about it is: if the community develops, almost as an experimental approach, then at a certain point we see the demand is significant enough that we then add and develop that feature ourselves. So, I think in some cases it’s performance.
Will Sentance 00:11:38 With deep cloning, I think we’ve got finally structured clone, but that is definitely to overcome the performance hit of using either a Lodash library, which is perhaps less of a performance hit, or more likely a JSON parse JSON stringify. For others, I think it’s a responsiveness to the community, let it be fought out in almost user land as R&D, and then TC39 only absorbs the patterns once the community proves they’re essential. It’s almost a filtration system. So, I think performance is important but also if you are a language that is used so broadly in so many different domains — not just a browser of course, but node, and Bun, and everywhere else — you can use your community and the development of their own libraries as a filtration system: what really emerges is the standard universal need? Okay, it’s been around for long enough, now we integrate it.
Adi Narayan 00:12:30 That makes a lot of sense. Do they end up waiting too long, though? Something new came up, everyone started using it, and you’ve waited so long that people are just going to be resistant to switching to anything new?
Will Sentance 00:12:41 I think the answer is definitely yes before 2015. Since 2015 the release cycle is much more reasonable, and now I think it’s fair to say that you actually see features shipped at a sustainable rate, where there is enough awareness that what we call in robotics the end effector, in this case the browser or the Node environment, also has changes. So, the needs come and go. So, patience around that is a suitable thing when you are defining a standard library that is used in so many different end effectors, but equally there needs to be a steady, regular pace. It was only in 2015 that that really started; you started getting yearly releases. Until 2015, if there were yearly releases, they weren’t thought about as a mature cycle. And 2015 was when I think the shift really happened. So, what you describe as the slow pace, I think that’s really no longer an issue, but it really was before ES2015 for sure.
Adi Narayan 00:13:40 Makes sense. And thank you for clarifying that. As an instructor who sees thousands of developers, do you find folks eager to try out these new features, or is there a tendency to go with what works, because there’s so much code out there that still uses the user land libraries, to just continue using those and not try out the new stuff?
Will Sentance 00:13:58 I think the former, when people are really looking for a performance upgrade, where a feature adds a serious performance upgrade, things like structuredClone, things like the Temporal object, for want of a better term, when the Temporal feature is released. I think you see that. But performance gains within the given language itself are so often secondary to all the other performance killers that step ahead of unoptimized JavaScript, right? So, I talk to engineers on the team, and obviously alums of Codesmith working in enormous code bases, and far more of an issue for them are things like unoptimized fetch requests that pull an entire... for example, I was talking to an engineer who works at a sports data company, and they pull a single player and they get their entire team history, their entire game history, their entire interaction with other players in different games, and that’s the API available.
Will Sentance 00:14:58 And so far, more often the pressure is to fix the fetch than it is to improve an edge case performance gain that comes from switching from a Lodash to a pure Script feature. There are exceptions for that of course, but in terms of performance, I think it tends to be secondary. Now where I think there’s a lot of interest of course is library creators. So, when you’re a library creator, the ability to use native features that are more performant and easier to write and easier to reason with, that’s where you start to see a change. And I think you see that also over years when some of the features built in are super similar to other languages. So I think the await feature is one of the most broadly adopted feature of JavaScript and not a surprise that it’s a feature that is very analogous to other languages and also a feature that reverts the order of control to give the developer at the very least a sense that the code is executing line by line down the page and not one of the hardest things to reason about, which is a function definition being passed to a function call and for you to understand as a developer that somehow that function definition is not going to be executed until some point at a later stage when some actions, some IO happens.
Will Sentance 00:16:17 And only then is that function called, and by the way, you better have any work you want to happen on that call be inside that function definition. That is the most challenging mental model for people to get their heads around. Whereas once you introduce an awake keyword, it’s code linearly down the page until you hit a wait, do the async task, get the result back, and then code continues down the page from there. The problem is, of course that does not reflect the actual execution pattern. If that await is within a function, an async function, then the code outside that async functions call will continue and your awaited code will not execute until later, until all global codes finished executing. So yeah, that’s the problem when you have really appealing abstractions, really appealing new features and when they’re not understood, there’s a huge disconnect between how it really works and maybe what it looks like on the page and that’s when you get those bugs that unless you have under hood understanding and that’s a passion of mine, is to give people that under hood understanding you are not going to be able to debug.
Adi Narayan 00:17:17 Makes a lot of sense. I think it’s useful for listeners to also understand JavaScript as a language. It’s always been the situation where you can write whatever you want and it’s the job of the engine to fix it, to essentially save your bad code. And that’s shifting now, right? It’ll help to sort of understand why this tendency to save bad code exists within JavaScript.
Will Sentance 00:17:36 Yeah, and it is shifting. So I think from the very start, there was an inclination to have JavaScript be a very flexible language. There are a couple of reasons for that. One, it was a scripting language, in theory for people who were primarily thinking about the pixels on the page, or communicating text on the page, to be able to add a little bit of dynamism. That’s it. That was its goal. So, from the start it had a very flexible structure, because a scripting language is not designed to be a robust, end-to-end production application development language. Now obviously that’s no longer the case, not even close, but it was where it started. Secondly, JavaScript’s most typical environment, the web browser in which it runs, is a messy set of APIs for interacting with the page (the DOM, the document) and with other features: text, HTTP messages from the network.
Will Sentance 00:18:28 These are all external interfaces that again, from early on were across many different browsers implemented in many different ways. And so the hope was that if you let JavaScript be flexible, that including by the way automatically coercing types data types from one to another, that at that interface between the JavaScript land and the web browser, the DOM, the network that maybe JavaScript could automatically help you coerce things that came off the web browser as text strings into numbers. And of course, in theory in a small scale that might be quite helpful in practice real application scale that becomes extremely difficult and dangerous and that’s where TypeScript and or more generally type checking at every boundary between JavaScript and any IO becomes vital. But the underlying principle was coming from a relatively informed place, it just isn’t suited to this type of using that language as it is now for complex applications.
Adi Narayan 00:19:30 And I guess having it flexible really helped its growth, right? The fact that pretty much a hundred percent of the web is based on JavaScript or TypeScript is testament to the fact that this was such an easy language: the entry barrier was pretty low, and it was pretty forgiving for the most part.
Will Sentance 00:19:43 I think that’s right. Forgiving in the sense that your error is handed to the user, right, as opposed to surfacing at a compile-time stage. I think it also gave it a really rough reputation. My sense would be that what we call syntactic sugar is really dangerous, because it can create this artificial sense of understanding: I’m able to technically write it and follow it, but as soon as I hit a more complex edge case... Let’s think about the event loop. The event loop is actually a fairly clear model once it’s explicitly defined. And yet if you only see the JavaScript runtime, and you only see your code technically not erroring, because a returned undefined or an undefined value doesn’t cause a compile error, then you might think it’s working. Or you might at the very least not hit the block and feel the error and then try another tool.
Will Sentance 00:20:40 But in practice, until you have that mental model of the event loop, the callback queue, the call Stack, you are really flying blind. Once you have them though, I do think JavaScript, the flexibility is really appealing. If you have a really strong mental model of this language, and that’s why I have taught my workshops on it for years and I long ago switched to teaching neural networks and AI models, but I still come back to teaching JavaScript because you have a lot of the core principles of programming languages implemented in JavaScript. You have an event loop, you have an asynchronous IO design, you have queuing, you have a way of at least thinking about inheritance. It’s not native classical inheritance, but you have a way of thinking about with a prototype chain, you have flexible and interesting ways of thinking about persisting data between different function calls.
Will Sentance 00:21:27 You have closure to do that, or you have other patterns you can use, and the variety does become really appealing. As you say, the flexibility becomes really appealing. But I would just strongly say only once you have a clear mental model, otherwise you hear people being very critical of JavaScript because the flexibility where you don’t understand it under the hood is a curse. But when you do understand under the hood it becomes an asset, you can say, hey, our team’s going to really follow an object-oriented pattern. Yes, we know it’s not fully under the hood OOP, but we’re able to pretty much at this point emulate many of the features, particularly now that there’s private data in the last couple of years. But instead, if we want to follow a functional paradigm, we’re able to do that too. We can quite easily implement most of the functional key features all the way up to monad with JavaScript. So, I do love that flexibility. I think it’s a huge edge but only built on core fundamentals.
Adi Narayan 00:22:24 A few other things that you mentioned that I just want to make sure our listeners understand: syntactic sugar. What does syntactic sugar mean in the context of JavaScript?
Will Sentance 00:22:31 Yeah, a perfect example would be OOP, object-oriented programming. So, this is for many the essential design pattern for building complex applications. It allows you to think about things in a structured way: properties that are universal, properties that belong to subclasses, data that is shared between different features, data that is not. It’s an extraordinary way to structure a complex application. Okay, JavaScript lets you do that, and it does it using an architecture under the hood that is absolutely not classical inheritance, not classical OO programming. Instead, it uses a feature called the prototype chain that allows objects to access (I wouldn’t even call it inherit, but access) functions and features of other objects down a chain. However, on the surface the keywords used, words like new or class, are very similar. They are identical to the keywords used in natively object-oriented languages, yet under the hood there is a chain model that gives objects access to other objects’ functions rather than direct inheritance as is built into natively OOP languages.
Will Sentance 00:23:41 That creates a lot of problems, because when you use the new keyword, you may not be aware that under the hood it’s creating an object inside the function you are calling with the new keyword, adding any properties you attach using the this keyword, and automatically returning that object out. And by the way, if you call that function without the new keyword, it’s going to attach all those properties to the native global object. Then you are going to look at JavaScript and go, what the heck? This doesn’t mirror anything that I know about OOP. And so, the syntactic sugar tricks you into thinking it’s native OOP.
Adi Narayan 00:24:13 Essentially, it’s a little bit of an illusion. So even though it looks like OOP, it’s not actually that and you can continue using it that way, but if you understand how the internals work, you can get the full benefit of the way it’s designed, correct?
Will Sentance 00:24:26 The full benefit, but also have it not do things you don’t expect. So, in a classical OOP language, you’re not able to use a constructor function without the new keyword. In JavaScript you could use the very same function that you expect to be constructing an object without the new keyword, and it will create the set of properties you’ve passed to that function, but unfortunately they’ll be attached to the default global object that’s available in JavaScript, and you won’t get an object returned out, because it’s the new keyword that automates the process of creating an object, adding properties, and returning it out. The fact that this is done not as a base-level feature but as syntactic sugar that manually creates your object under the hood means you can still run that function without the new keyword, and it will try to add properties to its default object, not the object you want it to add them to. That creates a lot of frustration in engineers. One of the key questions that Google asked of any engineer working with JavaScript, their favorite question, was: what’s the new keyword doing under the hood? Because they know that that, along with “how does closure work,” is essentially the “tell me you really understand how this system is working” question.
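A rough sketch of what the new keyword automates, with a hand-rolled equivalent for illustration (manualNew is an invented helper, not a real API):

```javascript
// With `new`, JavaScript creates a fresh object, points `this` at it,
// links it to the function's prototype, and returns it automatically.
function User(name) {
  this.name = name;
}
User.prototype.greet = function () {
  return `Hi, ${this.name}`;
};

const u = new User('Ada');
console.log(u.name);    // 'Ada'
console.log(u.greet()); // 'Hi, Ada' — found via the prototype chain

// Roughly the same steps, done manually:
function manualNew(Ctor, ...args) {
  const obj = Object.create(Ctor.prototype);      // 1. create object, link prototype
  const result = Ctor.apply(obj, args);           // 2. run body with `this` = obj
  return result instanceof Object ? result : obj; // 3. return the object
}
console.log(manualNew(User, 'Bob').greet()); // 'Hi, Bob'
```

Calling `User('Carol')` without new would run the body with `this` pointing at the global object in sloppy mode (or undefined in strict mode), which is exactly the surprise Will describes.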
Adi Narayan 00:25:36 Makes sense. Let’s switch gears a little bit, because I think we are halfway through the interview now. When we talk about JavaScript, the word engine gets thrown around a lot. When JavaScript engineers talk about the engine, be it Google’s V8, Apple’s JavaScriptCore, or Mozilla’s SpiderMonkey, what are they actually referring to in terms of the stack? Is it a virtual machine, a compiler, or something else?
Will Sentance 00:25:55 It can be a number of things. I think one thing to recognize is that it is not the entire web browser. It is an engine that can be separately run in many environments: both the web browser, but also directly on your core system, as in Node, where it therefore has to be supplied with all the other pieces you’d need if you are running on your system directly, like the ability to have IO with the network. If you’re in the browser, it needs to be integrated with the browser features. That’s why it’s quite hard to pin down exactly what we mean when we speak about an engine, because of course the V8 engine from Google is integrated into the Chrome browser. So, if you are working on the V8 engine and there are opportunities to optimize directly with the DOM or with the web browser features, you are working closely with the web browser developers.
Will Sentance 00:26:45 And so that’s I think why it’s quite hard to pin down what people mean exactly when they say engine because the engine doesn’t exist in isolation. In fact, most of what we do in JavaScript is writing code to interface with features outside of the core engine and I think that’s where a lot of the development has happened as well. When we switched from XML HTP request to fetch, that is a web browser feature, but one developed hand in hand with the digital engine team. I think one that’s particularly a shift for quite seasoned engineers is the engine has been very helpful at dealing with dynamic code. So, adding properties to an object at any time and knowing that the engine will probably optimally treat that in a way that is performant, that can also lead when those engine changes happen to anti patterns. And I think we are finally, this may be something that listeners are aware of within the just in time compiler, which is the backbone of JavaScript engine from V8, from the Google team, the engine now optimizes for what’s called monomorphic shapes.
Will Sentance 00:27:51 So that’s where your object is when defined continues to follow that pattern, that shape throughout its lifecycle and adding properties, then mid execution breaks the internal blueprint that’s being built at this just in time compilation stage and then kills performance. So that’s the challenge when thinking about engine optimizations is you are firstly working with an engine that is integrated within a browser. So even pinning down the exact features of the engine requires you to think about how it interacts with a browser. But secondly, there are changes only within the last year that throw out what were considered previously senior engineer best practices. So, I think we are in a moment of change there and it’s going to be interesting to see how well communicated it is. The JavaScript engine, at least the V8 engine now allows for optimizations that you can meta programming style add as comments that give information to the engine on how it should approach the code. That’s something new and I think we’re going to see that play out and see how developers pick it up.
Adi Narayan 00:28:57 Essentially, we are moving to sort of a paradigm where the developer explicitly instructs the engine as opposed to the engine saving bad code, which is how it was in the olden days.
Will Sentance 00:29:06 Correct.
Adi Narayan 00:29:08 Staying with that, how engines give you a lot more control: can you give us some examples of the things that developers can do, in terms of how you can instruct the engine to optimize one thing or the other?
Will Sentance 00:29:18 The main one is consistent monomorphic classes, or consistent monomorphic objects. That is: define your object, and define on it the properties that will be used, even if currently null or undefined, to give the compiler the pre-programmed shape for the object. Then do not add properties dynamically afterwards, because at that point the blueprint that gets created for the object, the sort of fast rails to accessing that object, has to be thrown out, and you have to switch back to the slow rails. That's honestly the main one that I'm aware of.
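A minimal sketch of that advice, assuming V8-style hidden classes; the names here (makeUser and its properties) are illustrative, not from the episode:

```javascript
// Recommended: every property declared up front, even if not yet known,
// so every user object shares one stable shape and property access can
// stay on the engine's fast path in hot code.
function makeUser(name) {
  return { name, email: null, lastLogin: null };
}

// Anti-pattern: adding properties mid-lifecycle forces shape transitions,
// which is what throws out the JIT's blueprint for the object.
function makeUserDynamic(name) {
  const user = { name };
  user.email = null;     // shape transition
  user.lastLogin = null; // another shape transition
  return user;
}

const a = makeUser("ada");
const b = makeUser("lin");
// a and b were created with identical layouts, so a hot loop reading
// a.name / b.name sees a single shape (monomorphic access).
```

Both functions produce objects with the same final properties; the difference the engine cares about is how the shape was reached.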
Adi Narayan 00:29:53 Consistent monomorphic objects, just to make sure I understand that: essentially a data structure that maintains a singular, unchanging shape throughout its lifecycle, right?
Will Sentance 00:30:01 Correct. Exactly.
Adi Narayan 00:30:02 And having that, how does it change things? How does it change an app, or how does it change the developer's experience?
Will Sentance 00:30:07 Honestly, what the V8 team says is that this allows more performant access to object properties at runtime. What it means for a developer is that your standard approach of treating an object as a flexible data structure, one that you can add and remove properties from throughout your application's lifecycle, becomes a suboptimal approach. Among the other features JavaScript has added that I think are interesting and that empower developers, I do think that Symbols add a whole new set of tools for developers to change how objects and iterators work at the application level. And so, in the Hard Parts workshop, I walk through the Symbol principle, and it allows you to manually override default features of objects and of iterators to follow patterns that you want. And that's very empowering to engineers: the chance, at the object level, across everything within the library, to change the behavior to your particular target. That can be anything from as trivial as logging an object and having your own description rather than [object Object], which is excellent for debugging, all the way through to manually changing how iterators, a native JavaScript feature, iterate over an object or over a set of tasks. And that metaprogramming ability, where you are changing the behavior of JavaScript without breaking the system, that's I think a really interesting expansion of what JavaScript does.
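Both uses Will mentions map onto well-known Symbol hooks; the TaskQueue class below is an invented example, not something from the episode:

```javascript
// Symbol.toStringTag replaces the generic "[object Object]" description,
// and Symbol.iterator redefines how the object is iterated, without
// touching any other behavior of the language.
class TaskQueue {
  #tasks = [];
  add(task) {
    this.#tasks.push(task);
    return this;
  }

  // Debugging hook: default stringification now reports "[object TaskQueue]".
  get [Symbol.toStringTag]() {
    return "TaskQueue";
  }

  // Iteration hook: for...of and spread yield tasks in insertion order.
  *[Symbol.iterator]() {
    yield* this.#tasks;
  }
}

const q = new TaskQueue().add("build").add("test");
console.log(Object.prototype.toString.call(q)); // "[object TaskQueue]"
console.log([...q]); // ["build", "test"]
```

This is the "change the behavior without breaking the system" point: code that never touches these Symbols sees a perfectly ordinary object.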
Adi Narayan 00:32:07 So essentially, you’re creating a data structure that’s specific to your needs and you are tweaking the behavior of underlying JavaScript functionality so that it’s optimized for that shape.
Will Sentance 00:32:17 Correct. This becomes particularly valuable, though, to library creators. Library creators are especially sensitive that their library doesn't break the code bases people are using their library in, and when they update or release a new version, they want to minimize the number of changes developers need to make to ensure the new version of the library works correctly. And so, for them to be able to add features, genuinely as simple as having logging expose a more relevant piece of information than just "object", is extremely valuable. So, I think some of these more uncommon features that have been released in the last couple of years, particularly Symbols, are particularly valuable to library developers, and I think that speaks to a lot of where JavaScript has gone. You know, as many people as write JavaScript, there are more people, or at least as many, writing React.
Will Sentance 00:33:12 And it was funny, I was talking to an engineer a couple of days ago and he said, is React the last framework? Now, we already know there are new frameworks, there are new approaches, but is React the last framework? At this point React is so embedded that you can imagine, for the JavaScript team, one of their goals is: how can we ensure that React developers, that is to say people building the React library, are able to continue to iterate and continue to solve problems? Can some of these features they've introduced, like Symbols, that give control to React library creators, empower developers indirectly, via the React library, via the Bun library developers? Can we enable them to enable developers in the community? And I think when you look at some of the JavaScript features, it's less about serving the broader developer community and more about serving the people building abstractions on top of it, abstractions that have become so mainstream as to be thought of as almost part of the JavaScript standard library by proxy. So that's TypeScript, that's React.
Adi Narayan 00:34:11 When you say library creators, that phrase almost underplays what they are. The libraries you're talking about are things like Node.js, React, Vue.js, right? They're huge systems in their own right. And when you're talking about library creators, you're sometimes talking about entire companies, or huge open-source communities, that are responsible for maintaining these tools that so many folks live and breathe these days.
Will Sentance 00:34:35 And their core constraint is very similar to JavaScript's "don't break the web": don't break the code bases using our library, even as we want to add new features. My kind of hypothesis is that increasingly the JavaScript engine teams, and TC39 particularly, are thinking in terms of those needs, maybe more than those of the engineers directly working with JavaScript, because most of the population is working with those core libraries.
Adi Narayan 00:35:02 So the library creators have a lot of power here, in terms of using the core features in the library itself, but also in encouraging users to experiment with some of these features, perhaps?
Will Sentance 00:35:14 Yes. Look, we are not in the days of Angular, where the library artificially created its own $scope property, truly removing the JavaScript layer entirely. We're actually in an age where library developers love to use native features as much as possible. They love to surface native features as much as possible. I think that's for a combination of three things. One, performance; that makes sense, might as well use the native feature where you can. Two, in terms of people's ability to understand and reason about the problem, there's nothing like creating an abstraction layer, a domain-specific language, for getting in the way of that. I often thought of Angular as almost being a domain-specific language sitting almost independent of JavaScript. I think the early frameworks often did that. The creators of React and Node want to be able to use native features because it makes it much easier to reason about, and they are confident, hopefully, that people do understand ES modules.
Will Sentance 00:36:09 They do understand the asynchronous patterns built into JavaScript. But the third reason, I think, is that it makes for a more transferable library. If your library is an abstraction layer on top of the universal language of JavaScript, then when you only want to integrate it into a piece of your puzzle, or want to have other libraries integrate with it, the overhead is far, far less than if you are building with a library like the earlier versions of Angular that just abstracted most of JavaScript's core features. How do you integrate an Angular $scope variable into a library that wants to work with JavaScript's native scoping rules? I think it's very, very difficult. And so, I think the third reason is that library creators benefit from their libraries and frameworks being as close to the metal, as close to the core standard library features, as possible, because that is just a massive developer maintenance improvement.
Adi Narayan 00:37:06 And it improves interoperability, as you just pointed out. Yeah, thank you for clarifying that; I think that was explained clearly. At this point, I have to ask the mandatory AI question. Given that everyone's using Claude and Gemini and other LLMs, can code-generating LLMs accelerate the transition? Can they help folks get better at JavaScript, or are they effectively reinforcing legacy patterns because the training dataset is essentially really old userland code?
Will Sentance 00:37:32 Yeah, I mean, that's another reason why people think React is the last framework, because for whatever reason the LLMs love React. I think of the stories of XMLHttpRequest being the recommended line of code, things that are 10-plus years old. Although, by the way, no hard feelings toward XMLHttpRequest, it was not a terrible API. Or maybe it was a terrible API, but I think that...
Adi Narayan 00:37:53 Let's not go there.
Will Sentance 00:37:54 Yeah, exactly. I used to give talks on it; I guess I was very relieved when fetch came along. But there are a couple of things to say here. One is, I don't really think that legacy code is the body of the latest models. The latest models are very well suited to at least relatively recent, if not right-up-to-date, approaches, for the more common code bases like React. Now, that doesn't necessarily translate to code bases that are less available in, basically, GitHub repos. It doesn't translate to a company's own internal patterns most of the time. It certainly doesn't translate to the ability to reason through what the right architectural choice might be. On the flip side, there's Andrej Karpathy, who you could easily dismiss as somebody who does a lot of talking online and is very hyped on what AI agents can do for coding.
Will Sentance 00:38:47 But he's also somebody who's shown enormous commitment to explaining, under the hood, how these things are working, and his conviction that the models in the last two months have reached a new level of performance, I think that's probably fair. Now, that being said, those are usually greenfield projects. For large legacy code bases with complex existing standards and design patterns, my question is less "do they imply or encourage patterns that are outdated" and more "do they discourage, or do people lose, the muscle to go under the hood?" Which, in the end, for those really complex edge-case challenges, remains a vital part. We are essentially moving between two abstraction levels, and we think about this when we're teaching, and I think about this when I'm building workshops or in my own development. One of those levels is within a given runtime: understanding how a given feature, a given line of code, is really executing.
Will Sentance 00:39:40 That may be something where you can get the code snippet or have it automated, but there'll be moments where it's not. Simultaneously, you're increasingly working at another abstraction level, the system level, where you are thinking about multiple different runtimes, multiple different network-connected devices, and you are trying to orchestrate agents across all of them and have tasks completed. Those two abstraction levels might almost be comparable to when we moved from languages with heavy manual control of memory allocation to more dynamic languages, and had to work in both at the same time. I think we have to think in a new way, maybe more similar to 30 or 40 years ago when that transition was happening, about what engineering means. We have a higher abstraction level where we are working with agents and evaluating their output, maybe literally writing evals to do so, and then we have a lower abstraction level, which might be a given runtime, where exactly how the event loop works will be a determining factor in whether your code functions or not.
Will Sentance 00:40:39 And no amount of Claude agentic prompting will help you make the optimal decision at the given runtime level when a performance problem emerges because you've not understood how JavaScript's automatic garbage collection works, and you've got a memory leak because you've not really understood how closure works, how a function's associated memory works, and that those persist even after the return from another function. So that understanding at the runtime level, to me, remains crucial. So, for me, my biggest emphasis for thinking about LLM-assisted coding is building an under-the-hood understanding of the given runtime simultaneously with building an under-the-hood understanding of the system-level runtime in which the agent operates. And if you have both of those, then you can tackle any problem. You can tackle the agentic Ralph loop that's supposedly running for two hours but then bounces out and you can't work out why. Oh, it's because the compaction stage has cut the loop from your task. And you can also work at the runtime level, where your function call is still returning undefined because you've not understood the order of code execution, because you don't understand the event loop. So, I think it's under the hood on both.
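The closure-memory point can be made concrete with a small sketch (the names are invented): data captured by a returned function stays reachable, and therefore alive, after the outer call returns.

```javascript
// `count` persists after makeCounter returns because the returned function
// keeps it reachable. Useful for state, but the same mechanism pins large
// captured data for as long as the closure itself lives.
function makeCounter() {
  let count = 0;        // survives the outer call's return
  return () => ++count; // the closure holds a reference to `count`
}

const next = makeCounter();
next();
next();
console.log(next()); // 3, state persisted across calls

// Leak-shaped variant: the whole million-element buffer stays alive because
// the closure references it, even though only its length is ever read.
function trackSize() {
  const buffer = new Array(1_000_000).fill(0);
  return () => buffer.length;
}
const size = trackSize();
console.log(size()); // 1000000
```

If `size` were stored somewhere long-lived (a cache, an event listener), the buffer could never be garbage-collected, which is the kind of leak Will describes needing an under-the-hood model to diagnose.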
Adi Narayan 00:41:53 And I think it's such a powerful tool to learn with, right? Because before, you'd have to go look at the API reference, or look at some textbook somewhere, or go look at Stack Overflow to understand what's going on. Now you can question it to see what went wrong, what the event loop does; you can get an under-the-hood understanding right then and there, provided you put in the effort to understand it. Or you could just prompt your way through: just give me JavaScript to do what I want, take it, paste it, move on. Then you're not going to understand it. But I guess if people put in the effort to understand what's happening under the hood, like you said, at a system level and also at the runtime level, their understanding of the powerful aspects of the language would really grow.
Will Sentance 00:42:26 And isn't that exciting? I love that that's still the edge.
Adi Narayan 00:42:29 100%, but it takes time.
Adi Narayan 00:42:32 Like every software engineer now, you've got to do so much: you've got to do the system-level stuff, and you've got multiple agents doing so much of the coding. How much do you do yourself, and how much time do you set aside to learn? That becomes the challenge.
Will Sentance 00:42:42 Oh my goodness. Yeah, I'm working on a Hard Parts at the moment on agent design. I think that is the new runtime; it's the equivalent. It has the same stuff: it has memory, the context; it has, essentially, a pointer that's executing, the LLM interpreting the next step to take; and then it has IO, which may be across multiple different components of a system. And having a mental model of that should, in theory, much as having a pretty complete mental model of a programming runtime does, help accelerate the ability to build understanding. It's the principles, the foundations. My goal always with JavaScript the Hard Parts, let's call it programming the hard parts, was: if you understand the underlying patterns, then when a new feature is released, or a new library uses a feature you haven't used before, you are able to unpick it, because you understand how the pieces under the hood join up. I think what we've not yet had is that same sort of approach at the agentic level, because, A, it's all being worked out right now, and, B, it can appear to go a long way, a bit like maybe JavaScript did for people. It can appear to go a long way before you hit a problem. But when you do, that's when the system-level understanding under the hood, I think, will become priceless.
Adi Narayan 00:43:56 Is there an example you like to use, when you teach, of where this level of understanding explicitly leads to a tangible performance shift in some tool that folks build?
Will Sentance 00:44:06 Yeah, so I mean I don’t know if you are thinking about the agentic setting but, in the agentic setting, so we probably all heard about the Ralph Loop.
Adi Narayan 00:44:14 I meant in the conventional setting, just in terms of JavaScript, when it comes to instructing the engine, those kinds of places.
Will Sentance 00:44:20 Yeah. I was giving a talk at the BBC, which I adore, I'll put that on record. Really, I live in California, in the San Francisco Bay Area, and have done for many years, but I continue to feel great gratitude for the best of the BBC. Obviously there are always complications, but I felt very lucky when I have given long-form courses to the BBC's engineering team. I gave the Hard Parts of Asynchronous Programming, and an engineer came up to me and said, hey, I work on BBC iPlayer. BBC iPlayer is the UK's version of Netflix; he was describing 5 million concurrent users at peak times, if not probably far more by now, and it's built on Node. And he was sharing with me: we've had this persistent bug, and I can't remember the exact details of it, but they could not work out how to get their queuing, essentially, in order, dealing with those 5 million concurrent users across multiple machines. And I'd just walked through the event loop in Node, and in Node it's even more complex than in regular JavaScript, which comes back to the point you were asking about earlier, about the engine design in Node.
Will Sentance 00:45:32 Again, in Node the engine joins up quite closely with other bits, and so the event loop sits at the interface between the engine and the environment in which it sits, which might be the browser, or it might be Node and the system on the computer directly. JavaScript's version in the browser has a handful of queues; Node has something like six or seven different queues for asynchronous function execution, and different types of tasks will be queued up in different queues. We know about the microtask queue versus the task queue, the callback queue: promises, or promise-associated functions, being placed in the microtask queue, and regular callbacks in the callback queue. But in Node, I mean, it's five or six or seven; there are many, many queues, and each of them has a different priority and each of them has a different behavior. And as I laid out these different queues, he showed me their code base and said, we've been using setTimeout as a deferral technique.
Will Sentance 00:46:31 We had a sense that it seemed to be executing in the right order, with no understanding of why. Understanding all these queues means we can now, from scratch, properly implement the order of execution that we want our asynchronous code, the asynchronously deferred code, to run in. And I was like, ugh. I felt very grateful that this person, who's working on a clearly small, tight-knit team, because it's a publicly owned organization, running this vast BBC version of Netflix, now has the ability to optimally work with their asynchronous code in Node, dealing with something that's used by millions of people. And so just that under-the-hood mental model opened up to him the ability to optimize this really quite significant code base. So that one's always stuck with me as a moment where having that mental model gives someone a big edge.
Adi Narayan 00:47:23 That's a really cool example, especially because I live in the UK now and I use iPlayer all the time. Especially now that I have a little child; when I just need to switch off and watch some bizarre documentary about, like, the Lake District, that's where I turn. Netflix is no match for that.
Will Sentance 00:47:37 You know that under the hood there is code that is a result of people understanding the event loop in node in full. So yeah, that’s cool.
Adi Narayan 00:47:44 Yeah, a hundred percent, and that's beautiful, right? When you build it just the right way, using the perfect tool for the job, and not just any tool that you can pick up. That's beautiful. As an engineer I love that, so yeah, totally, I get your emphasis. Staying on topic: for folks listening in, I think anyone who works with JavaScript at a large organization is going to have to deal with the refactoring question, right? If you have a large code base, it's got a ton of userland features, and there's definitely a lot of JavaScript that could be written better. What's the ROI on refactoring, and how do you think about that? Should you focus on the performance wins or the maintenance wins? How do you suggest people think about that problem?
Will Sentance 00:48:20 Yeah, I think it's the maintenance win first, really. The performance win is there: swapping Moment for Temporal is a performance win, no longer downloading a 60-kilobyte library, even if it's zipped or whatever, less. But still, not nothing. Really, though, it's the removal of dependency risk. You don't want to have, across your entire code base, a set of libraries that are maintained, of course, but still unpredictable, versus the core features. So, I think really it comes down to dependency time bombs and syntax optimization. What it can't fix, though, even if you do make those changes, is what often ends up being the bigger challenge, the architectural disaster, which is things like, as I said, over-fetching massive JSON files. So, I don't think it's always the number-one priority of engineers to make these changes, but sometimes someone can remove a library entirely because they've identified that enough of it is covered by the core features. And again, that's where this Temporal one comes in so strongly, because quality date and time features have not been available in JavaScript.
Will Sentance 00:49:25 If you were working with anything serious, you needed a library to work with it; that is going to change. I'm not sure of the exact release date. When I last gave Hard Parts a few months ago, it wasn't yet released; I think it was scheduled for 2026. But that's the kind of non-negotiable: if you can now implement timing patterns without needing an external library, that's worth it. But primarily it's a maintenance win. I think that's how you have to think about these libraries, and in many cases it's going to be company-specific. If they are very happy with Lodash, and many companies are, then actually it can be more of a maintenance cost to suddenly switch away from it, because while JavaScript now has many of those array methods, it doesn't have all of them, and Lodash still has some library-specific array methods that JavaScript doesn't have.
Will Sentance 00:50:06 Well, if you are all used to using Lodash, then how much is it really a maintenance win to switch to a function signature that is different from Lodash's for the reducing or mapping or flattening that was available to you in Lodash? Many of those functions would've been available in JavaScript long before the company started using a helper library, but they don't use them, because they follow the Lodash pattern throughout. And so, it's to me often going to be a case of, and I think you've hinted at this, JavaScript is probably never going to provide the full utility library, because it is a general-use language. And so, if you are keeping your Lodash in, then following that pattern for all your array manipulations, or all your data manipulations, is a better maintenance win than switching out and suddenly having your developers go, hold on, which of these is a native function and which is a Lodash function? So, I think that's often going to be the case. I can't come back enough to the point that date and time management is one that JavaScript, and developers in general, are really excited to have natively, and I think you're going to see a lot of switching out of the Moment libraries and equivalents very soon.
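For reference, a few Lodash staples now have native counterparts with different call signatures (a method on the array rather than `_.fn(array)`), which is exactly the switching cost described above. The mapping to specific Lodash functions is my gloss, not a claim from the episode:

```javascript
// Native stand-ins for common Lodash helpers.
const nested = [1, [2, [3, [4]]]];

console.log(nested.flat());         // [1, 2, [3, [4]]]  one level, like _.flatten
console.log(nested.flat(Infinity)); // [1, 2, 3, 4]      like _.flattenDeep

console.log([10, 20, 30].at(-1));   // 30, like _.last

console.log(Object.fromEntries([["a", 1], ["b", 2]])); // { a: 1, b: 2 }, like _.fromPairs
```

Note that `flat` (rather than `flatten`) is itself the naming story from earlier in the episode: the obvious name would have broken existing sites.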
Adi Narayan 00:51:15 And even if people don't switch out, at least when they're building new apps or new websites, they wouldn't start with Lodash or Moment. They'd start with Temporal, and even if it's not fully ready for prime time, you'd sort of start playing with it now, so that for the new stuff you're not using the userland libraries, for sure. Right.
Adi Narayan 00:51:30 And a follow-up question there: once Temporal comes out, does Moment stop being supported? It's not like Moment support is going to stop anytime soon, right?
Will Sentance 00:51:38 No, exactly. So, I think it's fine; compressed, it's a very small dependency. But time is also a narrowly used feature, whereas a lot of these utility libraries are embedded in every other line of code. Yeah, I think it's much, much harder to switch out of those libraries, because they're so generally applicable. But that doesn't really apply to something like time, which is mission-critical where it's used but has much more specific use cases.
Adi Narayan 00:52:07 Yeah. This has been very interesting. To sort of close it off, I was going to ask: what do you expect in 2026, and what are you excited about? I take it Temporal is definitely one of those. Anything else in the JavaScript universe that you're excited about this year and next year?
Will Sentance 00:52:19 I'm excited to see what I would not have expected. Maybe I'm naive, but I would not have expected that JavaScript would build a native object-cloning feature, as opposed to continuing to rely on the JSON approach. So, I'm curious to see what other native features will be introduced that we're not yet aware of; as you say, Temporal will be a significant one. I think it is hard to unbundle from what happens in other languages, so as curious as I am about where JavaScript goes, I am curious about where React and TypeScript go, and I think React shifting to at least a more balanced take on server- versus client-side rendering is going to be really interesting. One thing I think we have to acknowledge is that so many environments in which JavaScript runs are emerging; one that I think is particularly exciting is the Bun environment.
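The native cloning feature he is presumably referring to is `structuredClone` (modern browsers, Node 17+), which handles cases the JSON round-trip silently mangles:

```javascript
// structuredClone versus JSON.parse(JSON.stringify(...)): the JSON approach
// turns Dates into strings, drops Sets and Maps, and throws on cycles;
// structuredClone deep-copies all of these correctly.
const original = {
  when: new Date("2024-01-01T00:00:00Z"),
  tags: new Set(["js", "temporal"]),
};
original.self = original; // cyclic reference: JSON.stringify would throw here

const copy = structuredClone(original);

console.log(copy !== original);         // true, a genuine deep copy
console.log(copy.when instanceof Date); // true, JSON would give a string
console.log(copy.tags.has("js"));       // true, JSON would give {}
console.log(copy.self === copy);        // true, the cycle is preserved
```

(Functions and DOM nodes still can't be cloned, so it replaces the JSON idiom rather than every deep-copy utility.)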
Adi Narayan 00:53:16 What is that?
Will Sentance 00:53:17 It is itself a full JavaScript runtime built from scratch, but it has around it a built-in package manager, bundler, and test runner, so it can replace Node. And it's very thoughtfully designed, to the point that, in agentic workflows where you might default to using Python, with Bun you might actually get as readable an interface as you would with Python, and Python is almost always the most readable interface, and certainly more readable than Node. I think that's really exciting. So, I've got a workshop coming up, building an agent from scratch, sort of building an OpenClaw-style implementation from scratch, for people to get an under-the-hood understanding of it. And I'm using Bun as the runtime and environment where I'd normally think about using Python, and probably not Node, because Node has just too much of an emphasis on, essentially, server design.
Will Sentance 00:54:11 Whereas here we're talking about a more flexible environment for JavaScript to run in, one that feels a lot more like using Python on your machine. And that's really exciting. So, I think that's seeing JavaScript, a JavaScript engine built from scratch, super performant, but also a JavaScript engine sitting within its package management, its environment. And I think we're going to see more of JavaScript as a full environment that, as you use Claude or as you build an agent, ends up being a backbone for it. It might even be that while we'd think that for work happening on your machine rather than in the browser you'd default to Python, somehow JavaScript might continue to survive as the core language even of agentic workflows that sit much less in the browser. I think that'd be really intriguing if that were to happen.
Adi Narayan 00:55:00 And I mean, I think of Bun as almost a Swiss Army knife, right? It replaces a bunch of different tools, and it's much faster than Node.
Will Sentance 00:55:06 Who would’ve expected that JavaScript might survive the transition to agentic driven engineering?
Adi Narayan 00:55:12 Let’s wait till agentic really comes.
Adi Narayan 00:55:15 You can't take anything for granted now. As we joke around here, the next three years, you just never know what's going to happen, what's going to stay, what's going to change.
Will Sentance 00:55:23 We’ll have to wait and see the next three months.
Adi Narayan 00:55:25 Three months, exactly. Yeah, right. This has been a really interesting conversation; thank you so much for taking the time. Before we finish up, is there any question that I didn't ask that I should have asked?
Will Sentance 00:55:33 The only thing would be: what does the shift in the abstraction layer mean for engineering as a whole? And I think it's worth saying, for all the software engineers listening who might be feeling an ambivalence as you move up an abstraction layer: what about all the craft that was developed for understanding the runtime? I think it remains very important. Any real production environment is so full of edge cases that it requires the ability of a human to convert the complex need into a specific implementation. But then secondly, the next abstraction level up, let's call it agentic, is one that also requires complex reasoning. Orchestration requires complex reasoning about moving parts, but it's also reasoning that can then be applied to new domains. And that's why I'm working in what's called embodied AI, or physical intelligence, where increasingly the key workflows are ones that are very familiar to software engineers rather than roboticists.
Will Sentance 00:56:30 And it's model pipelines, it's imitation learning, it's what are called VLAs, which allow you to treat the output of a robot as a software problem. And I think that's very exciting: many of these same mental models and principles of a software engineer are going to apply to domains that we didn't think were software engineering domains. I always come back to the fact that the Nobel Prize in Chemistry was won by Sir Demis Hassabis, a software engineer whose first job was as a game developer. And that's for something that is now pivotal for drug discovery, specifically protein folding, but with so many other consequences. And so, I think it's an exciting time to be a software engineer. And I think many of us can feel some ambivalence about that as things change so fast, but the mental models and the ability to go under the hood and reason about complex systems will apply to whole new domains that we didn't previously think we had access to. And so that's what I'm excited about on a personal level.
Adi Narayan 00:57:29 Thank you for that; it's a really inspiring way to end the podcast and the discussion. And you're right, there is a lot of ambivalence, there is a lot of fear about how things are changing, but taking the time to understand how things really work under the hood is something that will never go out of fashion, and investing in that is the right way forward. So, thank you for a really interesting conversation. We hope to speak in the future about the other interesting things that you're working on.
Will Sentance 00:57:52 Thank you so much Adi, I really appreciate being part of this.
Adi Narayan 00:57:55 Thank you so much.
[End of Audio]

