From e47ae6ea82d47a940e0d0f0ef66a31cf644bc477 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?= Date: Sun, 12 Oct 2025 22:53:46 +0200 Subject: [PATCH] September notes --- meetings/2025-09/september-22.md | 1116 ++++++++++++++++++++++++++++++ meetings/2025-09/september-23.md | 996 ++++++++++++++++++++++++++ meetings/2025-09/september-24.md | 713 +++++++++++++++++++ 3 files changed, 2825 insertions(+) create mode 100644 meetings/2025-09/september-22.md create mode 100644 meetings/2025-09/september-23.md create mode 100644 meetings/2025-09/september-24.md diff --git a/meetings/2025-09/september-22.md b/meetings/2025-09/september-22.md new file mode 100644 index 0000000..e166f20 --- /dev/null +++ b/meetings/2025-09/september-22.md @@ -0,0 +1,1116 @@ +# 110th TC39 Meeting + +Day One—22 September 2025 + +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|--------------------| +| Chris de Almeida | CDA | IBM | +| Samina Husain | SHN | Ecma | +| Keith Miller | KM | Apple | +| Ben Allen | BAN | Igalia | +| Nicolò Ribaudo | NRO | Igalia | +| Daniel Minor | DLM | Mozilla | +| Dmitry Makhnev | DJM | JetBrains | +| Eemeli Aro | EAO | Mozilla | +| Ron Buckton | RBN | F5 | +| Jesse Alama | JMN | Igalia | +| Andreu Botella | ABO | Igalia | +| Waldemar Horwat | WH | Invited Expert | +| Zbyszek Tenerowicz | ZTZ | Consensys | +| Michael Saboff | MLS | Invited Expert | +| Richard Gibson | RGN | Agoric | +| Bradford C. Smith | BSH | Google | +| Philip Chimento | PFC | Igalia | +| Chip Morningstar | CM | Consensys | +| Mikhail Barash | MBH | Univ. 
of Bergen |
+| Duncan MacGregor | DMM | ServiceNow |
+| Mathieu Hofman | MAH | Agoric |
+| James Snell | JSL | Cloudflare |
+| Istvan Sebestyen | IS | Ecma |
+| Erik Marks | REK | Consensys |
+| Aki Braun | AKI | Ecma International |
+| Daniel Rosenwasser | DRR | Microsoft |
+| Jordan Harband | JHD | HeroDevs |
+| Justin Ridgewell | JRL | Google |
+| Kevin Gibbons | KG | F5 |
+| Michael Ficarra | MF | F5 |
+| Mark S. Miller | MM | Agoric |
+| Olivier Flückiger | OFR | Google |
+| Ryan Cavanaugh | RCH | Microsoft |
+| Rob Palmer | RPR | Bloomberg |
+| Shane Carr | SFC | Google |
+| Stephen Hicks | SHS | Google |
+| Ujjwal Sharma | USA | Igalia |
+
+## Opening & Welcome
+
+Presenter: Chris de Almeida (CDA)
+
+CDA: Welcome to the 110th meeting of TC39. This is the September plenary, remote only. And meet your facilitation group: that is going to be RPR, USA and CDA as chairs, with JRL and two of our favorite Daniels as facilitators. Make sure you have signed in; presumably, if you are in the meeting, you have completed this form, and that’s how you got the link.
+
+Also, it would be great if, in the notes doc, you could add your name to the list of attendees at the top. Reminder that TC39 follows its code of conduct, available on the website, but the TL;DR is to be excellent to each other. Please and thank you. The schedule is as follows: we are on central time for this meeting. Meetings begin at 10, with a one-hour break at noon for lunch; you then resume for two additional hours.
+
+Communication tools: most people are aware, but new folks may not be. We use TCQ. The link will be in the Reflector issue for this meeting on GitHub. This is how the agenda view looks, and the navigation is at the top. You will have an agenda link and then the queue link next to it, which will reveal something like this. Note the buttons there: if you want to discuss a new topic, if you have a reply to the current topic, if you have a clarifying question.
Or, in the case of something that requires immediate attention, point of order. Please use these correctly and do not jump the queue; it would be greatly appreciated. If you are the current person speaking, there’s a button that says, “I am done speaking”. Counterintuitively, please do not click on this button. It can cause a race condition between the chairs who are able to advance the queue: if you click it when we also click the advance-queue button on our end, that will result in somebody’s topic completely disappearing.
+
+We use Matrix for chat while the meeting is ongoing. Most of this will be happening in the delegates channel, and offtopic banter goes in the Temporal Dead Zone channel. And as always, there’s the TC39 space in Matrix which has all the other channels, but those get little to no activity during the meeting.
+
+Reminder about the IP policy: please familiarize yourself with CONTRIBUTING.md in the 262 repo, if you are not already familiar with it. Notes: a detailed transcript of the meeting is being prepared and will eventually be posted on GitHub. You may edit it at any time during the meeting, in Google Docs, for accuracy, including deleting comments which you don’t wish to appear. You may request corrections or deletions after the fact, either by editing the Google doc in the first two weeks after the TC39 meeting, or subsequently by making a PR in the notes repository or contacting the TC39 chairs. Our next meeting is coming up in November, in Tokyo, hosted by Bloomberg.
+
+I see we have a nice picture there of the scramble, as well as one presumably taken by delegate Michael Ficarra at the cafe. It’s looking like it’s the biggest Asia meeting so far, with the highest attendance. Please do join us if you are on the fence about going.
+
+We would love to have you in person. Otherwise we will see you virtually.
Just a note: the Sunday before the meeting there is an event, and there is an opportunity to be on a panel there, which also gives you free attendance, which is otherwise not free.
+
+And yeah. That brings us to the end of the slide deck, where we go through our normal housekeeping: the approval of the minutes of the last meeting. Those have been cleaned up (thank you, Aki), then reviewed and merged into GitHub. Presumably, we are approving the minutes from the previous meeting. Do we have a second, or any objections? And that brings us to adoption of the current agenda, which has a presumption of adoption as long as nobody is speaking in opposition.
+
+All right. Seeing nothing, hearing nothing, the agenda is adopted. I will stop sharing now.
+
+And now, for everybody’s favorite—before I ask, is our transcriptionist with us? I don’t have the notes up.
+
+I see many words in the notes. Great.
+
+All right. My favorite part of the meeting: calling for note-takers. We are looking for two TC39 heroes who are willing to help with the notes for this session. We will lavish you with praise, both privately and publicly, unless you do not want praise lavished upon you publicly.
+
+Once we have two volunteers, we can go on and begin the meeting.
+
+JMN: I can help with the first block, but not for the final presentation about Amount.
+
+CDA: Okay. Thank you, Jesse. If we fail to notice, please remind us when you are coming off duty so we can find someone to help out in your stead. Can we get one more person to help Jesse on the notes?
+
+## Secretary's Report
+
+Presenter: Samina Husain (SHN)
+
+* [slides](https://github.com/tc39/agendas/blob/main/2025/tc39-2025-038.pdf)
+
+SHN: Thank you, CDA, for the excellent start. And great to hear about the next plenary in Japan; it will be a great turnout.
I would like to give a bit of an update, of course on topics relevant to TC39, but also on what is happening in the GA and in other TCs that are of interest to the TC39 committee. I will talk about collaborations and give an update on invited experts.
+
+SHN: Just to bring to everybody’s attention: TC54 (SBOM) has been working diligently, and they will be proposing three standards for approval at the upcoming GA. I wanted to bring that to everybody’s attention; they will be available shortly for the review process. There will be the second edition of CycloneDX and the first editions of the Common Lifecycle Enumeration Specification (CLE) and Package-URL Specification (PURL). There are a number of organizations and participants that overlap between TC39 and TC54, so I thought it good to bring this to your attention in case there is an interest to participate in TC54. Thank you to those who have participated. It was a lot of work, and they have all done very well.
+
+SHN: A new TC. This was a proposal that came from Microsoft for standardization of the high-level shading language, HLSL. It was taken away for a bit, as they needed more time to prepare, and it is now moving forward. It will be proposed to the executive committee at ECMA at the October meeting as a new TC; I think it will be TC58. The scope and the program of work are set out, with the individuals involved currently from Microsoft, and supporting members beyond Microsoft, namely Google and Sony, are showing interest. We want the representatives of the other organizations to commit to being involved; this is very good for ECMA. It’s a new TC, and I think it will bring new work. Also to bring to your attention: if you or your organization want to participate, please reach out to me to show interest in participating in the new TC.
+
+SHN: JSON Schema. This is an ongoing discussion. The interest arose from a discussion in the JSON community about bringing the JSON Schema work into ECMA.
Similarly, with KLD. But the focus has been on the JSON Schema community. There’s a lot of work in conversation with that community to discuss their interest in coming to ECMA. There has been outreach to all contributors (60+ have been involved in JSON Schema) to agree to the ECMA IPR policy, which is a requirement for us to move forward. We have received much positive interest. We have also had a very good conversation with the IETF folks, to ensure that they are aligned and do not see any reason for disagreement with bringing it to ECMA. We would not force the community to move into ECMA unless the entire community is in agreement. This work is ongoing. If your organization is involved in JSON Schema, it would be great if you reach out to me with any comments or questions, or on the Slack channel. And thank you, JMN and AKI, for support on this topic.
+
+SHN: Collaborations. I brought up the interest of W3C and the interest for collaboration and engagement with ECMA. I understand that the internationalization working group, TG2, will engage more formally with W3C, and if SFC is on the call, you may comment, but I understand this is moving forward. That’s excellent. We thought that would be the lowest-hanging fruit and an opportunity to have strong collaboration between the work ongoing in TC39 and the work going on within the working group at W3C.
+
+SHN: We had also discussed some time back having a liaison with IETF, that is, between the TC39 and IETF committees. There had been somebody in that role who is no longer active, and it was left on the back burner; thank you very much to RGN, who volunteered to do this. RGN is active in both, and has been introduced to IETF. So his name should be updated on the website shortly, and RGN will be the point of contact to share information between the TC and IETF, to ensure that we are well informed of each other’s work, that there’s open communication, and that when there is something that needs more attention, it is done.
+
+SHN: And RGN, if you are on the call, you are free to make comments when I stop my presentation.
+
+RGN: I am on the call. I don’t have any substantial updates as yet. I will be sorting out the relationship between now and the next meeting.
+
+SHN: Thank you. That’s great, RGN. I appreciate that, and the idea would be that on the plenary calls we have, in the event there is something to share, we can do that so everybody will be aware of the work. And FOSDEM 2026 is planned for the last days of January or the first days of February. There is an intention for ECMA to support and have an event; there is a conversation with TC54 about doing something in partnership, and perhaps sponsoring a day session or hosting something. I don’t know yet how that would look. I do understand that TC39 members and TC39 topics are very active. Perhaps there’s an opportunity for multiple TCs, not only TC54 and TC39, maybe TC55 or others, to also participate. I would love to work this out, and if there’s an opportunity to have a day track that is represented with ECMA, I would love to do that. So this is in the works. If you are involved with that or going to attend, reach out to me or AKI and we will make sure that we are involving you as we progress on this potential activity.
+
+SHN: Invited experts: I put up a list of invited experts. What I would like to have feedback on is the first list, which is in a darker shade of grey. Those are invited experts who are noted in the ECMA database and have signed the ECMA invited expert form. I do not know if they are on GitHub; if they are not, we should ensure they are active and that we are aligned there. For the names all marked in ECMA orange, we are aligned: they are noted in the ECMA database and also on GitHub. If you do not see your name there, and you believe you should be there, please reach out to me or AKI, and of course the chairs.
The last column, which is in black text, contains names of individuals who are on GitHub as ECMA invited experts, but I do not have any record of them signing an ECMA invited expert form. I would very much like for us to make sure that we are aligned there. If the individuals are still active, and you want to continue to be active, that’s fine, but we would very much need them to fill out the invited expert form.
+
+SHN: Thank you for helping me with this list. Aki put it together based on GitHub, also through the efforts we have with the ECMA secretariat. If any names are missing from this list and you are an invited expert, no matter what shade I put it in, please reach out to myself and AKI and we will make sure we have done it accurately.
+
+SHN: The annex has the standard documents that we have: the code of conduct, which identifies how we should work, and the invited expert rules and procedures. There is also the list of relevant documents that have been published that could be of interest, from both the GA and TC39 document perspectives; you may ask the chairs to provide them if you don’t have access to the repositories. And there are some of the dates for the next meetings. I am going to quickly go through the slides which are on the dates; the others you can read in your own time.
+
+SHN: Regarding the dates, I was following the conversation on Matrix, and it was noted at one point that maybe 5 meetings could be appropriate, with 2 remote. It’s up to you, as the committee, to agree. I noted it here. If you would like to do that and it fits your requirements and the needs of the committee, that’s fine. Just let us know how you want to proceed there.
+
+SHN: I have not yet seen the dates for 2026. So I am looking forward to seeing the dates and locations of the next meetings, so we can add them to the calendar and block our schedules.
+
+SHN: And these are the dates that are scheduled for the GA and ExeCom for the coming period; we do these up until the next year.
The dates relevant for your standards that you would bring to the GA for approval, typically in June; you will see the date in April 2026 for ECMA262 and ECMA402 approvals.
+
+SHN: The next ExeCom is coming up in October. I look forward to the TC chairs providing the TC chair report, and if you have any other key items you want to bring up, please ensure that your chair report has them; they will definitely be addressed at the meeting.
+
+SHN: And you will have noted on all the slides I have had the ECMA TC39 logo, which you appropriately use on GitHub, and you use the colors of ECMA. We have proposed to other TCs that are joining or starting and want to do similar types of logos to use the format that has been standardized by TC39, so we are using that as a reference. Thank you, AKI, for putting it together. It’s important, and I think relevant, that all TCs using GitHub use the repository and meeting space to have some branding for ECMA. Thank you. That’s the end of my slides; I will stop sharing, and I will be hopeful for any questions.
+
+CDA: Thank you, SHN. I already sent Aki a note. The chairs will help out with the invited expert clarifications and whatever we need to do there.
+
+SHN: Thank you.
+
+CDA: NRO has mentioned that the slides link is not working.
+
+NRO: I don’t have the link for the list of names, but the one in the agenda doesn’t link to the slides.
+
+AKI: It is uploaded to GitHub.
+
+NRO: Thank you, Aki.
+
+CDA: Any other questions or comments? Nothing on the queue. Samina, you started to say something before—
+
+SHN: I wanted to ask Aki if she had further comments on the slides or inputs that were discussed today.
+
+AKI: I don’t have any further comments on the slides.
Generally speaking, when it comes to the invited expert piece, I have lists of people and when they signed things, so if you are on the list and we don’t have any evidence of you signing the RFTG form, get in touch with us, or we will probably come chase you down.
+
+CDA: Thanks, AKI. I have a name, but I will not mention them here.
+
+AKI: Thank you, Chris. Any other points?
+
+CDA: I don’t think so.
+
+SFC: Hi. I am not on the queue, but I heard my name mentioned earlier in your presentation. Thanks for that, Samina. A little update on the W3C relationship with TG2. At the last TC39 plenary, we reviewed a new policy that we had come up with for including W3C i18n reviews in TC39 proposals to do with ECMA402. I believe we talked about that at that plenary; we certainly talked about it at the TG2 meeting. So we’ve started that process. We don’t have any TG2-specific proposals up for advancement at this meeting, but we are definitely starting that process. APP has been great to work with and helped us draft that text, as did AKI and others. So I appreciate that. And this relationship is still, you know, developing. But I am definitely quite excited to see these pieces come together, and I think we should do this more regularly, even outside of just ECMA402. I know we already have examples of sort of ad hoc reviews that we have requested of TC39 proposals, and I think we should in general make it a regular thing. We are both standards bodies that impact the web platform, and we should do more, not less, to coordinate ourselves.
+
+SHN: Thank you for adding that. Any updates you would like to share on the relationship, we can add with secretariat support. Thank you, it’s great work. And yes, we should continue.
+
+CDA: Great. Thank you, Shane. Thank you, Samina.
+ +### Speaker's Summary of Key Points + +The secretary’s report updated TC39 on TC items and GA matters: TC54 (SBOM) was noted to bring three standards to the next GA (CycloneDX 2nd ed., CLE 1st ed., PURL 1st ed.); a new HLSL standardization effort is moving forward as a proposed TC5x (originated by Microsoft, with interest from other members); and the JSON Schema community is being consulted about potentially moving work to Ecma, with alignment discussions underway with IETF and outreach to 60+ contributors on IPR, progress contingent on broad community agreement. + +Collaboration updates included formalizing W3C horizontal reviews, and appointing RGN as the TC39–IETF liaison. Ecma may support a multi-TC presence at FOSDEM 2026. The committee is reviewing invited-expert records between the Ecma database and GitHub. + +Meeting schedule dates were covered and pending are 2026 TC39 dates. + +Action: chairs to submit October ExeCom reports. + +## ECMA262 Status Updates + +Presenter: Kevin Gibbons (KG) + +* [slides](https://docs.google.com/presentation/d/17dyg4ssXsYUtoEl4PkmeM5dDQ8v0rvV6sUV_BmXgXX4/edit?usp=sharing) + +KG: Not much in the way of an update. But we will go through it. Normative changes. The `Math.sumPrecise` landed. Base64 has almost landed, it might land during the course of this meeting. It is completely ready and I believe it has been reviewed. I just need a sign off from the other editors. But everyone has looked at it. And then the last one is a bugfix for an issue that was introduced a while back, when we were refactoring how `Function.prototype.toString` worked. We accidentally broke an invariant for `[[SourceText]]` for classes that didn’t match any implementations. This is an old bugfix that we finally landed. I do want to call out there’s a number of other consensus normative changes that the editors are in the process of reviewing and land as soon as we can. We apologize for delays. + +KG: A couple of editorial changes. 
The first one is a big one. A bunch of Annex B stuff, in keeping with the committee’s direction, is inlined into the specification. This doesn’t have any normative implications. Previously, if you were reading an algorithm that has behavior specified under Annex B, there was a note which would say: this has different behavior, go to Annex B to find it. I believe there was always a note, but I am not 100% sure. Now, you read an algorithm and it says, "if your host is a web browser or otherwise implements this normative optional behavior", and then it has the specification of the behavior inline. As a consumer of the specification, I think this is much, much nicer. But again, no normative implications. Thank you very much to jmdyck for contributing to this change.
+
+KG: And then, the last thing is that in SetFunctionName, there is an optional prefix of "get" or "set" for getters and setters respectively. The optionality was not clear to readers. We added a note to address it, following the discussion at the previous meeting.
+
+KG: Mostly the usual list of upcoming work, but the change to Annex B which has been listed for the last couple of years is now removed, because we have landed that PR. Otherwise, we are still working on roughly the same things, but we have no new work planned here. That’s all I got. Thanks very much.
+
+## ECMA402 Status Updates
+
+Presenter: Ben Allen (BAN)
+
+* [slides](https://notes.igalia.com/p/sept-tc39-tg1-editor-update#/)
+
+BAN: All right. So let’s see. Probably the most meaningful thing in this is the thing that Shane mentioned earlier, but I will go through the slides in order. Okay. So we have got a couple of normative changes. We are largely correcting for oversights; I am on the schedule to discuss this right after this, so I will just flip through this slide.
+
+BAN: Editorial changes: probably the most important is the meta one that Shane talked about, that we will be requesting reviews from W3C i18n.
It’s important we got that in there. And there are minor editorial changes for implementors of `Intl.NumberFormat`; those have been there for a while.
+
+BAN: And I believe that APP pointed this out previously: we had given a rough estimate of the number of natural human languages and dialects of around 6,000. Really, it only makes sense to give a lower bound for that sort of thing.
+
+BAN: So we have updated it to say, okay, well, there are 7,000 language subtags, and CLDR only imports a small subset of languages and variations. The world of human language is kind of too large for us to represent, no matter how hard we try. And that’s it for the 402 update.
+
+USA: Thank you, Ben.
+
+## ECMA404 Status Updates
+
+Presenter: Chip Morningstar (CM)
+
+CM: Everything is fine. Nothing to see here. Move on.
+
+CDA: Excellent. No surprises. We like that.
+
+CM: Yes, that’s what we like.
+
+## Test262 Status Updates
+
+Presenter: Richard Gibson (RGN)
+
+RGN: I am here. And I wish I could be as efficient as Chip in this update, but I don’t quite manage that. Recently, we landed the changes for non-extensible-applies-to-private. In review are our tests for immutable ArrayBuffer and joint iteration; as always, help from everyone on this call is very much appreciated.
+
+CDA: Great. Thank you, Richard.
+
+## TG3: Security
+
+Presenter: Chris de Almeida (CDA)
+
+CDA: Yeah. TG3 continues to meet weekly to discuss security impacts of proposals, as well as the odd non-security-related topic if we don’t have other agenda items. Please join us; those are Wednesdays at 12 p.m. Central US time.
+
+## TG4: Source Maps
+
+Presenter: Nicolò Ribaudo (NRO)
+
+* [slides](https://docs.google.com/presentation/d/1WdEy4ZcMHpQ7eAzfGMun3VtUziFadYvrJYxWRh5Uafw/edit?usp=sharing)
+
+NRO: Okay. So, very quickly: there are a couple of editorial discussions in the project. One: in some of our algorithms, we have a lot of noise due to checking error conditions.
From our perspective (what happens in our case, and this is unusual within ECMA), the programs are generated. It’s much more common to have a correct source map file than a correct JavaScript file. So we are thinking of some better ways to reduce the noise, along the lines of: if this expectation fails, just return early with some default. We still have to figure out the exact form for that. Another change in progress is that in various places, when it comes to the grammar, there are some productions that are optional, but whether a production is optional or not depends on values of previous items in the grammar. For example, in this case here, name and kind are present or not depending on the flags. We are exploring whether it’s possible to somehow express this more clearly in the grammar itself.
+
+NRO: Marking things optional is difficult because in our spec, everything needs to be decoded under the hood. We have some progress on the scopes proposal: we now have some prospective spec text available. There have been some minor changes since last time that better define how implementations can experiment with, for example, adding more language-specific things to this information, and a slight change to the grammar for files that don’t have a scopes list.
+
+NRO: And that’s it. Again, if you are interested in joining us, we have one TG4 meeting every month; it’s one hour long. And every month we also have a meeting specifically for the scopes proposal. So check the calendar, and please feel free to join.
+
+## TG5: Experiments in Programming Language Standardization
+
+Presenter: Mikhail Barash (MBH)
+
+* [slides](https://docs.google.com/presentation/d/16qX8ml3o-OEepZfmqVsJ7IlygMKULwP5vWSUxZbKAMI/edit?usp=sharing)
+
+MBH: We had a successful recent TG5 call: a talk about matching algorithms for (..?) regular expressions, a linear algorithm. We had more than 20 participants. This was the most well-attended TG5 call.
And we also had several renowned researchers join us for that call.
+
+MBH: As usual, we have meetings once every month, on the last Wednesday of each month. But, for example, for this month it coincides with the plenary, so it’s cancelled for September. Then, alongside the in-person hybrid plenary in Tokyo, on Monday the 17th of November, we will have a TG5 workshop. We are now working on the program for that. I will soon post on the Reflector a call for presentations, for those who want to give a presentation there.
+
+That’s it. Thank you.
+
+CDA: Thank you, MBH. Yeah. I was at that meeting that had the high attendance. That was a great one. These tend to be great meetings. If you have not been to a TG5 meeting, I encourage you to go to one. They are fascinating.
+
+## Updates from the CoC Committee
+
+Presenter: Chris de Almeida (CDA)
+
+CDA: Next up is the CoC committee. We don’t have any reports, nothing new to deal with. I am reminded of the PR that KG had, sort of related to the CoC, about large language models in authoring comments. Did we merge that PR, KG? I think we did. That’s in the how-we-work repo.
+
+KG: It is not merged.
+
+CDA: I did hit it with an approval. Okay. I guess, last call for comments on that. I don’t think there are any blocking concerns. There’s still some discussion, some back and forth, but that is pull request #164 in the how-we-work repo. If you have any thoughts or comments, please share them; otherwise that will probably merge quite soon.
+
+KG: And I guess since we’re talking about it—as presented, there was not an explicit carve-out for proofreading or similar. I have now added an explicit carve-out: "You may use LLMs for proofreading as long as this doesn’t add any new content." We could be wordier, but I don’t think it’s necessary. I think that was the only serious concern raised at the last meeting.
+
+CDA: Yeah. That’s what I recall as well. Thank you, KG.
+
+ +## Convention: strings-as-enums are kebab-case + +Presenter: Kevin Gibbons (KG) + +* [PR](https://github.com/tc39/how-we-work/pull/165) + +KG: So this is a PR to the normative conventions document, which as a reminder we have now. It exists to document normative conventions that we as a committee have agreed upon. Some are things that were changes to what we have done historically, some document things that we implicitly agree on, where it’s good to have them written down for the future. + +KG: This is one which I have expected to be uncontroversial, but this is my bad because I was only looking at 262 and not 402. + +KG: But before I get into that, the precise thing I am proposing is this text on screen here. If you are using a string as an enum value, then the casing for that string should be kebab, i.e. lower case with dashes in place for spaces. + +KG: The one place currently in the specification in the 262 specification that uses this kind of string where this comes up is at `Atomics.wait`, which uses a kebab case. I also followed this convention with the base64 proposal which is in the process of landing. + +KG: So these two things use kebab case. The iterator chunking proposal has also this kind of string used as an argument, and it’s been updated to use kebab case following this convention. + +KG: I should also mention that there is a [web platform design principle](https://w3ctag.github.io/design-principles/#casing-rules), which says the same thing: if you are using an enumeration value, it should be lower case and dash-delimited, which is to say, kebab case. + +KG: So why do we have a larger timebox for this? It turns out we discussed this in 2019. In the context of 402. And at the time, the decision in 402 was to use camelCase or basically the identifier case. + +KG: So 402 has a documented convention of using camelCase for these string values. There’s—as far as I am aware, a couple of places in 402, where this does come up. 
At least, only a couple that I found.
+
+KG: Some of these are also going to make their way into Temporal, because Temporal is reusing in particular the roundingMode, which can be something like halfCeil or trunc or whatever. So 402 already has some non-kebab-case strings as enumeration values. So within the JavaScript language as a whole, we’re already inconsistent: there are a couple in 262 that are kebab case, and a couple that are camelCase in 402. I should mention the reasoning for the convention in 402 is that a number of enumeration values are reusable as identifiers. Which is to say, there are places which use them as identifiers. "timeZoneName", I believe, is an example: it is an enum value but is also a key in an object that is passed to some of the APIs in 402.
+
+KG: So since we are in the process of introducing new string enum values, I think it would behoove us to decide what we are going to do. In some sense we already did, with the convention in 2019. But since we have already violated that convention in proposals that have landed since then—which, my bad—I think it’s worth revisiting. In particular, I want to make the case that since there is no world where we are consistent, given inconsistencies already exist, I prefer to try to be maximally consistent going forward. And the web platform is not going to change, because the web platform has dozens of these values, things that web developers run into. If you are using the cross-origin mode in fetch, you pass "no-cors" or—I forget what the credentials one is, but "same-origin" or something like that.
+
+KG: And so the web platform is pretty firmly committed to kebab case, and it is something that frontend JavaScript developers run into. Fetch has also made it into a number of non-web-platform JavaScript runtimes.
So I think the best option is to say: we are going to match the web platform going forward, except in cases where there’s a strong reason not to in some particular case—for example, if you are just reusing an existing enumeration value from 402 which is already in camelCase, it’s fine for that enum to continue to have an internally consistent casing.

KG: But otherwise, we say there’s, like, a finite list of legacy exceptions that isn’t consistent with the web platform, but any new APIs will be consistent.

KG: Yeah. That’s my case. I would like to have this convention documented or updated. That’s what I got.

SFC: I just wanted to also note that you stated that there are examples of camelCase and kebab case. There’s one or more examples of space-separated to add to the list.

KG: Also, the web platform is not 100% consistent, despite having documented guidelines. There are a couple dozen kebab case, but one snake case with an underscore, and one other weird one I am forgetting now.

NRO: Regardless of which direction we go, if we have some APIs that do not match what we decide, it would be great to have a number of PRs updating those APIs to accept both. Especially given that we don’t have many such places now, developers would probably just forget that the other casing of these enumeration strings exists.

KG: I’m fine with that. Usually I prefer to have only one way or the other, but I don’t have a strong reason or preference here. If people want that, I don’t have any problem with that.

PFC: I guess another option is to always accept both, but yes, if you generally prefer only one way of doing things, then I am sure you will like that option even less.

KG: Yeah. I think if there’s not a particular reason to accept both, I would prefer to only have one, just so it’s one fewer decision the developers have to make.

SFC: As KG already noted, in 402 we adopted the camelCase convention.
We have fairly thorough documentation for why we went in that direction, which I encourage interested parties to review. I am not going to reiterate everything in there, but the main thing I want to pull out is that camelCase allows these to be used as string enumerations but also as identifiers. This was important in the 402 case because there’s at least one case where the string values of an enumeration can also be used as properties of a property bag. That was one of the reasons why we went in this direction. And then, given that we already have that example, we felt it was more important to be self-consistent within ECMA-402 and to use that naming convention. We felt that was a more important type of consistency to follow than consistency with the W3C style guide. I think 262 could make the same decision that we made, or it could make a different one. In my opinion, it would be better to be consistent with what 402 is doing. But I also understand that W3C consistency is probably of greater importance in the 262 case, so that’s also a valid position for this committee to take.

SFC: The other thing I wanted to say was that we could also take a more nuanced position where, on a case-by-case basis, if the enumeration in question is one that we feel is likely to be used as an identifier, then we favour the camelCase version of it. Again, that’s very much a case-by-case decision. And I would give a cautionary note that it might not always be obvious, when looking at an enumeration, that the enumeration values would be used as properties of options bags, as in the 402 case. If you feel fairly certain they are unlikely to be used in that way, following the W3C convention seems like a reasonable convention for the committee to take, although doing the more conservative 402-style camelCase is also a valid position to take.
That’s all.

CDA: SFC, that covered your topic on preferring camelCase. Did you want to talk about that more?

SFC: Oh, um, yeah. That was my—I’m sorry. Was I on the queue for another item? Yeah. Sorry. I was also on the queue for the previous item about accepting both. I’m sorry.

SFC: So, on the comment about accepting both: in `Temporal.Duration` there was a long discussion about whether the fields should be called minute or minutes, hour or hours—singular or plural. We decided that when you read the fields, it’s plural; when constructing, we accept either. And we’ve considered doing similar things in other areas where there’s ambiguity about what the input values should be, where there are two valid frames of reference. I think that could be applied here. I also don’t necessarily think we should default to doing that all the time, but it’s a valid thing to do, and I wanted to point out some examples of precedent for it.

CDA: PFC?

PFC: I just wanted to mention that accepting singular and plural units in `Temporal.Duration` was not just for the fun of it. There’s also a good reason to accept both. I don’t want to derail the discussion, but I can point you to where you can go back and read about it, if you are interested.

EAO: ECMA-402 is mostly internally consistent in using camelCase. Would adoption of this style guidance change what we ought to be doing for upcoming and new Intl formatters, or new options we add to existing Intl APIs? Would the expectation be that these follow camelCase like the rest of 402, or would new things even in 402 start being kebab case?

KG: I don’t exactly have a preference. My inclination is that values which are used for existing APIs should match the convention in those APIs, but if you have a new enumeration value only used in some new API, then it would be better to be consistent with the rest of the platform.
The distinction between which specification something is in is not generally something users are aware of. There’s not usually a clean boundary around what is different because it’s specified in 402. That said, Intl in general is mostly namespaced, so it’s segregated; perhaps there’s a stronger argument for Intl consistency in that way. But I am okay with either outcome.

KG: It looks like the queue is empty. Does anyone want to express an opinion on this topic? Please. Come on. There are 40 of you in this meeting. We’re painting a bikeshed. Someone has got to have an opinion.

MF: I appreciate that you can use the camelCase names as IdentifierNames. That can be more convenient. But aside from actual Identifiers, you can just quote them. So I am not super compelled by that. I think we should jump on the kebab train.

KM: I think we also sort of have a preference for kebabs. Accepting both is reasonable to me, especially with something where we have a legacy—I shouldn’t say legacy. We have APIs that output, you know, camelCase, and you might want to plug that into some new API or something that takes kebab case normally. You want to make sure it doesn’t take conversion back and forth between the two. Accepting both is pretty reasonable to me. I would be happy with case insensitivity even. I don’t know. I don’t care that much—too much work, depending on how much you care about processing these in terms of performance. Accepting both is simple enough and probably makes things easier for a lot of people. On the output side, I think we would prefer kebab case for consistency with the rest of the web platform as a whole.
Obviously I understand that certain environments don’t operate within the boundaries of the web platform, but most people writing JavaScript think of it as one thing and not separate specifications.

EAO: In Intl or ECMA-402, we should keep with camelCase, as that’s the prevailing convention there, and it forms a somewhat cohesive whole. New formatters and new options for existing formatters should continue to use camelCase. Some things using camelCase and others using kebab case would be confusing to users.

ZTZ: As someone who is vaguely aware that the web is standardized in more than one place, I would prefer camelCase. But more importantly, it seems like we no longer have transcriptions, or at least we had a gap in transcriptions. Most of what EAO said is not preserved. I think we’re back. That is it. Thank you.

MM: In general, I do like there being only one way to do it. And in general, when I’ve made two ways to do it, it is always because of trying to make a transition, where we’re trying to move to the new world and deprecate the old world without breaking anything. With 262, the notion of deprecated doesn’t exist and shouldn’t exist. But for these normative conventions—what does "normative" mean in "normative convention"?

KG: Editorial conventions are about how we write the specification, where those decisions don’t affect users. Normative conventions are conventions about how we design the language, where those decisions do affect users.

MM: Okay. Does this become a section of 262 or just a separate document?

KG: It’s a separate document.

MM: Okay. So I think, as a separate document, this might be a fine place to deprecate the thing we’re trying to move away from, if we’re accepting two things. And I do have a question about kebab case. Do you anticipate there to be contexts, like Amount, where the hyphen separation is used for basically lightweight parsing? Basically split on dash?
Because any time the processing of the enum might involve some kind of segmentation, I wouldn’t want to split on the uppercase or camelCase boundary. I don’t know how common they are; we have seen that with Amount. And so I think the tension is that kebab case can be split reliably assuming—sorry, kebab case can be split reliably if we also say, as part of the normative convention, that the thing that comes between the dashes is an identifier. Is that the intention?

KG: It is not necessarily—like, I can imagine reasonable enum values where one of the words in the enum is a number, for example.

MM: Okay. Alphanumeric?

KG: That seems quite likely, yes. I don’t think it’s something that I want to commit to, because generally speaking enum values are not something that is parsed. They’re sort of atomic units; that’s kind of what it means to be an enum value to me. But certainly I would be surprised if we actually had something that isn’t alphanumeric.

MM: Okay. The cases where we split and we have identifiers—string values that look like this that are split on dash—do we have those in either the spec or proposals? I think we talked about that for Amount, for the interpretation of the kind of amount it is. I don’t remember the terminology. Would you consider those to be within what these normative conventions are about, or would you consider those to be a completely separate thing to have conventions about, even though they’re exactly the same?

KG: I would personally consider those to be sort of on the outer edges of this, so my inclination would be to say this doesn’t precisely cover them. But it’s still a good guideline to follow for those cases unless there’s a reason not to. Strictly speaking, though, I would not consider those things to be covered by this. Basically, to be an enum value, it has to be a finite list.

MM: Okay.
So I think I’m mildly in favor of what you propose, with "accept both" in places where we currently have camelCase, and with this document stating that the camelCase values for the enum are deprecated and that we’re moving to kebab case. I think that’s also sort of what I’m hearing is the common sentiment here, so I agree with that.

KG: Okay. I don’t want to propose the normative change to Intl and Temporal as part of this, because that’s a fairly large amount of work in terms of updating tests and implementations and everything. I’m also personally less convinced of the necessity of it. Would you be okay with just having this as a document as it stands, and then people can make a follow-up to update other existing uses?

MM: Yeah. I did not mean to suggest that this proposal itself would need to update all of those other particular things.

KG: Okay.

MM: Just stating the position in this document, so that incremental progress can be made towards the recommendations, seems perfectly fine. Now, EAO stated a preference for not changing Intl.

MM: Okay. What would be the rationale for not changing it?

EAO: Not changing Intl, or not changing anything?

MM: Not changing Intl.

KG: As far as I’m aware, Intl is the only thing that would change. Well, no, that’s not true. Temporal uses one of the enums defined in Intl.

MM: There is a nice dividing line such that Intl falls on one side and Temporal falls on the other. Temporal is part of the main language and is not optional. Intl is a separate spec, and JavaScript considers it optional for an engine to provide it. Nevertheless, we still have the question: as long as we accept both during the very lengthy transition, why not accept both all over Intl as well as Temporal? EAO, let’s take that as a question for you.

EAO: Because there’s a lot of it; there’s a lot of existing usage.
While the benefits of using kebab case in new APIs are clear, we have a corpus of APIs and interfaces that uses the existing style. Willy-nilly changing all of that to be different, I think, needs better reasoning than just that there are some other APIs somewhere in the neighbourhood that kind of match this other style as well.

MM: So by your rationale, what about Temporal?

EAO: I have no opinions about Temporal.

KG: The thing with Temporal is that the—

CDA: Sorry. I will just interject real quick. We have two minutes left for this topic.

KG: We might not finish right now. That’s okay. So the thing with Temporal is that, as far as I’m aware, there’s one place this comes up in Temporal, and that is the rounding mode parameter to some of the APIs. Rounding mode is a concept that is in Intl. The values for rounding mode are the same as they are in Intl. So Temporal is using values which are spelled precisely the same as they are in Intl for this thing. And I think that’s good. I think it would be weird if there’s one API which requires you to write "halfCeil" in camelCase and one API that requires you to write "half-ceil" in kebab case. Assuming we’re not going to update Intl to accept new values, my personal opinion is that Temporal should match Intl and only accept camelCase. That would be unfortunate, but the point of these guidelines is not to be hard and fast rules when there’s a specific reason to deviate. The fact this is in Intl is a reason we should deviate.

MM: With full TypeScript vs erasable TypeScript, there are enums that can’t be kebab case, right?

KG: I think so.

MM: That’s a serious cost.

RCH: That's not actually true.

KG: Oh, sorry!

CDA: We are at time. I saw some people removed some things from the queue, but I did capture the queue before that.
Kevin, do you want to continue this? We have some time, especially on day 3—if we don’t free up any more time before then, we have time on day 3 to continue.

KG: Yes, I would like to continue. The thing that I will be proposing when we talk about this later is that we say this will be the convention for 262 going forward, but Intl will continue to operate under its existing convention and will not update to accept multiple values. That decision is up to TG2, but I think that’s the most straightforward option. So if people don’t like that, bring your opinions when we talk about this later.

### Speaker's Summary of Key Points

* Ecma 402 uses camelCase for enums; the web platform and two 262 APIs use kebab-case. KG argues for having new 262 APIs use kebab-case going forward. Some people suggest allowing APIs (possibly only existing camelCase APIs) to accept both.

### Conclusion

* No conclusion at this time

## Normative: Add `[[CompactDisplay]]` slot to `Intl.PluralRules`

Presenter: Ben Allen (BAN)

* [proposal](https://github.com/tc39/ecma402/pull/1019)
* [slides](https://notes.igalia.com/p/tg1-normative-PR-1019-sept-2025.md#/)

BAN: So this is one that is a normative change, but a normative change that has essentially no effect on any current implementation. In some languages, it’s possible for the rules for forming plurals to differ based on the notation when using the compact form. There are two compact forms: the short compact form, which is "1K" for 1,000, and the long compact form, "one thousand" spelled out for 1,000. In some languages, the appropriate rule for pluralization can change depending on whether the short form or the longer, spelled-out form of the notation is used.
The reason I say this makes no change to current implementations is that CLDR doesn’t currently have differing plural forms for short compact versus long compact notation, so there is no observable change for any implementation using CLDR. What the PR does is add a `[[CompactDisplay]]` slot to `Intl.PluralRules` to indicate whether the short or long form is being used. At such point as the data is added to CLDR, implementations can make use of it as a result of this PR. I will just say this one and the next one might be fairly fast, but there are some late-breaking changes related to Amount that I anticipate using a lot of the time freed up here. Any questions on this support for future CLDR data on differing plural rules for short versus long compact notation?

EAO: Would you have an example available of a language that depends on this? I’ve not been able to find any in all of the discussions around this. I don’t feel strongly about this either way, though.

BAN: Let me see here. I believe there’s one in Spanish. I want to defer to one of my colleagues who speaks Spanish.

EAO: I would be happy to have that added as a comment on the appropriate issue or PR, just so it’s somewhere visible why we’re doing this.

BAN: Fantastic. So let’s see, is it appropriate to ask for consensus on this support for future CLDR data on plurals with short and long compact form? Then I will ask for consensus on that. I am asking for consensus.

CDA: Okay. Do we have support for this normative change?

KM: I know that some platforms—I don’t know if I’m supposed to mention names, I always forget whether you’re supposed to—some platforms, maybe the one we ship or maybe not, have custom CLDR rules. Is this something that would impact people who have those? I know very little about this; it’s not really my area of expertise. I’m curious if you know whether this is the kind of thing that would be potentially impactful to those, or—

BAN: I don’t believe so.
Someone else who’s deeper in the CLDR process, correct me if I’m wrong, but that would require an engine to have an implementation that already uses different short versus long compact forms and has differing plural rules for them, correct?

KM: Are you asking me or somebody else? I don’t know the answer to that.

BAN: I’ll have to punt to someone who is more deeply involved in CLDR on that.

KM: I don’t have a problem with this if it doesn’t impact us. If it does, then I have a harder time answering; I would have to get back to you. If that’s fine with everybody else, I don’t have a problem. I can ping people and possibly get back to everyone by the end of the meeting—not today, but by the end of the session.

BAN: Would that be a conditional +1?

KM: Sounds like a conditional +1.

CDA: So we have JSL on the queue saying no objections. We do need explicit support—we don’t do lazy consensus in TC39; we need explicit support and no objections. I note that we have no objection from JSL and we have support from EAO thanks to NRO's example. Do we have any other voices of support for this change? Normally we like to have at least two voices of support, though admittedly this does not seem particularly controversial to me. All right. I will support this as well. Not seeing any objections, I believe you have consensus for this change.

BAN: Fantastic. All right. And next up is the other one, the other normative change for 402.

### Speaker's Summary of Key Points

This PR allows implementations to take into account potential differences between the short compact form ("5K") and long form ("5 thousand") when determining the correct plural form to use.

### Conclusion

* Consensus for the proposed change.
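As background, the two notations this PR lets `Intl.PluralRules` distinguish are already exposed by `Intl.NumberFormat`'s existing `compactDisplay` option. A minimal sketch (output comments assume typical CLDR data for the `en` locale; exact strings depend on the implementation's locale data):

```javascript
// Short vs. long compact notation, as formatted by Intl.NumberFormat.
// The PR above adds a [[CompactDisplay]] slot so Intl.PluralRules can
// likewise distinguish the two forms once CLDR gains differing plural data.
const short = new Intl.NumberFormat("en", {
  notation: "compact",
  compactDisplay: "short",
});
const long = new Intl.NumberFormat("en", {
  notation: "compact",
  compactDisplay: "long",
});

console.log(short.format(5000)); // "5K" with typical CLDR data for "en"
console.log(long.format(5000));  // "5 thousand" with typical CLDR data for "en"
```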

## Normative: Make `Intl.PluralRules` ResolvePlural and associated AOs take Intl mathematical values rather than Numbers

Presenter: Ben Allen (BAN)

* [proposal](https://github.com/tc39/ecma402/pull/1026)
* [slides](https://notes.igalia.com/p/tg1-normative-PR-1026-sept-2025.md#/)

BAN: This is another PluralRules change. Currently `Intl.PluralRules` cannot accept BigInts. This is largely the result of an oversight. Let me go to the actual PR. The key part here is that several years ago—I believe before NumberFormat V3, even—`Intl.NumberFormat` was updated to allow it to format BigInts. Previously it called the 262 AO ToNumber, which throws on a BigInt; `Intl.NumberFormat` was updated to instead call the 262 AO ToNumeric, which doesn’t throw on a BigInt, and I believe later, in V3, the concept of Intl mathematical value was introduced. That update was never made to PluralRules. So currently PluralRules throws when it is confronted with a BigInt. There’s no particular reason for that; it was an oversight, and PluralRules should be able to take everything that `Intl.NumberFormat` takes in this context. So this is the current behavior: if you give it a BigInt, it throws. The new behavior is that if you give it a BigInt, it correctly selects the plural rule. And this has approval from TG2.

BAN: So I am asking for consensus on making this change to allow PluralRules to select the proper plural rule for a BigInt instead of throwing.

MM: I have a question. I think it’s just a terminology question, so it doesn’t have any impact on the normative meaning of this change. As I understand what you mean by Intl mathematical value—I have to admit I haven’t paid enough attention to Intl to have come across it before—my understanding is that that category is a mathematical value plus the infinities, NaN, and minus zero; is that correct?

BAN: That is correct.
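A sketch of the before/after behavior described in this section (the BigInt comments describe the current spec and the proposed change; engines that have already shipped the change would return a string rather than throw):

```javascript
// Numbers already work with Intl.PluralRules:
const pr = new Intl.PluralRules("en-US");
console.log(pr.select(1)); // "one"
console.log(pr.select(2)); // "other"

// Current spec: select() goes through ToNumber, so a BigInt throws a TypeError.
// Proposed: BigInts are accepted, matching Intl.NumberFormat, which already
// formats them (e.g. new Intl.NumberFormat("en-US").format(1n)).
let result;
try {
  result = pr.select(1n); // proposed behavior: "one"
} catch (e) {
  result = e.name;        // current behavior: "TypeError"
}
console.log(result);
```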

MM: On the Amount proposal—the current Amount proposal being presented by you at this meeting uses the term "numerical value", which I think it defines to be exactly the same thing. Can we avoid using two different terms for the same concept?

BAN: So the reason for that—right now, this is one of the things we will need to discuss—is that in the future, since Amount isn’t a 402 proposal, it might be necessary or wise to hoist the concept of Intl mathematical value out of 402 into 262, at which point it would be renamed, since it would no longer be a purely Intl concept.

MM: What about going the other way around? Because calling something an "[adjective] mathematical value" when it includes NaN, especially, seems very bizarre to me. "Numeric value", as a concession to representation issues—"numeric" seems much more naturally to include representation, rather than just referring to what mathematical values in the world of mathematics denote.

MM: And since it never appears outside of the spec, there’s no observability to the terminology here. Would there be any objection to doing a global search and replace on 402 to change it to "numeric value"?

BAN: Let’s see. I don’t anticipate objections. But I think the reason I don’t anticipate them is that I have been working enough on Amount to think that this concept belongs in 262. I would be in support of changing the name in 402 preparatory to moving it to 262, to indicate that this is in fact not a purely internationalization-related thing that we have invented here.

MM: Okay. I pass.

WH: “Mathematical value” in a name implies actual mathematical values such as real numbers. There are uncountably infinitely many possible real numbers, so an implementation can’t actually represent arbitrary ones. Numbers restricted to some finite set of floating-point values such as Numbers or Decimals or whatever should be called something else.

MM: Okay. That’s interesting. I was looking at the fact that it’s a superset.
You were looking at the fact that it’s not a subset. Interesting. I don’t have a—I’m on the fence. I don’t have an opinion to state.

WH: We do have the concept of extended real numbers, which includes infinities.

MM: NaN and minus zero?

WH: That specifically includes infinities, but it’s natural to define one that also includes NaNs and minus zero.

MM: You’re in favor of "[adjective] mathematical value"?

WH: If what we’re talking about includes arbitrary mathematical numbers, then yes.

MM: And then BAN, I want to check with you: the intention in 402 is that it does include arbitrary real numbers?

BAN: Yes. It is a mathematical value, plus all of the things that a capital-N Number can be that are not mathematical values.

MM: Okay.

CDA: All right. That’s it for the queue discussion topic anyway. EAO is on the queue supporting the normative change.

BAN: That is all I have to say, other than it seems we have in fact reached consensus.

CDA: We have support from EAO at least. Any other voices of support for the normative change? Do we have any objections? Dissenting opinions? All right. You have consensus for this change.

BAN: Wonderful.

### Speaker's Summary of Key Points

Normative: Updates the following `Intl.PluralRules` AOs to take Intl mathematical values rather than Numbers:

* `ResolvePlural`
* `ResolvePluralRange`

This allows `Intl.PluralRules.select` and `Intl.PluralRules.selectRange` to take BigInts as arguments.

### Conclusion

* Consensus for the proposed change.

## Amount for Stage 2

Presenter: Ben Allen (BAN)

* [proposal](https://github.com/tc39/proposal-amount)
* [slides](https://docs.google.com/presentation/d/1cDQBcMzSAht9jZiuaMKAEIDlPmlSmjeBJ-sw23AySWI/edit?slide=id.g37deebb6a10_2_54#slide=id.g37deebb6a10_2_54)

BAN: All right. You’ll notice the strike-through. There have been fairly late-breaking changes and a lot of discussion since Friday evening. We were going to ask for Stage 2 for Amount.
Given the contention, we’re not expecting to ask for Stage 2 for Amount at this plenary, and instead this has become a Stage 1 update. It was on the schedule as something to be considered for Stage 2; if it seems like all the contention has died down—which I’m not anticipating—I would gladly ask for Stage 2. Without further ado, there we go.

BAN: So the things that I will talk about first are resolutions we have made to questions and concerns from July. People pointed us in a lot of very useful directions, and we have incorporated the feedback and suggestions that we heard. Then we have some open questions: three that were on the slides before Friday, and a couple of other things that were added since the discussion on Friday. And then "stage advancement, question mark"—but like I said, depending on how the conversation goes, I’m not necessarily anticipating asking for Stage 2 for Amount today.

BAN: Okay. So first, a quick recap of the changes that we have made since Stage 1. The big one is that we are no longer considering functionality involving arithmetic. This is not something for adding amounts to amounts; we are not doing math in this proposal. The numerics champion call decided that Amount should be a number with unit and precision; previously we had decided it should hold a finite mathematical value. In response to the concerns from July, we are changing to a domain covering all the input types: Number, BigInt, and numeric string. That’s the discussion we had about adding all of the values that are possible for Intl mathematical values: the infinities, NaN, and negative zero. So, as described in July, it was an immutable mathematical value optionally tagged with unit and precision; now it is an Intl mathematical value—per the earlier discussion of 402, the same concept as "numerical value". This is the proposed API as changed since we last spoke. Here is how we addressed the concerns from July.
The TLDR is handling the infinities, NaN, and negative zero. There was discussion on when we should round; the discussion at the last plenary indicated that everyone considered it appropriate to round on the way in and not on the way out. We also had some discussion in July of whether we should treat currency as something special rather than just another type of unit, and again the discussion pretty clearly landed on: currency should be another type of unit, and we should not have special handling for currency.

BAN: As we said, previously it was a finite mathematical value, and then we realized we want to be able to cover everything in the domain of Number as well as mathematical values—so the infinities, NaN, and negative zero. To avoid unpleasant surprises, Amount should have a domain that covers its input types, meaning that we need something equivalent to Intl mathematical value even though this isn’t an Intl proposal. I provided links. The discussions were clear, and people came to consensus that we should handle everything that Number can handle. We have a PR up for adding support for this. So, as I’ve been saying, we now support the whole domain of Number. The other open question was: when should we round? We are absolutely confident that the correct behavior is to round on the way in. So rounding can occur when we construct an Amount, or in a call to `with`, making a new Amount. Both of them have a roundingMode option that defaults to halfEven. The only rounding on the way out is that toString and toLocaleString may round when there is a very large number of decimal digits; they do not round based on options, and we are not doing math once something is in here. We round on the way in, and on the way out we do not.
So if we construct an Amount with a lower number of fraction digits—in this case zero—and we provide it with 1.2, what gets stored is 1. If we then use `with` to make a new Amount with four fraction digits, the 0.2 that got rounded off is lost, so we get a new Amount with four fraction digits showing 1. Again, the other item discussed in July: is "currency" special, like in Intl? Previously Amount had an Intl NumberFormat–style currency option. When we discussed it, there was no explicit support for having currency as a separate option along the lines of Intl NumberFormat’s, and currency identifiers are accepted as units.

BAN: That’s the resolution to the items from July. We have new open questions.

BAN: One is: what are the limits? That is a very big discussion, related but not identical to the limits discussion that SFC will be leading on the last day.

BAN: A smaller one: now that things can be NaN and negative zero, we have potential additional predicates isFinite, isNaN, and isZero. I believe one of these—isFinite—is already in there. The other thing that came up is parsing numbers with units in the constructor. There’s another one we will get to after stepping through these three. So, there is significant ongoing discussion, in 402 especially, on the limits for Amount values: whether they should be spec-defined or implementation-defined, and if they’re spec-defined, what should they be? I don’t think I will step through the discussions. Most of what we will talk about, like I said, will be a little downstream of SFC’s discussion, but we also have to consider limits related to BigInts here; it’s not exactly the same. So we can return to that one. Potential additional predicates: as I said, we have these additional values that Amounts can hold, and we want to be able to test for them—Infinity, NaN, and zero.
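The round-on-the-way-in behavior described above can be sketched with a hypothetical stand-in class. `FakeAmount` is invented for illustration and is not the proposal's API; `toFixed` only approximates the proposal's default halfEven rounding:

```javascript
// Hypothetical stand-in illustrating "round on the way in": the constructor
// rounds to fractionDigits immediately, so later widening via with() cannot
// recover the rounded-off part. Not the actual Amount proposal.
class FakeAmount {
  constructor(value, { fractionDigits } = {}) {
    this.fractionDigits = fractionDigits;
    this.value = fractionDigits === undefined
      ? value
      : Number(value.toFixed(fractionDigits)); // rounding happens here, on input
  }
  with(options) {
    // Build the new amount from the already-rounded stored value.
    return new FakeAmount(this.value, options);
  }
  toString() {
    return this.fractionDigits === undefined
      ? String(this.value)
      : this.value.toFixed(this.fractionDigits);
  }
}

const a = new FakeAmount(1.2, { fractionDigits: 0 }); // stores 1
const b = a.with({ fractionDigits: 4 });              // the .2 is already gone
console.log(a.toString()); // "1"
console.log(b.toString()); // "1.0000"
```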

BAN: And this one is, again, one that came up. Should our constructor be able to parse numbers with units? We have new notation here. Is `new Amount("137[kg]")` something that we want to support, i.e., an Amount with the value 137 and the unit “kg”?

BAN: This is the late-breaking one. I think if Jesse is around, I might punt to Jesse on discussing this. This is something that’s come up in TG3 a great deal.

JMN: Yes, this is something that’s come up in TG3 a number of times in recent weeks. It was noticed that we have a change to Intl mathematical value. This is thinking about the 402 aspect of what’s going on in Amount. It was noticed that the way we wrote the spec text there, we query some internal slots, basically kind of brand checking to make sure that what we’re dealing with is an Amount. But then it became a topic of discussion in TG3, because what is going on here is that we’re converting an Amount to an Intl mathematical value outside of the usual places where we use these internal slots: namely, we construct, for example, an Intl NumberFormat, then we call the format function on that, and then that function will end up calling toLocaleString on the Amount, which queries the slots as the spec text is written. There was discussion about whether that is problematic, and I think we have agreed it is problematic, especially when thinking about membranes and membrane transparency and proxies. So the challenge was to settle on some kind of strategy for not querying internal slots there. There is an issue for this, with a couple of proposals for how to get around it. We don’t think this is an insurmountable problem; it’s something that came up in review. Thanks to those who participated in TG3 for that. Is that the last slide?

BAN: I believe it may be the last slide. Let me go back to sharing. I don’t have the queue loaded up.

WH: Can you pull up [issue 54](https://github.com/tc39/proposal-amount/issues/54), please?
To give a bit of background for the flurry of activity that’s been happening over the last day: I reviewed the spec as it was linked in the agenda for the meeting. I found out yesterday that the linked spec was the wrong version. So the version I reviewed is a stale version; the actual version was only posted yesterday.

WH: I like the changes that are being done. A lot of the changes fix some of the issues. What I haven’t seen in the presentation is how we propose to resolve the issues with significant digits. There is a good approach for *fractionDigits*: the latest changes, which are being reviewed, will compute *fractionDigits*, although they still do it wrong for zeroes. For *significantDigits*, there’s the issue that the concept means different things in different contexts and is not really sound. So I’ve been suggesting that we not expose *significantDigits* as a getter.

WH: Furthermore, even providing *significantDigits* in constructor options has problems in the design as it is now. Just to give an example, if you construct an `Amount` with the string "0.00" and no options (or `{significantDigits: undefined}`), then you get the correct *fractionDigits* of 2, which means that the output will be "0.00". If you provide "0.00" with `{significantDigits: 3}`, then you get *fractionDigits* of zero, which means that the output will be "0". And there are a lot of issues like that in the spec which will need to be fixed, for which I’m not sure we’re all on the same page as to what direction to take.

WH: The thing I’m proposing is to not expose a *significantDigits* getter, and we will need to really think about what to do when somebody gives *significantDigits* as a rounding option, because it’s not obvious what should happen. The examples in the table here on [issue 54](https://github.com/tc39/proposal-amount/issues/54) show the current behavior is still the broken one. The proposed changes fix some of them, but others remain broken.
For example, an input of -0.00 with *fractionDigits* set to 1 turns -0.00 into "-0" with zero fraction digits on the output. So I support the changes that we’re making, but we still have a bit of work to do.

WH: And I’d like to hear if others have comments on the significant digits dilemma. The main issue with using significant digits is that the concept degenerates when the value is zero. Parts of the spec, in fact many parts of the spec, will just blow up and crash. The spec as it currently stands, even with the fixes, crashes when one provides zeroes. The behavior that I’m suggesting is the corrected behavior below, if you scroll down on [issue 54](https://github.com/tc39/proposal-amount/issues/54): if we only store *fractionDigits* in the Amount state, then things can work. The mathematical definition of *significantDigits* is one plus the difference in decimal position between the most significant digit and the decimal position of the quantum, that is, *fractionDigits*. That doesn’t work when the value is zero.

BAN: I believe I want to defer to NRO on this one.

NRO: So I also replied on the issue. For context, the problem of significant digits with zero is that there is no integer for which the formula that WH described works, and that comes up in multiple places in the spec. I still think the significant digits concept is useful, and maybe we could pick a value for zero: let’s say when the value is zero, we say zero significant digits, like NaN and Infinity, or we could pick negative infinity, which is what would, in quotes, make the formula work. But if we prefer to remove it, it’s probably fine, especially given that we can compute significant digits from fraction digits.

WH: Either 0, 1, or -∞ would be okay. My concern is that it’s an attractive nuisance for users to rely on *significantDigits* to specify the precision. That doesn’t work when the value is zero.
And unless people test for zeroes specifically, they’ll produce code that mostly works but fails on zeroes. Now, I’m not suggesting that we get rid of *significantDigits*, I just want to discourage its use. And if you are advocating for a *significantDigits* getter, then the question to answer is: is *significantDigits* of "0.00" the same as *significantDigits* of ".00"?

NRO: I don’t have an answer to that right now.

JMN: I just wanted to mention that I really appreciate this feedback. I’ve been looking through some of the comments today, and you have perhaps seen some of the discussion. I think what was going on was differing understandings of what fraction digits could mean and what significant digits mean. I appreciate the clarity that comes here. I think we have settled on a notion of fraction digits. I have a PR ready to go that should take care of those issues. I really went through each one and took a look, although it’s possible that zero might still be a problem. I think it would be a pretty straightforward change to just drop significant digits if we were really to make that decision. I like the idea of discouraging its use. I agree this does involve some subtle issues that programmers might stumble into unintentionally and unknowingly. But, again, I would also like to preserve the kind of equivalency that you refer to and NRO refers to and SFC may also have in mind. I think it’s important to keep that invariant in place. I think we can do that fairly quickly.

SFC: My queue item is just that, yeah, I would like, if we can, to retain that invariant. I think there’s a way to work it out, but we need to spend more time working with WH on the spec to make sure we do it correctly.

WH: I like the PR, but you’ll need to fix the rendering function to handle negative fraction digits. Currently it breaks.

EAO: That sounds good.
Just noting that I’m mostly aligned with SFC: we do have existing behavior in Intl NumberFormat for what significant digits mean and how they work. And I think they are very useful when given explicitly in a constructor, for the Amount to be able to say that the value has this many significant digits. But computing significant digits, I agree, is bad or potentially bad, and we should not be doing that. I do think we should allow an Amount to store significant digits if they are explicitly given.

NRO: Question on the topic. Are WH and EAO supporting that we keep significantDigits in the constructor, convert significantDigits to fractionDigits, and expose only fractionDigits?

WH: Yes, that’s what I’m proposing: allow passing *significantDigits* into the constructor but not expose it on `Amount` values.

EAO: My preference would be to keep the significant digits accessible, because there is value in being able to tell that the significant digits are something less than the number of integer digits in a value.

WH: The issue there is that the computed *significantDigits* can differ from the provided *significantDigits*, and that may generate confusion. There may be ways to work it out. It’s not obvious to me.

EAO: That’s why I was saying we should not compute significant digits; we should only accept significant digits when explicitly given, and otherwise have undefined for them.

WH: Okay. But what should happen to fraction digits when you provide *significantDigits*?

EAO: Effectively the same thing as `Intl.NumberFormat`. If you give significant digits, then we prioritize significant digits. But we do still report fraction digits, and the fraction digits we end up reporting is a number that is zero or a positive integer.

WH: Let me see if I can come up with an example. If you pass in "0.00" with *significantDigits* of 2, what should the resulting *significantDigits* and *fractionDigits* be?

EAO: So that was 0.03?

WH: "0.00" with *significantDigits* in the constructor set to 2.

EAO: Following the example I posted on the channel, that would be 0.00, which we calculate as having two significant digits and one fraction digit.

WH: So it would render as "0.0"?

EAO: No, "0.00".

WH: So where did you get the—

EAO: Because that is what we do currently with `Intl.NumberFormat` when you ask it for both minimum and maximum significant digits set to some value. Because we have the existing behavior in the language, I strongly think we should be following the existing precedent.

WH: So providing *significantDigits* in the constructor doesn’t affect fraction digits at all?

EAO: No, it does.

WH: How? What does it do to *fractionDigits*?

EAO: So, for example, if you’re constructing given a value "0.03" and give significant digits two, the value we end up constructing is "0.0" effectively, where the fraction digit count is one, the significant digit count is two, and the value is "0.0".

WH: That just seems really off. "0.03" to two significant digits should be "0.030". That’s what significant digits mean.

EAO: That’s not what it means in `Intl.NumberFormat` at the moment.

WH: That seems broken.

EAO: But it is internally consistent.

WH: I don’t think that’s consistent. But it’s clear to me that we have wildly different interpretations of what significant digits mean, so we should figure out what those mean.

CDA: I stepped away while Dan was chairing. I see the queue, but I’m not sure where we are at. Are we still on SFC’s topic or going to SFC?

KG: The queue was advanced. I haven’t had my topic yet. But also, we’re still on the previous topic, to which I believe NRO has a response based on the queue.

CDA: That is no longer appearing. We are actually at time. I think technically we had a couple of minutes to go, you know, due to other topics.
But given where the queue is, it sounds like we need a continuation, and I see BAN nodding, so probably we should just capture the queue here where it is and get a continuation scheduled.

BAN: I was thinking probably 30 minutes, ideally not today, and working with folks offline in the meantime.

CDA: Thank you. I know there was at least one constraint on this item; we’ll do the best we can, even though we might not be able to meet it given all the other constraints. We will definitely have time. I will do that. I have captured the queue. Let me just double check that I did. I do have it. And thanks everyone. That brings us to the lunch break. We’ll see everyone back here at the top of the hour. Thank you.

### Speaker's Summary of Key Points

(see final continuation)

### Conclusion

* Continuation

## Iterator Chunking for Stage 2.7

Presenter: Michael Ficarra (MF)

* [proposal](https://github.com/tc39/proposal-iterator-chunking)
* [slides](https://docs.google.com/presentation/d/12QAd-b2rPY5OC82ZwPCcDfGgzSfeSwdbcEoctUqQGss)

MF: Thank you. Iterator chunking again, looking for Stage 2.7 today. A bit of a reminder: as we last saw this proposal, it was three methods that we would add to `Iterator.prototype`: `chunks` and two windowing-style methods, `sliding` and `windows`, which only differed by how they treat iterators that yield fewer items than the window size. But all of these are similar; you can see the yellow section on the slide is the difference between them. And we were at that state because of the use cases that we considered and wanted to make sure that we covered. Slightly before the last meeting, we got two different pieces of feedback.
One piece of feedback, from KG, suggested that we just combine the two windowing methods into a single method where the small difference in behavior is differentiated by an option attached to the method. The other piece of feedback, from NRO, said that if we did keep them separate, the names really weren’t very good and we had to do something. We decided at that meeting to go with KG’s suggestion.

MF: So the proposal as of the agenda deadline looked like this. We have left `Iterator.prototype.chunks` unchanged, and we have combined the sliding and windows methods into one, where there is now an optional second parameter, called `undersized` here, that takes two possible string enum values: "only full" and "allow partial". "Only full" means only yield full-sized windows. "Allow partial" means possibly yield undersized windows, allowing them to be partial windows. The only constraint we were working with was that we didn’t want to use the word truncate in the options, because the feedback from last time was that truncation could be understood to apply to the stream (the iterator) or to the window itself. So we avoided those names. This is what we went with. That is what the proposal looked like as of the agenda deadline.

MF: It was pointed out to me last week that we actually did have precedent for kebab-case naming that I wasn’t aware of; I had arbitrarily picked space separators. So I made a last-minute change to switch to kebab case. If you were here before lunch today, you would have participated in the discussion about possibly setting precedent here and actually defining normative conventions for continuing to do it. But I wasn’t aware of it, so I didn’t make this change until the last minute. Hopefully we can decide to go forward with this kebab-case naming for these strings. And if it’s not an issue that this change was made kind of last minute, I would like to go to Stage 2.7.

MF: I have my summary here.
I don’t think I need to read it out. And I’m open for the queue.

RPR: Any questions for MF? Or comments? NRO.

NRO: Thank you for the proposal. I can support this advancing.

RPR: We also have support from DLM and a question from SHS: was it discussed to default to throwing on undersized if not specified?

MF: We have discussed that at multiple plenary meetings and there’s an issue for it. My main opinion there: I think KG was slightly supportive of having a throwing ability, which this design does not rule out if we wanted to do that in the future. But I think that it’s an anti-pattern to throw on, you know, a particular kind of input here for the iterator helpers. So I don’t support it. As of what I know right now, I wouldn’t support it as a future extension. But the future extension is possible given our API.

KG: Not changing the default, which is what the question was.

MF: The default. Yeah, the default is currently `only-full` and that was chosen based on the use cases that were most common.

KG: I do still personally slightly lean towards throwing, but I’m fine with the current behavior. I also wanted to say that I’m happy with this choice of names, since there was discussion about that last time.

SHS: I’m also fine with that. I wanted to bring it up to see if it was discussed.

RPR: Thank you Steve. All right. That is the last item on the queue at the moment. So we have heard only support, and a little bit of clarification. Should we go to the formal ask for 2.7?

MF: Yes. I would like to ask for Stage 2.7 for iterator chunking as presented here, including the kebab-case names that were changed after the agenda deadline.

WH: I support this.

RPR: JRL has come in with a question.

JRL: (trying to speak while clearly under water)

JRL: Completely in support of kebab case. I agree with KG that the default should be "allow-partial" because it matches the chunking case instead.
If we have to support both types with an enum: it seems that the argument from last time was about wanting the tuple type in TypeScript, the type with known constant length (a type with four elements, for example). But now we’re switching based on the enum, so it seems like either way we will have to support both cases in TypeScript. If we have to support both cases, I would rather have the one that matches the chunking behavior, which gives you all the values all the time.

KG: Sorry. To be clear, I’m not supporting switching to "allow-partial". I would be supportive of throwing if it is undersized and neither option is specified. I think that "only-full" is the better default if we're picking one. I see the analogy to chunking, but if you actually go through the use cases that we think this API is for—and maybe MF can speak to this more—I was convinced that most of the use cases actually don’t want any output in the case that it can’t provide a full-size window. So it's different from chunking just based on the use cases.

JRL: Then I hate throwing the most. I don’t want to support that one.

RPR: All right. So I think we’re back to the formal request for 2.7 as MF stated before. We heard support from WH. Are there any objections to Stage 2.7? All right. I’m not hearing any objections. We have had multiple people stating support. So congratulations, MF, you have Stage 2.7.
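For reference, the behavior discussed above can be sketched as plain generator functions. This is a hypothetical polyfill-style sketch under my reading of the discussion (the real proposal defines these as `Iterator.prototype` methods, and its exact edge-case semantics are defined by the spec text, not by this sketch):

```javascript
// chunks: partitions the items into consecutive groups of `size`;
// a final undersized chunk is yielded if items remain.
function* chunks(iter, size) {
  let buf = [];
  for (const x of iter) {
    buf.push(x);
    if (buf.length === size) { yield buf; buf = []; }
  }
  if (buf.length > 0) yield buf;
}

// windows: yields every sliding window of `size`. The `undersized`
// option only matters when the iterator yields fewer than `size` items:
// "only-full" (the default) yields nothing, "allow-partial" yields
// one undersized window.
function* windows(iter, size, undersized = "only-full") {
  const buf = [];
  let yieldedFull = false;
  for (const x of iter) {
    buf.push(x);
    if (buf.length > size) buf.shift();
    if (buf.length === size) { yield [...buf]; yieldedFull = true; }
  }
  if (!yieldedFull && undersized === "allow-partial" && buf.length > 0) {
    yield [...buf];
  }
}

[...chunks([1, 2, 3, 4, 5], 2)];        // [[1, 2], [3, 4], [5]]
[...windows([1, 2, 3], 2)];             // [[1, 2], [2, 3]]
[...windows([1], 2)];                   // []
[...windows([1], 2, "allow-partial")];  // [[1]]
```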

### Speaker's Summary of Key Points

* as discussed last plenary, combined windows/sliding into a single method
  * adds an "undersized" parameter to the windows method
  * value can be omitted, "only-full", or "allow-partial"
  * defaults to "only-full"
  * keeps the door open for a future options bag if more options are desired
* didn't request re-review from Stage 2 assigned reviewers
  * change is small and the merged PR was discussed in plenary
* made a last-minute change to follow de facto kebab-case convention
* requested Stage 2.7

### Conclusion

* proposal advanced to Stage 2.7, including the last-minute change to kebab casing

## `Array.prototype.pushAll` for Stage 1

Presenter: Daniel Rosenwasser (DRR)

* [proposal](https://github.com/DanielRosenwasser/proposal-array-push-all/)
* [slides](https://danielrosenwasser.github.io/tc39-slides-2025-09-array-push-all)

DRR: I work over at Microsoft on the TypeScript team. I’m here to propose, for Stage 1, a method for appending elements to the end of arrays in batches, from iterators and other arrays. I kind of started off with a concrete name because this is maybe where I would like to envision things going, but I think there’s some flexibility there. Let’s go through the actual proposal. Today, if you want to take an array of items and append all of their values to the end of another array in one shot, the most idiomatic way to do it in JavaScript is to use spread arguments on the push method. So here you have some new elements, you have an array `arr`, and `arr.push(...newElements)` unpacks the elements in `newElements` and pushes them into the array.
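The idiom being described, as a minimal example:

```javascript
// The spread-push idiom: append every element of newElements to arr in one call.
const arr = [1, 2];
const newElements = [3, 4, 5];
arr.push(...newElements);
arr;  // [1, 2, 3, 4, 5]
```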
The problem with this is that every time you do a spread argument, it places each item from the iterator or array onto the stack. So as an example: let’s say you have this repeat helper generator that takes a value and a count. We want to repeat some string 200,000 times, and then we just say, spread all those and push them into the array. What will happen is that you end up with a RangeError that is really, specifically, a stack overflow, right? Every single time you do this, you will basically exceed the stack limits. In theory there could be a runtime that has a very, very large stack and doesn’t hit this, but of course it’s implementation-defined, and people hit this very, very often. So you have 200,000 elements, each of the elements gets pushed onto the call stack as individual slots, and you basically don’t have enough stack space to model that, right? So the workaround, if you ever need to do this, is to go back to what people did before spreads existed: the `for`–`of` loop, or whatever existed before that. You get the idea. You can push each of the individual elements, or use `.forEach` and push those. Depending on whether you need to model this as an expression or are willing to split it into multiple statements, you can pick one or the other.

DRR: But there are a couple of problems with that, right? As you might have noted, this is a little bit more verbose; there is a difference between `...` and multiple lines or a single line. It also probably leaves perf on the table. Any of the approaches today either is probably not optimized at the low optimization tiers, or it has the risk of a stack overflow and probably isn’t optimized anyway.
What I mean is, every single time you push, you might push over and over again until you hit the capacity of the backing storage for that array, and then that thing has to be reallocated and resized, over and over again, and you can end up with quadratic behavior. Ideally, if people had a helper function to solve this, they would prefer that, right? That would be more optimizable at the baseline.

DRR: People write helper functions to avoid this. This is not a hypothetical problem. This has happened on basically every team that I have worked on at some point in time, right? Either you start off not really thinking about the edge cases, or you get a big array from somewhere on the other side of the wire, and then you try pushing all of those at once, and then you have the stack overflow. So basically you hit this in production. You can’t predict when you hit this extreme case. It’s not even obvious there’s a problem in the first place until you hit it in production, right? If you knew there was another function to do this, you would go towards that function. But instead, you have found this sort of idiomatic way, and unless someone warned you or put a lint rule in place, you will end up writing this and causing an issue later on. So, the proposal. I should say I’m trying to be more abstract about this, and maybe I was a little bit rash in saying I wanted a specific prototype method. The way I would envision this, and hope it being, is a method like pushAll or pushFrom or something like that that exists on the Array prototype. There are some subtleties in the function, but more or less it would look something like this: take an iterable and an array, go through all the elements, and push them onto the back of the array. With pushAll, right, this becomes very natural. It is just a method. It’s auto-completable, right? And it has safe runtime behavior.
You don’t risk the stack overflow problem, or you’ve mitigated it significantly. It’s more discoverable: in your editor you hit dot on the right side of an array and immediately push and pushAll are right next to each other.

DRR: It’s more optimizable, or more easily optimizable. No engine can optimize every single pattern in the universe; they wouldn’t have to if there were a good method that did the ideal behavior in the first place. And then it’s also just less boilerplate, right? You don’t have to write the helper function or for loop. It’s just there. Here is how other popular languages do this: Python has `append` for individual elements and `extend`; Java has `add` and `addAll`; C# has `AddRange`, and `Concat`, which gives you a new array. C++ has `push_back` for an individual element and doesn’t really special-case adding to the end; instead you say, give me the iterator position of the end, and then the iterators for the start and end of the new elements from another collection. This is nice in other ways. I would like to also just make it easy to push a lot of elements to the end of the array, and then the engine can go optimize this, because it can more easily figure out the pattern; it’s statically optimizable and also has lots of interesting characteristics about how it models its call stack. I’m open to other names and to thinking about this a little bit differently, but I think a prototype method would be the best thing. I should say this is the discussion point; I forgot to add the slide in. I would like to open the floor to discussion right now.

RPR: All right. You have plenty of queue. We’ll start with JHD.

JHD: Yeah, so I’m very, very skeptical that large arrays are common at all. There are certainly people whose domains tend to run into it a lot, and I can imagine that with the TypeScript compiler you run into it a lot.
What you mentioned, for example, about sending things over the wire: if you’re sending 200,000 items over the wire without pagination, your performance problems are not in the push method, they’re elsewhere. In order to eke out the best performance possible, you have to shard the data, in which case you won’t run into this problem. I don’t think large arrays are common enough that this is worth looking into, personally. Go ahead, and you can respond before I say my second part.

DRR: I’m just surprised, because this has happened on several teams, and all of them had to tell newer engineers to watch out for this. The most recent occurrence, the one that prompted me to finally open the proposal: yes, it happened on VS Code and on the TypeScript compiler and many others. Maybe it doesn’t happen in most front end websites, but it definitely happens in a lot of JavaScript code. Wouldn’t it be nice to avoid the problem in the first place, right?

JHD: Sure. But I’ve been telling everybody about the problem for a decade, not because of the size of the arrays (that is never the issue) but because the iterator protocol is always slow, and people reach for it because it’s so ergonomic and easy to do. So I understand there are issues, and it’s unfortunate when you have to tell people, here are the best practices, you can’t do what the language lets you do. That’s just programming.

DRR: Yeah, I mean, it sounds like you raise another good point for why it might be worth looking into. I personally consider both of those valid reasons why this could be a helpful thing to add.

JHD: I will concede I’m sure there are specific domains where it is maybe very common, right? If you’re working with lots of parsing and you have to parse potentially large files (which covers TypeScript, Babel, et cetera), or doing data number-crunching stuff. In front end web code and back end web server Node code, it’s not my experience that it’s common.
The second part was: there was some issue that asked, what about the other array mutating methods that take variadic arguments, like splice, and I forget the other one. I think splice is a terrible API and we shouldn’t try to replicate it further; I wasn’t happy about toSpliced, but it wasn’t worth fighting. If I’m not convinced about the motivation for push, I’m not sure why it would be worth adding the other ones. And then I guess the other thing before I yield the floor: I am also skeptical here. I know there are lots of issues with concat, namely `Symbol.isConcatSpreadable`, but I would rather see a solution that produces a new array instead of adding another mutating method to arrays, whether it’s static or prototype. I don’t think we should add any more of those ever. It’s not a good programming practice we should be encouraging.

DRR: The array has to be made at some point. If you have local mutation—I mean, I want to get through the queue.

ZTZ: It’s not about it being common, but about the idiomatic way not exploding, even rarely.

JRL: To give another point of view, I find this extremely common. JHD mentioned he rarely hits this; for me, any time I’m doing parsing work, like the HTML depth-first traversal I just wrote where I didn’t want to create a new array, I hit this all the time, in Node code and client code. I would love to have an idiomatic way to do it.

MF: I will preface this with: I have seen this a lot and run into this a lot personally. I understand it’s a problem. This is kind of sneaking my later topic into this one, but it aligns with what JHD is saying. I think that any time I have seen this problem, there’s already a problem to begin with. You’re already doing the improper thing by working with very, very large arrays, and you should be doing something with iterators or generators, or just not realizing these huge arrays to begin with.
And it’s kind of a good thing that you get an error here because it prompts you to say wait, am I doing something really dumb? In all cases, yeah, you are. And you should do something smarter that will have tractable performance. You don’t want to work with the huge arrays and doing the huge operations because you’re just—even if you can get it working, as in not throwing and giving you a correct result, it will be just impossibly slow and useless from that standpoint. I don’t object to this research area. I’m not that skeptical of it. But I do want to see us actually like justify those use cases as these are actually problems that will be solved because after they get past the throwing error, they have reasonable performance that they can actually do something with it. + +RPR: We have got about six minutes on the queue and about seven people to get to. Kevin. + +KG: Response to MF: Two things. One, the claim that if you are doing this you are doing something wrong is just false. 20,000 items is not an unreasonable number of items. Computers can deal with 20,000 items without being slow, or 50,000 or whatever the practical limit is. But that is not an unreasonable amount of data to be working with. It’s not inherently slow. Doing it a different way than arrays even at that scale is almost always going to be slower because arrays are quite fast and keeping the data linear instead of lazy is often faster. And then second thing is that often what is happening here is that most of the time it’s not actually going to be that large. So what you want to do is write code that is reasonable and performant and maintainable for the common case and have it work in the uncommon case. But if we don’t have this, then you can’t do that. 
Because it will explode in the uncommon case, so you either have to write all of your code as if it is going to be the uncommon case, or you have to have a branch based on some arbitrary constant, where you have two copies of the code, one of which does something in a more awkward way just to deal with the uncommon case. And neither of those is good. You want to just be able to write the idiomatic thing that works in the common case and doesn’t explode in the uncommon case.

KG: OK, next topic. I am strongly in favor of this API existing. Everyone needs it. Contrary to JHD’s claim that this doesn’t come up in practice, it does come up in practice in my experience. That said, we really need explicit buy-in from browsers before adding things to Array prototype. All of the browsers.

DRR: Yeah. I’m also seeking to have discussions with browsers here and beyond as well. I didn’t have the time to do outreach, I will be honest with you, for every vendor prior to this meeting. But I wanted to get things going and have the discussion here.

RPR: KM from Apple.

KM: Maybe one comment: your example of stack overflow works just fine in JSC, but I don’t know if that’s because we have optimized it in some way. Overall I think I’m neutral on it. There are going to be problems because it’s Array prototype. Most things, even in the most optimizing tiers, will still end up being slow; with concat we do the optimization of replacing the new object with the old object rather than allocating the new thing, but if you have to grow the array by a lot, you have to allocate the backing storage anyway, so it’s a wash. Concat is super fine-tuned to basically be a memcpy. I mean, I’m kind of neutral on the proposal, I guess, overall. But I’m curious: is the intention here largely about iterables, or just about a large collection of data?

DRR: Large collection of data.
I mean, really the more practical thing is you’re taking arrays and concatenating and appending all of the values of those arrays, over and over or at some point, which means you would want to optimize based on whether the object here is actually an array: I don’t need to go through the iterator protocol in those cases. In fact, I would be open to specifying it specifically like that, to avoid some of the edge cases that other runtimes have hit too, Python being one of them. So it’s interesting that you mentioned some of that stuff. It’s being discussed in the delegates’ chat that it’s questionable whether you want that differing behavior, or rely on the differing behavior, or want to say engines all need to have some sort of optimization. A clarifying question, though. And I guess the question is: is this taking an array-like or an iterable?

DRR: It can take—I’m open to specifying that. I think it can special-case array-likes and also work on iterables. That was my intent on that.

JHD: Any prototype methods that take an array-like or take an iterable, I guess, is the—because I’m pretty sure that the only place where we take the iterator around arrays is `Array.from`, which handles array-likes as well. And everything else only takes array-likes. Would your use case be satisfied by array-likes?

DRR: I think that would be surprising for people, but fine. I mean, one of the names was push from, and maybe that was a little bit inspired by `Array.from` too. But, yeah.

RPR: Just a quick time check. We are at time. But we could go for an extension if people are happy with that. We could do up to six minutes. Please go ahead.

MF: People have touched on this a little bit before. So this proposal doesn’t have any fundamental new capabilities; you can do this yourself. This is strictly a question of ergonomics, right?
We have heard in the past from implementers that, with varying levels of severity, they have no interest in adding new `Array.prototype` methods anymore. If that is the case, we don’t have to answer this pre-Stage 1, but during Stage 1 we should figure out if that is a possibility. If it’s the case that we can’t add this to `Array.prototype`, is there an alternative that is really solving the ergonomics issue better than the current state of the art, which is not terribly unergonomic in my opinion?

DRR: I think it would be nice if it would just be another method on arrays. Also nice if it was just as discoverable as the push method too. So I think that that’s a key reason that I would really want this to be a prototype method. I’m not against going for another location, but I think I would like to have that driven by feedback from browsers and web compat. And I would like to alleviate as much of that work as possible, and I am open to feedback on that and rethinking things.

WH: I’m trying to figure out if this is a mutating version of `concat` or something materially different. Could you make it be like a mutating version of `concat` where you can supply any number of arguments and just concat everything in place rather than creating a new array?

DRR: I think that that is certainly possible. You know, I’m open to this thing taking multiple collections as its arguments, so not just one necessarily. That way it would model something like concat. The key thing with concat that I think is a little bit strange is that concatenating iterables and concatenating individual elements has behavior that I think is sometimes a little bit unpredictable and sometimes a little bit harder to model. Whereas, and I kind of go back to the stop-coercing-things presentation a while back, I would like things to just be more consistent and predictable, so that you have the idea that, oh, I need to push this and that in this very consistent way, and can predict how this is going to operate.
And that way you don’t have an issue with pushing strings; you have to push an array of strings if you really end up needing that behavior. That’s kind of where I’m thinking for that. We could use concat as the naming. I think that would be a little bit surprising if we’re trying to avoid the behaviors of concat. So I understand why that is—I understand why the current state of the world makes it a little bit undesirable, but I think we can still create a new good API here as well.

WH: Would we have a good solution for flattening a bunch of things and appending them all to the end of an array?

RPR: I think so, with our flat method that we have as well. Maybe we can discuss that in more detail. I don’t want to push on the time box.

WH: What should this do if you push an array onto itself?

DRR: Yes, that’s a very good question. I don’t think that’s something we should block on in Stage 1. But some other runtimes say the behavior is undefined, and I wouldn’t want to do that. If you have an array pushed onto itself, this is why I wanted to special-case array-likes—or just arrays—so that you don’t end up endlessly running through the newly appended elements of the array: you run through the current length, capture that, and then append those elements. That was what I was thinking there. I’m open to feedback and iterating on that, no pun intended, after Stage 1 if I can get consensus on that.

RPR: Stephen Hicks makes a good point that Stage 1 doesn’t require us to work out every detail. All right. We have a couple of minutes left. EAO.

EAO: I’ll kind of be echoing exactly that previous point. As I understand it, the problem being presented here, the one that we ought to be assessing for Stage 1, is that doing `.push(…array)` in some cases is not good.
Rather than jumping from there to deciding that we should adopt a push-all or push-from or any other specific solution, I would really want quite a bit of work to be done under this to figure out: could we not just optimize the implementations where this is a problem and make `.push(…array)` just work? Because that is existing syntax that, from a syntax point of view, has quite nice semantics and ergonomics, and we should not need to add a new method or new functionality for this. We should make the thing we already have just work. Just to clarify, again, not a SpiderMonkey position here. This is just me. I don’t even know what SpiderMonkey really does with this code.

DRR: Yeah, this is a really good question. Everybody on my team asks me the same thing, and I think the—I’m open to discussing it, right? I think the key thing there is: is it just more implementation-defined behavior, and does it only apply to arrays? If it is only applicable to arrays, that’s more of a tax on engines to actually look out for that specific pattern, to think about whether this is really an array, and to worry about whether you have a consistent type there that is truly an array. And then also, what happens if you add a layer of indirection? I have a function that eventually does the spreading but itself also—sorry, a function that does the pushing but itself takes something that eventually gets spread. And that’s not desirable either, right? That can still stack overflow. That sort of thing. It’s a little bit brittle. It’s a little bit fingerprintable, right? You can use that as a way of knowing what browser I’m in, but arguably you can tell that from the stack size in some cases too. But, yeah, I think that that is something I’m willing to hear about from more implementers.
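As a concrete sketch of the pattern under discussion (the helper name `pushAll` and the chunk size are hypothetical, not part of any proposal): spreading into `push` passes every element as a call argument, so a large enough input can exceed the engine's argument limit, and today's escape hatch is manual chunking:

```javascript
// Hypothetical helper illustrating today's workaround. `CHUNK` is an
// arbitrary constant -- exactly the kind of magic number criticized above.
function pushAll(target, items) {
  const CHUNK = 10000;
  for (let i = 0; i < items.length; i += CHUNK) {
    // Each call stays under typical engine argument-count limits.
    target.push(...items.slice(i, i + CHUNK));
  }
  return target;
}

const big = new Array(1_000_000).fill(1);
const out = pushAll([], big);
console.log(out.length); // 1000000
```

A plain `out.push(...big)` at this size may throw a `RangeError` depending on the engine and its limits, which is precisely the "research problem" described later in this discussion.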
RPR: Given the time, because we’re already quite over, I’m seeing a reasonable amount of support, and also from what we are hearing as well, if this is not blocking feedback, I’d suggest that we could go to a call for Stage 1.

DRR: Can I have Stage 1? Do we have any objections?

WH: I support Stage 1.

RPR: Thank you WH. We have support on the queue from Mark. Go ahead Kevin.

KG: We try to go to Stage 1 with the problem statement, not the shape of the API, and I would like the problem statement for this to be extensive enough to include splice and unshift, even if we ultimately decide we don’t want to do those things. It’s basically the same problem.

DRR: Okay. The problem statement is: JavaScript should have a function to batch-append and insert elements from arrays and iterables that is hardened against stack overflows and also optimizable?

JHD: "Should have a function" isn’t a problem. So the problem statement that I’m hearing from you is: appending large arrays onto an existing array will throw at an arbitrary array size. Like, I don’t know, that’s not the exact phrasing I would use, but that’s the problem. The function is one way to solve that. It’s probably not the only way. But that’s what we would be exploring. That’s basically my queue item. I’m totally on board with supporting the actual problem, but I remain very skeptical about the current proposed solution, which would block Stage 2 but not Stage 1.

WH: The problem as I see it is that people have to do research to figure out whether they can use the `push(…a)` form. Even if it works on implementations they try, it might not work on some other implementation that their users use. It’s turning something that ought to be simple into a research problem. We should make it simple again.

KG: I support that problem statement.

RPR: So the refinement of the problem statement is that it would not require research.

RPR: All right.
So again I think I’m only hearing support. Lots of it. So, Daniel, I would ask: can you write down the problem statement for the notes? But I think that it’s safe to say that we have Stage 1.

JHD: As soon as I have that problem statement, that will be like the name of the proposal that I put in the proposal repo. So just ping me privately or whatever when you have to

### Speaker's Summary of Key Points

* Issues around stack overflows and iteration protocol speed are motivating enough.
* The specific API may generalize beyond only appending at the end. The problem statement should try to stay general.
* Seeking feedback from implementers on whether current idiomatic patterns can consistently be optimized.

### Conclusion

* Problem statement: It should be straightforward and safe to bulk-add multiple elements to an existing array.
* Achieved Stage 1

## Normative: change PromiseResolve species check

Presenter: Mathieu Hofman (MAH)

* [proposal](https://github.com/tc39/ecma262/pull/3689)
* [slides](https://mhofman.github.io/proposal-native-promise-adoption/slides/2025-09-pr-3689/)

MAH: So the first topic I would like to discuss is a normative PR for changing how PromiseResolve works when it encounters a native promise. It is a very simple change, it’s a very short change, but it has potentially wide consequences. The first thing is: what are we trying to solve? The problem is that await relies on the PromiseResolve operation, and await is designed so that, even in the face of a `Promise.prototype.then` pollution, user code isn’t actually able to affect how async/await works internally. So in the case of an async function result being awaited, any `Promise.prototype.then` pollution will not actually be able to interfere with how the internally generated promises work during the awaiting. This was something introduced in a PR a few years ago.
This was originally an optimization for the number of ticks that await took, and it has the great benefit of making await work more as you expect, even in the face of malicious code. This is all fine and true until you have a second pollution, of the `constructor` property on the Promise prototype, at which point the PromiseResolve logic bails out and decides: no, I don’t have the promise that I expect, and I will be rewrapping—recreating the promise, and then triggering the `then` behavior that is polluted.

MAH: Back to the motivation: it is in fact still possible for user code to muck around with how async/await works, even though users actually don’t end up handling any promises. And this actually came up in a real issue that was filed against Node.js a couple of weeks ago. What happens is that Node.js implements operations for some web specs as JavaScript, in this case as async code, and because of this sensitivity to pollution, it is arguably not following the spec, because if you look at the observable effects of prototype pollution in native implementations, pollution does not affect the internal operations of how these spec algorithms work. In this case, it was the Web Crypto spec.

MAH: So what is happening exactly? Currently the PromiseResolve operation that is used by await and a few other operations in the spec does a brand check for the value, and if it is a promise, it will look up the `constructor` property on the resolution value and compare it with the expected constructor that the caller wants. In the case of await and all the other operations in the spec that are not directly driven by an API call from the user (`Promise.resolve`), the constructor here will be the intrinsic `%Promise%`. `PromiseResolve` was actually extracted originally from the `Promise.resolve` implementation.
And that function, that static method, is described as a function that either returns a new promise resolved with the argument, or the argument itself in case the promise has been produced by the constructor. So it is really meant as an `instanceof` check: if it is a promise that I created, I will use it directly because I know how to handle it. But this is a check that ends up using `.constructor` instead of doing what `instanceof` does. Besides interfering with promise adoption, it allows observing any time native promises are handled anywhere, if you override the promise prototype `constructor` with a getter.

MAH: What I’m proposing here, based on an idea from KG, is to replace this check with a prototype check. So really what we’re saying here is we’re changing from trusting the value to tell us about its constructor, to asking the constructor to identify its instances. So because the value is a native promise, asking for its prototype is not observable. In all the spec cases that we care about, the constructor to check against is `%Promise%`. So asking for its `.prototype`, the prototype used by the instances, is also unobservable, and the whole check is completely unobservable for native promises, and await goes back to not being tampered with by prototype pollution of the Promise prototype. This check is actually more similar to `instanceof`. It’s just a little more strict than the `instanceof` check because it doesn’t do the prototype walk. We’re asking the constructor for the exact instances that it created.

MAH: So there are a few questions. What does it mean for web compatibility? Well, first, all the non-native promises or thenables are not affected. Any promises created by another constructor are not affected. It doesn’t change the behavior of PromiseResolve or await in any way that is meaningful.
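To restate the two checks in code form (an illustrative sketch of the spec logic, not the spec text; the helper names are hypothetical): the current check reads the observable `.constructor` property, while the proposed check reads the value's [[Prototype]], which is unobservable for an ordinary native promise:

```javascript
// Sketch of the current, `.constructor`-based check.
function isOwnPromiseCurrent(value) {
  return value instanceof Promise && value.constructor === Promise;
}

// Sketch of the proposed, prototype-based check.
function isOwnPromiseProposed(value) {
  return value instanceof Promise &&
    Object.getPrototypeOf(value) === Promise.prototype;
}

const p1 = Promise.resolve(42);

// Simulate the pollution described above: a getter on `constructor`.
let reads = 0;
Object.defineProperty(Promise.prototype, 'constructor', {
  configurable: true,
  get() { reads++; return Promise; },
});

isOwnPromiseCurrent(p1);
const readsAfterCurrent = reads;   // 1: the `.constructor` getter fired
isOwnPromiseProposed(p1);
const readsAfterProposed = reads;  // still 1: nothing observable happened

// Restore the original data property.
Object.defineProperty(Promise.prototype, 'constructor', {
  configurable: true, writable: true, enumerable: false, value: Promise,
});
```

The proposed variant never touches the polluted getter, which is the point of the PR.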
Where it is different, and where I would expect it is observable, is if anyone has added an own `constructor` property to the native promise, or if someone has modified the `constructor` property on the promise prototype. I cannot imagine a legitimate use case for these things, but we should definitely measure them in the wild, because this is a pretty overarching change. The other direction is: a value that was previously passed through will now get wrapped by a new promise and will not be recognized as a value that was just passed through. The only thing I can think of is a derived promise (a native promise with a prototype that is not `Promise.prototype`) that is modified such that its prototype has a `.constructor` pointing towards the `Promise` constructor. I believe that’s extremely unlikely to be encountered, and safe, because we will wrap this in a new promise and the resolution behavior will remain correct.

MAH: So, yeah, that is what I’m asking. I can show the actual change. I mean, I showed it earlier in pseudo-code in the slide. But the actual change is pretty simple. It is changing from a `.constructor` lookup on the value to a `[[GetPrototypeOf]]` of the value: an equivalent check, but one that is unobservable in the case of await.

Let me get to the queue.

RPR: The queue is very empty.

MAH: The queue is empty. I’m surprised.

KM: I don’t know if anybody actually does this, and maybe there’s some better way to do it, but I could kind of imagine some kind of weird pre-emptive multi-threading library framework horribleness where someone tries to intercept the constructor to intercept your promise accesses and then suspend you weirdly or something.

MAH: Someone can go in and interfere with the internal promise handling that happens on async/await. If someone does that, they should be shot, because that means you’re basically rewrapping every promise you encounter at every operation.

KM: That’s horrible for performance.
I can just imagine, I mean, it’s like: your thing runs 5,000 times faster, does it matter if you run it cooperatively or pre-emptively? Or, making everything so slow, are you saving anything? I can imagine people trying this. I don’t know whether people do it, do you know what I mean? That’s probably the biggest **“use case”**. I’d probably put the quotes there, should be in bold!

MAH: I am not putting it past someone to be doing this. That’s why a big part of what I’m asking for will be instrumenting existing engines to see if this is web compatible. The check to add is pretty straightforward. It’s just that I’m not expecting to have approval right away without data. I think we really need data on this. What I’m asking really is: are people in principle open to making this change, and are engine implementers willing to instrument the engines to see if this would be compatible?

KM: I guess I will say I don’t have a problem with the change. Safari doesn’t have the technology to do that instrumentation, so we probably can’t do that for you, unfortunately, but…

MAH: I would love to hear from the other browser implementers to know if they are interested in this kind of change.

RPR: At the moment, none of the other browsers are on the queue.

NRO: Do we need to collect data? The presented change… it seems so minimal that it’s unlikely to affect any website, and we should just ship the change. In the past we shipped things that were similarly small, and we were confident they would work without collecting data.

MAH: I know people do a lot of horrible things with promises. So that’s why I was thinking we should probably collect data. Yeah. I think Daniel is saying the same.

DLM: Yes. So we discussed this a bit internally. And yeah, in general, some people think we should have some data before we move ahead with accepting the normative change.
I guess we are—we can definitely do this, but yes, if V8 is willing to do the instrumentation too, that would be great.

OFR: Yeah. Sorry, something with my name didn’t get transferred from GitHub. I don’t know. Anyway, yeah. We are kind of neutral on this one in that sense, but if it moves forward and we want to collect this data, we can help with that.

MAH: Great. At this point, I would like to ask for—how does that work for normative PRs?

RPR: We say this is approval in principle, which would then lead to the data collection.

MAH: All right. So I would like to ask for approval in principle for this change.

RPR: Any support? +1 from KM, JRL, JSL, ZTZ.

MAH: Great. Any objections?

RPR: I will say that you have no objections. So congratulations! You have approval in principle. And, thanks to OFR for—

MAH: Thanks, everyone. And looking forward to checking in with the engine implementers; I will be in touch to figure out how we can get that in, since it’s approved in principle. All right. Well, I guess I am the next topic, so I’ll move on to the follow-up topic, which is very, very similar.

### Speaker's Summary of Key Points

A pollution of `Promise.prototype.constructor` can be used to force operations like `await` to drop from internal promise state adoption to assimilation through `.then`. Whether a promise is adopted is determined by a check in `PromiseResolve`. The PR changes the check from a `.constructor` lookup to a `[[GetPrototypeOf]]`-based one, which is equivalent in nature but not observable in common cases. Since there is a small web compatibility risk, we want to measure how often this change would impact existing deployments.

### Conclusion

Approval in principle, pending measurements by engines that this change is web compatible.
## Native Promise Adoption for stage 1

Presenter: Mathieu Hofman (MAH)

* [proposal](https://github.com/mhofman/proposal-native-promise-adoption)
* [slides](https://mhofman.github.io/proposal-native-promise-adoption/slides/2025-09-stage-1/)

MAH: This is talking about promise adoption, for Stage 1. This was actually originally part of the same PR we just discussed. But there were more concerns with this part of the change, and it was suggested to spin it out into a separate change that would actually go through the proposal process. So here it is. Sorry if you didn’t see this on the agenda originally, but all the content was in the original PR. Hopefully nothing is new.

MAH: Motivation. Same thing. We want to prevent promise prototype pollution from having surprising effects on async code. I already talked about what a promise prototype pollution looks like.

MAH: We have already seen how a promise prototype pollution can attempt to interfere with or observe the outcome of a promise’s resolution. So let’s imagine some library code written using async/await. This is derived again from the use case I mentioned earlier: Node.js writing a Web Crypto implementation in userland JavaScript. And what happens is that if you simply await, the operation is unobservable, notwithstanding the constructor discussion we just had, for which we can assume the constructor pollution is fixed.

MAH: So when you do a simple await of a promise, like the result of an async function call, promise prototype pollution is not capable of interfering with that.

MAH: However, if you have slightly more complex code where an async function returns a promise, like one from another async call, in that case, surprisingly, that promise prototype pollution will be able to interfere and grab the results.
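A minimal sketch of how such a pollution can observe a promise-with-promise resolution (illustrative, not part of the presentation; a getter is used so the lookup itself is countable). Today the promise-valued resolution reads `.then`; under the change discussed here, the native-promise case would stop doing so:

```javascript
// Install a countable pollution on `Promise.prototype.then`.
const originalThen = Promise.prototype.then;
let observed = 0;
Object.defineProperty(Promise.prototype, 'then', {
  configurable: true,
  get() { observed++; return originalThen; },
});

new Promise(resolve => resolve(42)); // plain value: `.then` is never read
const afterValue = observed;         // 0

new Promise(resolve => resolve(Promise.resolve(1))); // promise: `.then` is read
const afterPromise = observed;       // 1

// Restore the original data property so later code is unaffected.
Object.defineProperty(Promise.prototype, 'then', {
  configurable: true, writable: true, enumerable: false, value: originalThen,
});
```

The `.then` lookup in the second case happens synchronously inside the resolve function, which is exactly the interference point being described.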
MAH: Even more surprising is that if you add an await before the return, instead of directly returning the promise, now you are back to the original case, where promise prototype pollution will not be able to interfere. So you have the surprising situation where, for a return in an async function, the result value ends up being observable through promise prototype pollution if it’s a promise.

MAH: So what is going on? Well, when you translate that code to the equivalent promise code, it roughly translates as this; it’s not exactly this, but for practical purposes it translates as this. So where are the resolve functions coming from? They are coming from the resolving functions that are created in the spec against a promise. They mostly do resolve-once checking, and then go through the actual resolve logic. Here `ResolvePromise` is actually part of the resolve functions implementation; there is no operation of that name in the spec, it’s just extracted for readability. At the end of the day, it looks at the type of the resolution value and, depending on the type, it rejects, fulfills the promise, or potentially extracts a future settlement of the resolution value, if it’s a thenable, to become the resolution of the promise. So in the case of our `add` example, it is a simple non-object value resolution. The promise is just fulfilled with that value—great.

MAH: However, in the case of `inc` here, we are resolving with a promise. And because it is thenable, what we are going to do is grab the `then` from the native promise, and call it later to get the settlement value and to resolve with that settlement value. We are creating new resolvers and recursively using the same resolution logic. This is why promise prototype pollution is able to interfere.
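A rough, illustrative desugaring of the `add`/`inc` example (a sketch only, not the exact spec expansion; the function bodies are assumed):

```javascript
async function add(x) { return x + 1; }

// Roughly what `async function inc(x) { return add(x); }` translates to:
function inc(x) {
  return new Promise((resolve, reject) => {
    try {
      // Resolving with a promise takes the thenable path in the resolve
      // function, which reads and later calls the value's `.then` --
      // the interference point for `Promise.prototype.then` pollution.
      resolve(add(x));
    } catch (e) {
      reject(e);
    }
  });
}

inc(1).then(v => console.log(v)); // eventually logs 2
```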
MAH: If you are using the extra await, the promise prototype pollution does not interfere, because the async/await code translates to an internal then operation that is used before we actually call the resolve function. So if we look again at our resolve logic here, we never end up going through the `.then` side of the promise resolution, because we internally call the then-equivalent operation, which doesn’t trigger any promise prototype pollution.

MAH: What can we do to keep this pollution from interfering with this very normal async/await code? Well, one initial idea is we could maybe automatically await the return value of async functions. But we can’t, because that changes the semantics of the code: basically, if you had a `try` / `catch` / `finally` around there, now it would trigger where it wouldn’t before. This is the behavior in AsyncGeneratorFunctions, and that’s surprising for those. We can’t do that here. Another option: we could do a narrow solution, which is to special-case when the result value in an async function is a native promise, and internally adopt it without going through `Promise.prototype.then`. It’s possible, but it’s a weird pinhole that we would have to put through the result value handling.

MAH: What I am hoping we can do is solve this in the resolve functions for all promise resolutions. That would bring consistent promise behaviors throughout, whether in async functions or in code using promise constructors and resolvers. And this is actually what the Promises/A+ spec intended.

MAH: If we look at the Promises/A+ spec, and we go look at the promise resolution procedure, it actually gives us a step: if X is a promise, adopt its state. So really, that was intended so that an implementation can recognize its own instances and use implementation-specific means to adopt their state without going through the public `.then` mechanism.

MAH: So what would this look like? What is the change here? It’s straightforward.
It’s like, when we encounter an object resolution value, we actually check whether that value is a native promise, and whether its prototype matches the promise prototype, based on what we just agreed on in the change of the check in PromiseResolve. And in that case, we would use an internal then, the same internal then logic that is used by await, instead of using the `.then` on the value.

MAH: What that means here is that when resolving with a promise, like as the result value in this `inc` case, we would never actually trigger any `.then` of a native promise. And this would become unobservable again by promise prototype pollution. So you can start writing async/await code without having to think, oh, this is a return value, that’s a promise, so I should await it.

MAH: All right. So is this compatible? This is the reason I had to spin this out into a separate proposal. First and foremost, again, it does not affect resolution of non-native promise thenables. It does not change the number of ticks; that is left to another proposal. But it would make the faster promise adoption proposal much easier, because then that proposal can just focus on the number of ticks, instead of dealing with whether the change is observable or not.

MAH: It really only affects code that attempts to hijack the native promise behavior. Again, this is only observable if the `.then` of the promise instance is different from the original promise `.then`, which only happens if you have defined an own `.then` or modified the promise prototype.

MAH: So, malicious code. But also, possibly, async tracking libraries. The thing is that, as we have just discussed, this only affects non-awaited values. These libraries are already incapable of tracking adoption of promises when the await syntax is used.
MAH: So what really happens in the case of zone.js? It was brought up that zone.js relies on transpiling all async code to promise code in order to, one, avoid the await syntax and make it use basically the promise that they want. And because it relies on transpiling, the narrow solution that only changes how we handle the result value of an async function would be safe, because that would also be transpiled.

MAH: So zone.js replaces the global Promise with its own ZoneAwarePromise. Transpiled code or any manual promise code ends up using their promise implementation, which is not a derived native promise, but a thenable. They do, however, also replace the native `Promise.prototype.then`. And the reason they do that is to assimilate native promises into zone-aware promises, whenever a native promise might be encountered. Native promises might be encountered via some other APIs in the spec that return native promises, but also via host APIs that return native promises.

MAH: Really, it’s there to cover whenever you are doing a `.then` on one of these native promises. It is not expecting to be able to track whenever a native promise is resolved with another native promise, because in almost all cases you are not expected to be able to encounter the native promises in the first place.

MAH: The 262 spec itself never uses the resolver functions to resolve a promise with another promise. Actually, it internally never uses the chained promise result whenever it uses promise capabilities, except for `%Promise.prototype.then%`, which is user driven and which is the one overridden by prototype pollution. My hunch is that zone.js would be compatible with the change, but it’s again a thing to measure. If zone.js is an example, I suspect there might be other libraries or code that attempts to track promise adoption.
And because of the limitations in being able to track these, I am hopeful that this would be a compatible change. But we should check.

MAH: Yeah. That is it. I would like to ask delegates to go for Stage 1: exploring making promise adoption more consistent for the return value of async functions, but hopefully throughout how promise resolution works when native promises are used. And really, what I am asking again is whether web browser implementers are willing to add some instrumentation to verify that this kind of change is web compatible.

RPR: Justin has a comment on zone.js.

JRL: I support this as a necessary proposal in order for us to get faster promise adoption. This is the exact change that I would have to implement in that proposal, so separating this out into your proposal saves me a lot of work. But it keeps the proposal scoped: you are working with the synchronous changes, and faster adoption would be dealing with the asynchronous number-of-ticks changes.

JRL: So that greatly simplifies everything, I think. The main point of my topic: I tried to break zone.js with this implementation, by monkey-patching things. After several hours, I wasn’t able to break zone.js. The initial problem that we had identified was that we thought that if one promise adopts another promise, we would escape the currently wrapped zone. The thing that is good about this: zone.js has the expectation that you have to use `promise.then`. Or if you are using await, that will use `promise.then`, because they are using the transpilation. Because `Promise.prototype.then` is monkey-patched, and that is zone-aware, even if you escape zone.js for a promise, it’ll eventually call the monkey-patched `promise.then`, and then you’ll recapture the correct zone. Even if you could get access to the primordial promise constructor and then method, which is impossible in zone.js without hacking the library, you will eventually call `Promise.prototype.then`, which is the monkey-patched, zone.js-aware version.
And that will return you to the current active zone. It’s impossible to escape. I'm personally convinced there is no web breakage, when it comes to zone.JS. There may be another library that does monkey-patching, but they will be okay for the same reason zone.JS is okay. I am ecstatic that it works as well. And I fully support going forward with this proposal. + +MAH: Thank you. Yeah. I spent a lot of time looking at the zone.js code too and looking at the spec and understanding why this was fine. + +RPR: JSL has a big + 1. Stage 1. + +SHS: I was just saying that AsyncContext, `promise.then`, we are fine for the same reason Justin was saying. + +RPR: Thank you. And then we have got + 1 from JHD, DM. + +MAH: Fantastic. Any objections to this going to Stage 1? Does anybody want to spend time on this? + +RPR: I think there are no objections. Did you say the next step is you are looking for browser interest in collecting data? + +MAH: Yeah. I think similarly to the previous topic, we need to have instrumentation. It’s fairly straightforward to add instrumentation again on this, because effectively we are going through the same resolution motions and we are already getting the `.then`; we just need to check if it is the same as the intrinsic `Promise.prototype.then`. If that’s the case, adopting a promise would not change anything. But if it’s not the same, this change would no longer trigger that `.then`. It’s possible to count this. + +MAH: I would expect that it would be great to measure this at the same time as the PromiseResolve change. They’re related changes and they wouldn’t be the same counters, but I would expect to add instrumentation to browsers at the same time. + +RPR: Which is what MM was going to say. And we have got a message back from Olivier. + +MM: I’m sorry, yes. MAH covered everything I was going to say, so I am done. + +RPR: Apparently, Mozilla might have counters already. + +MAH: This is an excellent response from browsers here.
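JRL's argument can be sketched in code. This is a minimal illustration, not Zone.js source; the `currentZone` variable and the patching scheme are assumptions standing in for Zone.js's real machinery:

```javascript
// Minimal sketch of a Zone.js-style monkey patch (illustrative, not the
// real library): a patched Promise.prototype.then captures the zone
// active at the time .then is called, and restores it around callbacks.
let currentZone = "root";

const originalThen = Promise.prototype.then;
Promise.prototype.then = function (onFulfilled, onRejected) {
  const capturedZone = currentZone; // capture at .then-call time
  const wrap = (cb) =>
    cb &&
    ((value) => {
      const previous = currentZone;
      currentZone = capturedZone; // re-enter the captured zone
      try {
        return cb(value);
      } finally {
        currentZone = previous;
      }
    });
  return originalThen.call(this, wrap(onFulfilled), wrap(onRejected));
};
```

Since transpiled `await` desugars to `.then` calls, even a promise that temporarily escapes the wrapper re-enters the captured zone as soon as its `.then` is eventually called.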
+ +MAH: Awesome. Well, thank you very much, then. I am happy to have Stage 1. + +RPR: Stage advancement: we have a round of applause. Yeah. + +### Speaker's Summary of Key Points + +`Promise.prototype.then` pollution can interfere with native promise adoption when the resolution value is another native promise. One common example is an async function returning the result of a call to another async function. We’d like to internally adopt the state of native promises to remove this interference point. This is a pre-requisite for follow-up proposals related to promise adoption. There are potential concerns this change might not be web compatible, and MAH will work with browser implementers to add instrumentation. + +### Conclusion + +Native Promise Adoption advances to Stage 1 + +## Native Promise Predicate for stage 1 or 2 + +Presenter: Mathieu Hofman (MAH) + +* [proposal](https://github.com/mhofman/proposal-native-promise-predicate) +* [slides](https://mhofman.github.io/proposal-native-promise-predicate/slides/2025-09-stage-1/) + +MAH: I will move on to my last topic, which is also in the same theme. Very much related to the other two topics we just saw. As we have seen, the spec is able to recognize some of its own promise instances, and go into effectively a fast path, not using the `.then` of these promises when doing an `await`, but instead doing an internal adoption. For `Promise.resolve` it also passes through the value in those cases. + +MAH: What you have with thenables is the opposite: the `.then` of the thenable is always called, because it is not a native promise. And that also means if you try to call `Promise.resolve` on that, you get a new promise instance, and that will be different from the thenable. It also internally calls `thenable.then`. + +MAH: So the problem here is that while `Promise.resolve` looks like something that would be able to help us detect whether a certain value is a native promise or not, it has side effects.
Effectively, it is impossible to build a side-effect-free predicate that would be able to tell us if a value is a native promise. + +MAH: What I am looking for here is a predicate that would be doing the brand checking that `PromiseResolve` does internally. We have precedent for that in `Error.isError`, a pure brand check for error instances, which have some stack information associated. `Array.isArray` is a brand check that pierces proxies. I am not suggesting to pierce proxies for promises. It’s there to detect values that have different behavior when used with some other API, such as `JSON.stringify`. + +MAH: So what I am looking for is really a predicate that allows me to detect whether a value will be specially handled through `await`. The main question is, what does the brand check look like? Is it a new static method on the Promise constructor? What is its name? Maybe it is some less ergonomic brand check; as long as it doesn’t have side effects and is a clean brand check, I don’t care. I want a brand check. + +MAH: Whenever we talk about brand checks there’s a question about membrane transparency. This case is like `Error.isError`. It’s also fine because we already have a brand check, just one with side effects: it’s already possible to detect whether a value is a native promise or not, so this doesn’t change anything for membrane transparency. And in reality, membranes really want to pass promises by copy, like for errors: create a new promise on the yellow side, resolved with the promise on the blue side. And to do that themselves, they effectively need a predicate to know whether a value should be recreated like that, as a new promise, without having side effects. + +MAH: Yeah. So that’s it. Can I have Stage 1 for basically bringing a brand-checking predicate for native promises? + +KG: I support, despite generally disliking predicates. I think the motivation for this specific predicate is strong. + +MAH: Thank you.
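MAH's point about side effects can be demonstrated concretely. The `thenable` object below is a hypothetical example (not from the proposal) showing why `Promise.resolve` cannot serve as a side-effect-free native-promise test:

```javascript
// A thenable whose `then` getter records accesses: merely passing it to
// Promise.resolve observably triggers the getter, so "testing" a value
// this way already has side effects.
let thenReads = 0;
const thenable = {
  get then() {
    thenReads++; // observable side effect
    return (resolve) => resolve(42);
  },
};

const native = Promise.resolve(1);

// A native promise passes through Promise.resolve unchanged...
console.assert(Promise.resolve(native) === native);

// ...but the thenable gets wrapped in a brand-new promise, and its
// `then` getter has already been read synchronously in the process.
const wrapped = Promise.resolve(thenable);
console.assert(wrapped !== thenable);
console.assert(thenReads > 0);
```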
+ +JHD: I mean, to no one’s surprise, I am on board. The name should be `isPromise`. I think perhaps ten years ago, the term promise was generic. But at this point, after most of a decade of await and async functions killing promise subclasses and forcing every promise library’s promises to become native ones, I don’t think the qualifier is helpful. + +MAH: I agree. Technically, the naming is a Stage 2 concern, I believe. + +MAH: Yeah. So I see a few more voices of support. Justin. DM. Dmitry. And REK. I see a concern from CZW. + +CZW: Yeah. I am just concerned that the name `Promise.isPromise` would encourage people to check whether a value is a promise, rather than checking if it’s really a thenable, because in most cases, in userland, you only need to check if the value is thenable, from its `.then`, rather than `Promise.isPromise`. I am just concerned the name would encourage people to abuse it. + +MAH: Yeah. Well, I would argue that most userland code, except for libraries implementing a promise, should not check whether a value is thenable or not. That is my opinion. I am sure that people are tempted to figure out whether to do something asynchronously or not asynchronously. But, yes. Some code will see this and might be tempted to use these predicates to decide that they want to do some operations synchronously instead of asynchronously. In that case, using this predicate is a mistake for when they encounter a thenable that is not a native promise. That is probably the main reason to make this check non-ergonomic, or use a name other than `Promise.isPromise`. But I have honestly zero idea on what that would look like in that case. + +MAH: I am hopeful that documentation, like MDN, would helpfully steer users away from using this predicate for that purpose. I don’t know. I am open to suggestions on that one. + +KG: Yeah.
I mean, regardless of whether they should be checking is-thenable, they will use this to check. That’s what they will do and we cannot stop them from doing that. Which inclines me to name it `isNativePromise` to discourage this. + +PFC: Yeah. I want to + 1 that. I actually think the longer name makes it less likely to be used improperly when it pops up in somebody’s IDE. + +MAH: CM + 1. + +RPR: And then ZTZ says give them `isThenable`. + +MAH: I would like to answer that. `isThenable` is actually an even more wrong thing to do. Because if you look at the promises spec, you start having a TOCTOU problem with thenables. Technically, you can have `then` as a getter that will return you two different values if you hit `then` twice. This is the reason why, in implementations following the Promises/A+ spec, you have to get the `.then` property once, and never touch it a second time. Once you get it and you verify it’s a function, then you call it with the thenable as receiver. + +MAH: So `isThenable` is something people actually shouldn’t be doing. That’s why I am against it. + +RPR: The queue is empty. We have heard a lot of support. + +MAH: Great. So I believe I have Stage 1? + +RPR: Let’s do a last check. Any objections to Stage 1? Okay. No objections. So congratulations. You have Stage 1. + +MAH: Yeah. I was wondering if I could ask for Stage 2, because I believe the problem space here is pretty narrow. Really, what we want is a predicate; its name is really the main question. The spec text is pretty straightforward. So is there any objection to—any support for—Stage 2, given that this is a pretty straightforward proposal? I can show what the spec text looks like. + +RPR: So just to be clear here, we are going to Stage 2, but we are not pinning down the API name at this point? + +MAH: Correct. And I believe it is in scope of Stage 2 to tweak API names as well. It is really the shape that I am proposing: some predicate on the promise constructor, with a name TBD.
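MAH's TOCTOU point above can be made concrete with a hypothetical thenable (illustrative code, not from any proposal) whose `then` getter answers differently on each access:

```javascript
// A hostile thenable: `then` looks like a function on the first read
// (the "check") but not on the second (the "use").
let accesses = 0;
const sneaky = {
  get then() {
    accesses++;
    return accesses === 1 ? (resolve) => resolve("ok") : undefined;
  },
};

// A naive check-then-use pattern reads `.then` twice and is unsound:
const looksThenable = typeof sneaky.then === "function"; // read #1: true
const thenFn = sneaky.then; // read #2: undefined!

// Promises/A+ implementations instead read `.then` exactly once, check
// it, and call it with the thenable as receiver:
function adopt(value, onSettled) {
  const then = value.then; // single read
  if (typeof then === "function") {
    then.call(value, onSettled, onSettled);
  } else {
    onSettled(value);
  }
}
```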
The only step of the predicate is to perform the internal `IsPromise` operation. + +JHD: When a name is not a global or a prototype method, and thus the web compatibility risk is smaller, we have precedent to consider the name within Stage 2. + +JHD: And, like, not changing it afterwards. + +RPR: And so, yeah, that’s a + 1 for Stage 2 and happy to be a reviewer. Dmitry + 1. Do we have objections for Stage 2? JSL has support, and is also happy to be a reviewer. + +MAH: Awesome. Perfect. Thank you very much. So for reviewers, I heard Jordan and James. Perfect. How many reviewers do I need? Sorry. I don’t remember. + +RPR: I think the minimum is two. But we could always benefit from three. + +RPR: Thank you, JRL, as reviewer. Congrats. You have Stage 2. + +RPR: We went through quite a lot of slots. + +MAH: Yeah. I am glad this went through like that. Thank you so much to everybody, and happy to yield back some time. + +### Speaker's Summary of Key Points + +Native promises are recognized by some operations like `await` and their state adopted internally. It is not currently possible to detect in user code whether a value is a native promise without side effects. This proposal brings a clean predicate to brand check native promises. There are some concerns of misuse for such a predicate, which may be mitigated by choosing another name than `isPromise` for it. + +### Conclusion + +Stage 2, name bikeshed before next stage. JHD, JSL, JRL as reviewers. + +## Non-extensible Applies to Private for stage 3 + +Presenter: Mark Miller (MM) + +* [proposal](https://github.com/tc39/proposal-nonextensible-applies-to-private) +* [slides](https://github.com/tc39/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private-for-s3.pdf) + +MM: So this is non-extensible applies to private. I would like to ask for advancement to Stage 3. We are currently at Stage 2.7.
And last meeting, during the Stage 2.7 update, we established a general sense in the room, not a commitment, that the only thing that remained before going to Stage 3 was Test262 tests. + +MM: Okay. Recap: this is the entirety of the proposal—these two operations in the spec are the only means by which private fields are added to objects. And the idea here is that in both of these cases, if the object that the private field would be added to is not extensible, then we instead throw a TypeError exception. That’s the entirety of the proposal. Okay. This is a recap of what the stats looked like last time. + +MM: This is what the stats look like now. Everybody who has looked into this and expressed an opinion, in particular OFR from Google, says the numbers are insignificantly small, not a concern. And OFR, please correct me if I am misquoting or getting the sense wrong. + +MM: Okay. So this is a recap of, frankly—I am not exactly sure—what this is illustrating. But higher is worse, and the numbers here, again, not shown, sorry, are considered by OFR at Google to be acceptably small. And what has happened in the meantime is basically within the same range. + +MM: Okay. There was a long conversation, mostly in the issue, about Babel downlevelling, and finally the proposal from NRO—not a proposal exactly, but NRO actually implemented this downlevelling in Babel. Is that correct, NRO? + +RPR: I believe he may have had to step out. He stopped note-taking and is not on the call to speak. + +MM: In any case, whether he’s implemented this or not, his plan is to implement this in Babel. I will let the issue speak for what the algorithm is that turns this into that. But this completely sidesteps the problem. + +MM: So as I said, the sense, not a commitment, is that only Test262 approvals were needed for Stage 3.
So this is what has happened just recently, which is—I want to thank, in particular, RGN and OFR for help getting this to this state. And PFC, for giving the spec feedback; the spec feedback was substantial, it had some real points to it, and this is my explanation of what I did in response. And in response to that, Philip took a look and approved these changes. + +MM: So, any questions? At this point, I will stop recording and shut down the slide show. And then let’s do the questions. And then I will be able to also see the queue. + +OFR: Yeah. Just for the graph that you showed, that was correctly quoted. The numbers are super small. It’s relative, and the numbers are so small that it doesn’t show the number on the axis. + +MM: That’s why there is no numbering on the axis. That’s great. + +MM: So do I have any support for Stage 3? + +RPR: + 1 from OFR and DLM. + +WH: I support Stage 3. + +RPR: Thank you, Waldemar. + +MM: Great. Any objections? There are no objections. + +RPR: So congratulations, you have Stage 3. + +MM: Thank you. + +RPR: All right. Great. Would you like to either write or dictate a summary and conclusion? + +MM: Okay. I will dictate. + +### Speaker's Summary of Key Points + +MM: The remaining questions were: first of all, since time has passed, do the stats look any worse, and are the stats still acceptable? The stats are collected by Google, and OFR from Google agreed that the stats are still well within the acceptable range, with no negative consequences in what we have seen as time has progressed. The other issue was that, of the few problems that we did see, most of them seemed to be caused by the existing Babel downlevelling. We had a long discussion in the issue thread; NRO proposed an algorithm, of which I showed an example output that we all seemed happy with, and which NRO either has implemented or is planning to implement in Babel. And then the final issue was adequate Test262 tests.
I want to thank OFR for getting started on those, PFC for giving us good feedback on what needed to be tested, and RGN for helping me write the tests in response to PFC’s feedback. And those tests were approved for Test262. + +### Conclusion + +And with support and no objections, we are now at Stage 3. diff --git a/meetings/2025-09/september-23.md b/meetings/2025-09/september-23.md new file mode 100644 index 0000000..1301787 --- /dev/null +++ b/meetings/2025-09/september-23.md @@ -0,0 +1,996 @@ +# 110th TC39 Meeting + +Day Two—23 September 2025 + +**Attendees:** + +| Name | Abbreviation | Organization | +|---------------------|--------------|--------------------| +| James Snell | JSL | Cloudflare | +| Aki Braun | AKI | Ecma International | +| Ben Allen | BAN | Igalia | +| Chengzhong Wu | CZW | Bloomberg | +| Chris de Almeida | CDA | IBM | +| Daniel Minor | DLM | Mozilla | +| Istvan Sebestyen | IS | Ecma | +| Jordan Harband | JHD | HeroDevs | +| Kevin Gibbons | KG | F5 | +| Mark Miller | MM | Agoric | +| Michael Saboff | MLS | Invited Expert | +| Nicolò Ribaudo | NRO | Igalia | +| Richard Gibson | RGN | Open JS Foundation | +| Ron Buckton | RBN | F5 | +| Ryan Cavanaugh | RCH | Microsoft | +| Ashley Claymore | ACE | Bloomberg | +| Waldemar Horwat | WH | Invited Expert | +| Andreu Botella | ABO | Igalia | +| Bradford Smith | BSH | Google | +| Caio Lima | CLA | Igalia | +| Chip Morningstar | CM | Consensys | +| Dmitry Makhnev | DJM | JetBrains | +| Jase Williams | JWS | Bloomberg | +| Jesse Alama | JMN | Igalia | +| John Hax | JHX | Invited Expert | +| Justin Ridgewell | JRL | Google | +| Keith Miller | KM | Apple | +| Kris Kowal | KKL | Agoric | +| Mathieu Hofman | MAH | Agoric | +| Olivier Flückiger | OFR | Google | +| Rezvan Mahdavi H.
| RMH | Google | +| Romulo Cintra | RCO | Igalia | +| Samina Husain | SHN | Ecma International | +| Zbigniew Tenerowicz | ZTZ | Consensys | +| Linus Groh | LGH | Bloomberg | +| Devin Rousso | DRO | Invited Expert | +| Rob Palmer | RPR | Bloomberg | +| Philip Chimento | PFC | Igalia | +| Erik Marks | REK | Consensys | +| Steven Salat | STY | Vercel | +| Peter Hoddie | PHE | Moddable | + +## Opening & Welcome + +Presenter: Ujjwal Sharma (USA) + +## Intl Era Month Code Stage 2.7 Update + +Presenter: Ben Allen (BAN) + +* [proposal](https://github.com/tc39/proposal-intl-era-monthcode) +* [slides]( https://notes.igalia.com/p/2025-09-tc39-plenary-era-monthcode-update#/) + +BAN: This is a Stage 2.7 update. We will probably be asking for Stage 3 at the next plenary. The main thing holding us back right now is finishing up Test262. This is an update on what has happened since last plenary and what is landing on the repo. + +BAN: Okay. So the premise of era and monthCode is: Temporal supports non-ISO 8601 calendars. And traditionally, the behavior of these calendars has been defined outside of ECMAScript, since you could say it’s out of our domain. Although we can’t be the one to specify that, we want to pin down behavior in order to avoid implementation divergence. + +BAN: The goal with the proposal is that we don’t want to overspecify behaviors for matters for which ECMAScript isn’t the correct authority. But within that constraint, we want to minimize opportunities for discrepancies. + +BAN: With the additions to 402, we are describing the supported calendars, and valid ranges for eras and era years in those calendars. + +BAN: And the thing that we have been having a lot of updates on, as you can see in general, is years in lunisolar calendars. Basically, the quirk with lunisolar calendars is that there can be leap months, which can present problems. + +BAN: All right. So editorial changes.
Like I said, the big editorial changes involve taking a lot of stuff that previously had been described in prose and giving algorithm steps for it. I will spare you the small details except upon request, but these are the relevant PRs where we have done that. We are not asking for consensus, but I want to give an update on where we are. Probably going for Stage 3 next plenary. + +BAN: Okay. So we are reverting a change that we had previously made in July involving what happens when you use addition to go into a year with a non-existent month. So here, we are starting with a year and month that exist. This is a leap month in the year 5784 in the Hebrew calendar. But that specific month doesn’t exist in the following year. Previously, at the last plenary, we got approval for a change to go forward into the non-leap-month equivalent of that month. + +BAN: As a result of discussion, we have decided to go back to the previous behavior for `overflow: “reject”` when adding a year. We are going back, more or less, to re-establish consistency with leap days, just like in ISO 8601: we reject going forward from February 29 into a non-leap year. Likewise, we’re going back to throwing a RangeError, when using `overflow: “reject”`, moving forward a year from a leap month. So this is the summary of what I just said. I probably should have flipped forward to that slide before saying it. + +BAN: As for what provoked this: first of all, I think it’s persuasive to say we are going to return to the previous behavior, because this matches the behavior for leap days. One of the problems that arose is that the standard for what month you should go to, when going forward from a leap month, varies calendar by calendar. And it’s stuff that is sort of outside of our purview in a lot of ways. In the Chinese and Dangi calendars, you go backwards to the previous month, and in the Hebrew calendar you skip forward to the next month. Yeah.
We’re at this point planning on erring towards strictness, because if we loosen that later, we can go the other way. + +BAN: This is something that is a summary of a very, very, very long discussion. I’ve tried to boil it down to something short. So there’s the question of reference years here. In ISO 8601, it makes sense to use the year 1972 as a reference year for any date. If you have a month and a day—internally, if we have a PlainMonthDay, we’re representing that as an ISO 8601 calendar date. But if you have February 29, what do you use for the reference date? Well, you can’t use 1970, because there is no February 29 in 1970. So instead we can use 1972 as the reference year, because every day that can appear in the calendar exists in that year. + +BAN: This is straightforward for most calendars, like the Gregorian calendar. But it is not straightforward for some lunisolar calendars. For some lunisolar calendars, there’s no single year that contains every month, including every leap month, and every day, including the leap days within those leap months. And notably, some leap months are very, very rare. So in the Chinese calendar, the leap months in the winter almost never happen. And in these cases finding a reference year for an arbitrary month-day can involve going into the deep past. + +BAN: There’s lots of discussion on this. This is a snippet of a table that was generated by ABL, showing how far back you have to go to find a reference date for some combinations. If we are looking for the 29th day of the leap month after the 12th month in the Chinese calendar, you have to go back to 1404. And the problem with this is that we don’t assert the calendar data we’re providing is reliable this far in the past. If we need to find a reference year and go back to 1404, that’s actually not a good reference year. That’s not a year we can vouch for. + +BAN: Why does this matter?
In practical terms, consider the month with the month code M11L: that leap month in the Chinese calendar occurs in December 2033/January 2034, but within the range of our reliable calendar data it doesn’t occur before 1972. The best reference date for this particular month is going to be in 2033. But the current wording for finding reference dates is: the latest ISO 8601 date corresponding to the calendar date which is earlier than or equal to the ISO 8601 date December 31st, 1972. Well, it turns out that 2033 isn’t in that range. + +BAN: So our updated wording has these lines added: if there is no such date, use the earliest ISO 8601 date corresponding to the calendar date between January 1, 1973 and December 31, 2035. That’s far enough in the future for us to pick up the year in the Chinese calendar that contains that specific leap month. We are planning on making this concrete with a table of reference years. We don’t currently have that, but it will be done by Stage 3. + +BAN: But the most meaningful thing with this update is in fact just that the next step is to finish Test262, at which point we anticipate going to Stage 3. The TL;DR: lots of spec cleanup, responding to concerns by taking the steps that were written in prose and turning them into algorithm steps. We have dealt with edge cases involving reference years for certain lunisolar calendars, and most significantly, Test262 is currently incomplete. Finishing it is what we will have done by next plenary. + +DLM: Okay. The queue is currently empty. It’s great to see the work progress, with implementations starting to ship Temporal. Let’s give it a minute in case anyone wants to ask questions or make any comments. + +DLM: Okay. I guess we are good. Thanks, Ben. + +BAN: Thanks so much.
+ +### Speaker's Summary of Key Points + +* Intl era month code supports many calendars defined in CLDR + ICU4C/ICU4X +* We cannot specify arithmetic for every calendar, but we do want guardrails around implementations to avoid divergence between implementations + +### Conclusion + +* Just an update. No comments from the queue. + +## Deferred re-exports update + +Presenter: Nicolò Ribaudo (NRO) + +* [proposal](https://github.com/tc39/proposal-deferred-reexports) +* [slides](https://docs.google.com/presentation/d/1ok-qUnKrHK8ADWAHR071t5dglDkuKhXc0mPLCTS0meY) + +NRO: This is an update about the deferred re-exports proposal. I apologize for adding this way past the deadline, but the only consensus request is something that is not actually semantics. A recap of what the proposal does: it allows marking some export declarations as deferred. For example, here the components library is export-deferring a bunch of components from a bunch of internal files, and then we have, in our case, app.js. This means that we defer to app.js’s decision of what to load and execute: app.js only uses the Button, so we only load and execute the file that defines it, the `button.js` file. + +NRO: And similarly to `import defer`, when paired with namespace imports, everything is loaded but execution is deferred—the lazy execution thing. + +NRO: Again, a recap. This syntax only works with the `export defer` list-of-bindings `from` syntax. It doesn’t work with `export defer * from`, which is exporting everything. The reason being that the goal here is that the module specifier is only loaded if we use one of the exported bindings, but with `export * from` we don’t know what the module specifier exports until we actually load it. It also does not support the `export * as namespace from` form, because it would be ambiguous whether we are eagerly creating the namespace object and exporting it, or loading the specifier only when our importer is using it. + +NRO: Then a little bit of history. This was part of the `import defer` proposal.
Then, when `import defer` advanced, we left this behind because there was more work needed, and it was confirmed at Stage 2 as of June 2025. It might have been before. + +NRO: The main update about the proposal is that we have spec text now. You can go take a look. The spec text is complete: before, there were multiple TODOs and a bunch of bugs; it should be complete now. You can take a look at that. Just be careful when reading: it sits on top of the `import defer` proposal, so you need to take both into account. + +NRO: There is another open question right now, a discussion between me and GB about the behavior of `export * from`. You can go look at this issue in case you have opinions. The summary of the discussion is: if we are doing `export * from` some module, and that module has a bunch of `export defer` statements, is this `export * from` causing all of them to be loaded, because we are referencing all of them, or is it causing them to be propagated, with our importer actually choosing what to load? But you can take a look at the issue and give your opinion, if any. + +NRO: The reason we’re actually presenting here is that we need one more reviewer for Stage 2.7, and to get consensus on the reviewers. ACE and CZW previously volunteered. The three of us work together often, so it’s good that GB volunteered to be a third reviewer. And so I just want the committee to approve the list of reviewers. + +NRO: And yeah. I expect the queue to be empty. Yeah. So— + +DLM: Yeah. The queue is empty. + +NRO: Yeah. So I think we consider the third reviewer to be approved? Okay. Perfect. Just a timeline for the proposal: `import defer` is being implemented in all the browsers right now. There are in-progress implementations—if you have not seen a patch yet, the patch is coming soon. So expect that to be done in a few months, and this proposal will be ready for 2.7 when `import defer` is close to being done. + +NRO: Thank you. I am done here.
+ +### Speaker's Summary of Key Points + +* Export defer has complete spec text +* There is ongoing discussion for the behavior of `export * from` + +### Conclusion + +* Consensus on reviewers: ACE, CZW, GB + +## AsyncContext yield* + +Presenter: Nicolò Ribaudo (NRO) + +* [proposal](https://github.com/nicolo-ribaudo/proposal-async-context/commit/32856cb7ce1aaaa9f310f8f4b6532b93b459012f) + +* [slides](https://docs.google.com/presentation/d/1g7Xgf9uAxv5gZYvms23m2_hP513IRYNktz0fDiE1NmA) + +NRO, you have the next topic as well, the AsyncContext `yield*` behavior. 30-minute timebox. + +NRO: Okay. So this is an update about AsyncContext. The last updates were about web integration; this time, there is an update to the ECMAScript semantics, specifically about the context that propagates through `yield*` operations. + +NRO: Just a little bit of a recap about how AsyncContext currently works in generators. The proposal currently follows the principle that once you are in a given piece of code, the context is constant, and other code cannot find a way to leak a different context into your code. This means that if you have a generator function like on the left of the slide, let’s say that when this function starts running we have active context 1. Even if we yield and are resumed via `.next`, the active context will always be context 1. Even if, as on the right, we are calling the `.next` method using a different context. + +NRO: And while that works in most cases, sometimes that’s not actually what you want. One use case we found is that you might have some ambient AbortSignal, maybe provided by your framework or by something else. And you want to be able to abort some work when any of the ambient AbortSignals are aborted. So you want the one active when the generator started running, but also the one active when `.next` was called. And this is trivial to do in manually written iterators, because in manually written iterators, the `next` method just gets the context from its caller.
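The "manually written iterators just get the context from the caller" point can be sketched with runnable code. Here `currentSignal` is a plain variable standing in for an ambient `AsyncContext.Variable` (an assumption; the proposal's API is not needed to show the mechanics):

```javascript
// A hand-written iterator whose next() runs synchronously in its
// caller, so it trivially observes the caller's ambient state.
let currentSignal = null; // stand-in for an ambient AsyncContext value

function makeAbortAwareIterator(items) {
  let i = 0;
  return {
    [Symbol.iterator]() {
      return this;
    },
    next() {
      // Runs in the caller of .next(): sees the caller's ambient value.
      if (currentSignal && currentSignal.aborted) {
        return { value: undefined, done: true };
      }
      return i < items.length
        ? { value: items[i++], done: false }
        : { value: undefined, done: true };
    },
  };
}

const it = makeAbortAwareIterator([1, 2, 3]);
console.log(it.next().value); // 1
currentSignal = { aborted: true };
console.log(it.next().done); // true: the iterator saw the caller's signal
```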
+ +NRO: When it comes to generators, there is a way to make it work, which is that you basically create your own wrapper that wraps the generator’s iterator internally, intercepting its `.next`, `.return`, and `.throw` methods. When the generator runs, the yield will be intercepted to capture the snapshot from the caller. In this example, this `withNextContext` wrapper would expose to the running generator the context in which we called the `.next` method corresponding to this yield expression. This has some problems, because wrapper functions are not always easy to use. They don’t work with class methods, because there isn’t syntactically a place to put the wrapper, and it means you have an extra function in the stack traces. + +NRO: Okay. So that was the problem we have and the current workaround. Before I go into the solution I am proposing, let’s look at some pseudocode that describes the spec text for how yield in generators works. You can see here that when we do `yield 1`, spec-wise we have this value 1 stored in this variable, and then, in step 2, we take a snapshot of the context. Then we pause the generator body, sending yieldValue to our caller, and resume the caller. Then, when the caller does its thing and calls `.next` again, it resumes us with the value passed to `.next`; and before returning the result (it’s not returned from the generator, it’s the result of the yield expression), we restore the context snapshot we took right before pausing. And when it comes to `yield*` we do something very similar. We get the iterator and take a snapshot; `yield*` is not just one value, it’s a loop that keeps calling `.next` on the iterator. It then pauses the generator body and sends the value from the iterator to our caller. When the caller resumes us, it restores the context and continues iterating, calling again into `.next`. + +MM: Nicolò, I am going to interrupt because I have a clarifying question. + +NRO: Yes. + +MM: AsyncContextSwap, I need to know what that does.
The word swap, I generally use that when things are exchanging places.
+
+NRO: Yes. Yes. So what this operation does is set the current async context to the one passed in as the argument to the operation. And it actually returns what was the previous async context; in this case we are just not using that, because we don't need it.
+
+MM: I see.
+
+NRO: And this AsyncContextSwap is not an operation that is exposed to user code.
+
+MM: Thank you.
+
+NRO: So what you might notice from here is that when we call `.next`, it is guaranteed that the active context is always the one stored in the genContext variable, because after resuming, in step 4.c, we always restore that context before continuing the loop.
+
+NRO: The proposed change is to stop restoring the context within the loop, and instead just restore it when we are done with the whole loop. What are the consequences of this? From the point of view of the body of `fn`, nothing changes: the body continues running after we are done with this whole `yield*` thing, after step 6, and when the body of `fn` is running, we restore this genContext. However, the change is visible to the `.next` calls on the inner iterator. Specifically, if our generator `fn` is currently paused on the `yield*` and the caller calls `.next`, it will basically forward the call to the inner iterator (it will run step 4.a), and it will forward that call in the caller's context. The context of the caller of the outer `.next` is propagated to the inner `.next`.
+
+NRO: Now, how does that solve the problem? Well, it means that if we consider the cases from the beginning, where we want to read the ambient AbortSignal variable that was active when `.next` was called, then, instead of wrapping the whole generator function with a wrapper, we can just use `yield*` with some helper iterator that is adjusted so it can read the context when `.next` is called and return that context, like here on screen.
And then we can, like, use both, in this case merging the signal from the generator body with the one that is coming from the `.next` call. Note that this is still not changing in any way the context in the generator body. It just makes it possible for the inner iterator to read the caller's context.
+
+NRO: Another use case I didn't mention, but similar, is calling a callback in the caller's context. And again, you can just use yield* with some inner iterator that does that: it calls the callback in that context.
+
+NRO: So yeah. This is the proposed change. I also want to note that if the inner iterator is itself a generator, all of this does not actually matter, because the generator body of the inner iterator has its own context. This change is relevant when the inner iterator is written manually. Are there any questions? Mark?
+
+MM: Yeah. So you're only proposing this change, and all the code things that you just showed to make use of it? Let's call those helpers. You are not proposing any helpers?
+
+NRO: No. That's all user code. This is just what is making it possible to write those helpers in userland.
+
+MM: That's great. Let me make sure I understand. You're not actually doing anything that enables combining contexts? The helpers cannot actually combine contexts.
+
+NRO: No.
+
+MM: What you are doing is, you are reifying the other context so that you have both contexts in hand, so that you can—you know, use one or the other.
+
+NRO: Yes. The helper could, for example, just take a snapshot of the outer context and return it to the generator, and the generator has the two contexts and decides what to do with them.
+
+MM: Okay. Thank you. I support.
+
+NRO: Okay. Is there anything else? If not, yeah, we have consensus on changing the current yield* behavior from what is on this slide to what is on the other slide, moving the AsyncContextSwap out of the loop.
+
+NRO: We are actually Stage 2.
Given that these are quite tricky things to deal with, and that the spec text is mostly stable and mostly blocked on the web integration, we are considering ourselves almost at Stage 2.7. So that's why I am just checking that everybody is fine with this.
+
+DLM: The queue is currently empty.
+
+NRO: I assume that's a yes.
+
+### Speaker's Summary of Key Points
+
+* We propose a change to how yield* propagates the AsyncContext, specifically propagating the context of the caller of `.next` to the inner iterator passed to yield*.
+
+### Conclusion
+
+* There were no concerns with the proposed change.
+
+## Continuation: Intl Era Month Code—Normative changes
+
+Presenter: Ben Allen (BAN)
+
+* [proposal](https://github.com/tc39/proposal-intl-era-monthcode)
+* [slides](https://notes.igalia.com/p/2025-09-tc39-plenary-era-monthcode-update)
+
+DLM: Next up we have a brief continuation, because Ben would like to ask for consensus on the normative changes that he presented in the Intl Era Month Code topic. Apologies for the process troubles.
+
+BAN: All right. So I wanted to grab this continuation to ask for consensus on the two changes that I mentioned. Let me present my screen.
+
+BAN: So the first is reverting the change we made last plenary, the one I talked about: we are going back to throwing when advancing a year from a leap month in a lunisolar calendar where that leap month does not exist in the following year. This works the same way as leap days in solar calendars such as ISO 8601. Then we are returning to the previous behavior, where this code here will result in a RangeError, because this month doesn't exist in the year that follows 5784. I would like to ask for consensus on this change, which reverts to our previous behavior.
+
+DLM: Okay. At the moment, I am the only person in the queue. SpiderMonkey supports these normative changes. It'd be great to hear support from someone else as well.
+
+DLM: We have +1 from ACE. Thank you, Ashley. Fantastic.
+
+BAN: All right.
The other one that we would like to ask for consensus on is the one allowing for reference years after 1972. For this one specific case, the previous wording was—let me share. All right. So the previous wording was to pick a reference date that is earlier than or equal to 1972. Our new wording allows for reference dates going up to 2035, and that's specifically to pick up the case of this leap month in the Chinese calendar: there is no in-range reference date before 1972, but there is a reference date before 2035.
+
+DLM: Okay. The SpiderMonkey team also supports this change.
+
+BAN: All right.
+
+DLM: Once again, it would be nice to have a second person for support. Once again, +1 from ACE. Thank you, Ashley.
+
+BAN: Thank you very much. Thank you for the overflow time for asking for consensus.
+
+DLM: No problem.
+
+### Speaker's Summary of Key Points
+
+The speaker requested the following normative changes on Intl Era Month Code:
+
+* Normative: Revert change to month code constraining [\#82](https://github.com/tc39/proposal-intl-era-monthcode/pull/82)
+  * Reverts [Don't reject when adding years and landing on nonexistent month \#67](https://github.com/tc39/proposal-intl-era-monthcode/pull/67)
+* Allow reference years after 1972 for calendars that need them
+  * There's been lots of discussion on the Stage 2.7 review feedback issue: [Provide guidance on calculating a reference year for MonthDay when not provided](https://github.com/tc39/proposal-intl-era-monthcode/issues/60).
+
+### Conclusion
+
+* The committee provided consensus to make the requested changes
+
+## Update on proposal-await-dictionary
+
+Presenter: Ashley Claymore (ACE)
+
+* [proposal](https://github.com/tc39/proposal-await-dictionary)
+* [slides](https://docs.google.com/presentation/d/1TaLCZt2jJtrVY1PjFd49jlBW-vAVil4NYsOzAyYPP7Q/)
+
+ACE: Hi, everyone. My name is Ashley. Delegate from Bloomberg.
And I am here not to ask for consensus on anything, but just to give you an update, and then get the feelings of the committee on the next steps for this proposal.
+
+ACE: So we last talked about this proposal in plenary back in July last year. That was in the context of the iterator helpers proposal. And then we talked about this proposal more properly way, way back in 2023. It's been a while, so I will give a bit more of a reintroduction to this, to put ourselves back in March, in lovely, sunny Spain.
+
+ACE: So the problem that this proposal is centred around is, not this code, this code is fine, but over time, the code might start to await each step, and now we're in a classic waterfall situation where we've not started the second request until the first one is finished, even though there's no dependency between the two, and we are doing this because this is the easiest thing to write. Someone might suggest on a PR that the developer rewrite it to use `Promise.all`. Now this is fantastic: we are doing the two requests in parallel, not holding one up, and not introducing an unnecessary dependency. But when the code looks like that, when someone comes along and adds another action, they can just keep following the pattern. New logic comes in over time, and it starts to become not very elegant or easy to reason about.
+
+ACE: Mostly because this is, like, an ordered API. So the original motivation for making this proposal: I found code similar-ish to this, and I wanted to be confident that we were destructuring in the right order. And because there's so much logic here, you're literally counting the lines on the screen trying to make sure that the third variable down, the feature flag, does line up with the third promise that we are passing into the array. That's not what you want to be spending your time doing when reading code.
+
+ACE: So what you could do today is separate those two phases.
We could, like, launch all the requests, collect all the promises, and then separately await them. This is okay. It's kind of a bit annoying that you have all these extra variables in your scope. Like, further down the file someone might re-resolve a promise that's already resolved. But another issue here is that we're only attaching the `.then` on these one at a time. So if the first promise rejects, and then a later one also rejects, then there was never any handler attached to the other ones. So you could get a background unhandled promise rejection, and in some environments this would crash your application; it thinks you are not handling the error.
+
+ACE: That goes away if you then use `Promise.all`, because it handles all of them. You still might be swallowing an error, but you've at least done that explicitly: you are saying, I am going to group all the promises together and handle them as a set; if one fails, it doesn't matter if others fail.
+
+ACE: So then with this we are back to where we were at the beginning, of having to use `Promise.all` and keep track of the list of things, and we're now writing quite a lot of stuff.
+
+ACE: Mark has a clarifying question.
+
+MM: Yes. Thank you. You used a phrase, probably immaterial to the point you're making, but I couldn't understand it in context. "We resolve a promise that's already resolved."
+
+ACE: Yeah. I could have said that much clearer. So with code like this (referencing presentation), personally, I find it a little upsetting that in the rest of the code, you can imagine, underneath this, both variables are still in scope. We still have `sessionP` and `session`, and we can't, like, remove `sessionP` from the scope. When typing down below, you might use `sessionP` again, which is unnecessary. We almost want to delete `sessionP` from the scope. There's no reason for it to still be there. I wasn't very clear about that.
+
+MM: Thanks.
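The pattern being discussed can be sketched in userland; this is an illustrative version of the proposed `Promise.allKeyed`, not the spec algorithm (the `allKeyed` function below is a hypothetical helper):

```javascript
// Hedged sketch of the proposed Promise.allKeyed, written in userland.
function allKeyed(obj) {
  const keys = Object.keys(obj);
  // Promise.all attaches a handler to every promise at once, so a second
  // rejection does not become an unhandled background rejection.
  return Promise.all(Object.values(obj)).then((values) =>
    Object.fromEntries(keys.map((key, i) => [key, values[i]]))
  );
}

// Usage: destructure by name instead of by position, e.g.
// allKeyed({ session: fetchSession(), flag: fetchFlag() })
//   .then(({ session, flag }) => { /* ... */ });
```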
+
+ACE: So the proposed solution for this proposal is an alternative to `Promise.all`, which is a named API: instead of passing in an array, or more precisely an iterable, we pass in an object where we've named each of our promises. So now when we destructure the object, we destructure by name, and it's much harder to mess this up. We can destructure in any order or ignore some entries. It's just a little bit easier to read and to get right. Very small, very easy to implement yourself, but a nice, convenient thing to have in the language.
+
+ACE: The naming of it, `allKeyed`, where did that come from? That was specifically what we talked about in July last year: ignoring the domain of promises, the actual shape of the API is identical to the proposed Iterator static helpers, where you have `Iterator.zip`, which takes an iterable of iterables and gives you back an iterator whose items are arrays, and `Iterator.zipKeyed`, where you give it a bag, and each item in the iterator is then an object, not an array. So from a type perspective, they're exactly the same. What we talked about with `Iterator.zip` was: should we overload? What should the name be? With that proposal we landed on `Iterator.zip` being the ordered one and `Iterator.zipKeyed` being the named one. And if we do that for that proposal, I would want to mirror it for this proposal, so that's what we have done.
+
+ACE: So just some general stats around what people are currently doing. This API has been around in the promise libraries out there for a long time, and you can see it being used quite a lot, with lots of downloads. This is definitely a thing people are reaching for.
+
+ACE: There is spec text written for the method. So theoretically, I could ask for reviewers and ask for 2.7. But it's been a while since we talked about this. So instead of doing that, I wanted to gauge the committee's feelings.
Like, do we think the proposal is the right shape? Is it the right name? Should we get reviews and go for 2.7? Or should we actually continue discussing this more broadly? Such as: should there be other methods in this proposal? We have `.allSettled`—should there be a keyed version of `allSettled`? I am happy to do this. Personally, I was hesitant because I haven't come across that particular use case a lot, and mostly because `Promise.allSettled` is a niche API. I didn't want to add it just for completeness. But if we as a committee think it's worthwhile, and we are not just kind of filling out the cross-product, then I am very happy to add that as well. So let me know if you have opinions on including that.
+
+ACE: And a wider conversation: should this be an API, or should we be doing a syntactic approach? We were talking about it internally at an Igalia/Bloomberg plenary preparation meeting. Even if there was syntax, you could still have the API, because the syntax really only works when there's a known list of things. When I was looking at code, there were use cases where you have a kind of dynamic object with promises in it. So I think even if we did want to pursue syntax, I would rather do that as a separate proposal, rather than in this proposal.
+
+ACE: So that's everything. And I can see we have people on the queue.
+
+DLM: First up, Nicolo.
+
+NRO: Yeah. So for the problem statement of, like, having multiple promises and wanting to store them all to variables: `Promise.allKeyed`—I don't think this is the best solution for that. It's actually a few too many lines, having to repeat the names twice and creating the object inline. However, I am still happy with how the proposal is, because I would use this more frequently not in the destructuring context. I write quite frequently code that does `Object.fromEntries(await Promise.all(Object.entries(...)))`, something like that, to implement this method inline. So yeah.
If we were focussing on that problem statement, maybe a different solution would be better, but I am very happy with what is proposed here.
+
+NRO: Thanks.
+
+DLM: Next we have Kevin.
+
+KG: Yeah. I am in favour of this with the current shape. I do think we should have `Promise.allSettledKeyed`. It doesn't come up as much, but it does come up, and exactly the same motivation for `Promise.all` applies to `allSettled`. There's literally no difference. So we should do both. I don't think it's actually very much more work for implementations to do both, and it's just a weird thing to leave out.
+
+DLM: Great. Thanks, Kevin.
+
+ACE: Yeah. If the majority of people feel like that, I certainly can feel that way too.
+
+DLM: Next we have JSL with "+1, but I think 2.7 is premature".
+
+DLM: And then ZB?
+
+ZTZ: Yes. Also +1 to having both methods. I believe `allSettled` is the more important one, because it makes `Promise.all` somewhat redundant: we could have only that one, and it would suffice, but not the other way around. So I would say both is a good choice.
+
+DLM: Thanks, ZB. Mark?
+
+MM: Let me start by responding to ZB. `allSettled` does not subsume `all`: `all` rejects early when one of its promises rejects; `allSettled` does not. You still need both. Neither one subsumes the other.
+
+MM: Okay. Now, Ashley, could you go back to your syntax slide? Okay. The second bullet point, if that's what it is, in syntax, I don't know what it's trying to say.
+
+ACE: So the `await.all` array syntax and—
+
+MM: You are suggesting that we might choose syntax that depends on the syntactic shape of the literal following `await.all`?
+
+ACE: Yeah. One atomic thing that the—it wouldn't be just that there's an expression on the right-hand side. We would actually look at it; it must be one of the two shapes. Just as a wild idea.
+
+MM: Okay. Saying you can't have an expression where you can have a literal is just weird. In any case, I am glad you are not proposing the syntax in this proposal.
And I do support the—you know, `allKeyed` and `allSettledKeyed` together.
+
+ACE: Okay. Thanks, Mark.
+
+OFR: Yeah. We actually liked the proposal in general, but the issue we had was with this clumsy way of repeating the identifiers. So we were actually in favour of adding some syntax for it. But I think you convinced me with your point that there is still the opportunity to add syntax later, and the API in itself is useful anyway.
+
+ACE: Great. Yeah. I completely agree. It is frustrating having to repeat them, and I would be interested in exploring the syntax space if the committee doesn't think that it's going to be an uphill battle. I am pleased you are convinced that the API—it's not that, because of the advantage of syntax, you don't want to do the API. I am pleased you are happy the API can live on its own too. That's fantastic.
+
+ACE: So it looks like the queue is empty. I think it was JSL that said 2.7 is premature. I completely agree, especially because it seems like we're in favour of adding the second method, so there's more spec text to write. One thing I will definitely need is reviewers. Apologies if people had already put themselves forward for review, but I don't think we had any; if you already had, please remind me. JHD. Great. Thanks. And if we could get a second person. JSL as well. Excellent.
+
+### Speaker's Summary of Key Points
+
+* Re-presented the problem statement of trying to await multiple promises in parallel
+* Discussed the updated naming of the proposal, as well as potential additional scope such as extra methods and syntax
+
+### Conclusion
+
+* We agreed that we should include an additional method, `allSettledKeyed`
+* Not going to pursue adding syntax as part of this proposal.
+
+## Import Bytes for stage 2.7
+
+Presenter: Steven Salat (STY)
+
+* [proposal](https://github.com/tc39/proposal-import-bytes)
+* [slides](https://slides-import-bytes-2025-09.vercel.app/)
+
+STY: I'm Steven.
On the internet I go by @styfle. I work at Vercel. I am presenting the Import Bytes proposal. I am the champion, and GB is an author. It's currently Stage 2, and I am asking for Stage 2.7. I will go through the checklist at the end, so let's start with a refresher on what the proposal is.
+
+STY: So the proposal here is basically built on top of the existing Import Attributes proposal, as well as the proposal for Immutable ArrayBuffers. It adds support for importing arbitrary bytes. This gives us a way to write once and run everywhere, in all JavaScript environments. The examples here use `type: "bytes"` to import a png photo, and you get a Uint8Array backed by an Immutable ArrayBuffer. Last time this proposal was presented, it was `type: "buffer"`, and we changed it to `type: "bytes"` because we are no longer returning an ArrayBuffer; we now return a Uint8Array that's backed by an immutable ArrayBuffer.
+
+STY: Why do we need this? We want to be able to import raw bytes for arbitrary files. Similarly to how we added JSON modules to read a JSON file, we want similar syntax for arbitrary bytes. This provides an isomorphic, universal way to read arbitrary binary files, which you can then process later. One example I came up with is reading an image, or reading a font, and processing it with a JavaScript library like satori: it runs anywhere JavaScript runs, but it needs to accept the bytes as input. So there are lots of use cases. Those are just two file types, but obviously, any file would work with this proposal.
+
+STY: What's the problem with this today? Well, you need to know about the different file read operations on different runtimes. This example is a little bit of an exaggeration now that Deno and Bun support Node APIs, but there are still new runtimes coming out implementing the JavaScript standard, and they have to decide what they are going to do: how they are going to read a specific file.
And then obviously, browsers only have fetch(); you can't perform actual file reads. So at the very least, you need this fetch fallback to support browser environments. This example is effectively a polyfill, and we would basically need to include it any time we want to read a file in an isomorphic way. I also wanted to note that while Deno and Bun are adopting the Node fs API, in the browser it won't work. We need something that works in a universal way. We are ultimately maximizing portability and reducing boilerplate with a single line of code.
+
+STY: The nice thing we get with this is that bundlers can now optimize this much more easily, because now there's a standard way to understand that this file should be included with the JavaScript file that imports it. A bundler can statically analyze it and inline that file into the bundle. So now you don't need to distribute multiple files. You can imagine something like a CLI written in JavaScript, or JS bundles that are served to the browser, or maybe you're bundling JS to run in the backend and you want one file to distribute. Bundlers can take advantage of the expectation of a Uint8Array and inline it.
+
+STY: The behavior basically follows along with how it works for JSON modules. If you give a key `type` and value `bytes`, the host must either fail the import or treat it as a Uint8Array. In the browser, the Sec-Fetch-Dest header will be empty, and the response content type will be ignored; it won't change the behavior of the import. Similarly, in a local environment like Node.js it's a file read, and the extension won't matter; you still get the same bytes.
+
+STY: So some prior art. Deno did ship this behind a flag, returning a Uint8Array. And we see others, like webpack, where you have to use the asset loader. They have a couple of different ways to do this, but it's common for things like SVG.
The Moddable SDK, a JavaScript runtime for embedded systems, uses the Resource class, which will basically inline the binary data for the logo image. Parcel is similar to other bundlers; you can use a data-URL. Both Bun and Turbopack have PRs to implement the `type: "bytes"` proposal, and I believe Turbopack already merged it behind a flag.
+
+STY: Why not mutable? Why are we using an immutable ArrayBuffer? One of the reasons is memory issues: as Deno implemented this, they pointed out there would otherwise be multiple copies of the buffer in memory. We can also avoid conflicts between different import types, and there's unexpected behavior where multiple modules importing the same buffer could cause detachment issues; if you are going to send that to `postMessage` or call `transferToImmutable`, that's going to be problematic. And there are also resource constraints: embedded systems like Moddable want immutable because they can utilize ROM instead of RAM. There was lots of feedback on the first presentation, and it was favorable among everyone to keep this immutable.
+
+STY: So then why are we using Uint8Array as the return type here? The first, biggest reason: Node.js Buffer is compatible with Uint8Array, so existing code in the ecosystem can accept this without transforming it into something else, which is really nice. And if you take a fetch response and call `.bytes()`, you get back a Uint8Array of the bytes. There's also a W3C recommendation to use Uint8Array as the binary data type, and we're seeing it more commonly among JavaScript APIs. The reason why we're not returning the ArrayBuffer directly is that you can't read it directly; you have to add a view on top. It just makes sense to go ahead and provide the most common view, Uint8Array. We didn't choose Blob because it's a W3C standard; it also has a MIME type, and we will not use the MIME type, so it didn't really make sense. And same for ReadableStream.
It's part of a different standard, but also there's no helper method to even turn it into a Node Buffer, so you end up with boilerplate again for the common use case. Last time we talked about phases; we ruled them out because the header would have to be script. We don't want that; we want to keep it empty, because it isn't a script. So in summary: we get isomorphic file reads, reduce boilerplate, maximize portability, get the bundler optimization opportunities, which is great, and memory safety for environments that want to utilize that.
+
+STY: I do want to talk about the Stage 2 checklist, because there are a lot of things here and I want to make sure that I got everything. We talked about some of the minor details of API naming: how to fail the import, and the rename from `type: "buffer"` to `type: "bytes"`. We got reviews and some experimental implementations; I talked about Deno, which has it behind a flag, and the bundlers that have PRs. I have an HTML spec PR that's in draft right now, and I have some feedback on it. The spec is complete, and it's been reviewed by the three assigned reviewers. Then lastly, the editors need to sign off; one of the three has signed off. I think that's everything.
+
+DLM: Okay. A number of items on the queue. First up is "spec text looks good to me" from MF.
+
+KKL: I'm super excited to see this. I have been doing stuff like this for ten years. This is definitely very well motivated, especially, as you say, about making importing bytes portable. I also want to elaborate on the motivating case: not only does it make it possible to do this with uniform syntax, it also addresses the concern that not all modules should have an I/O capability, much less need an I/O capability, to do this. This is going to be a vast improvement.
+
+STY: Thank you.
+
+MM: Just wanted to clarify: earlier it sounded like, on the browser, what this is doing is kind of the same thing as what fetch is doing.
One really important thing about this, that's really unique to it, is that for a static import you get the bytes right away, you get the bytes synchronously. And that's done by the module walk, before module evaluation. So in any case, I enthusiastically support this. Full steam ahead.
+
+STY: Thank you for pointing that out. I will add that. That is a good callout for the synchronous import.
+
+JSL: Just want to echo the support for this. Cloudflare Workers has had a data module and byte module for a number of years now. This addresses the biggest complaint with those, which is that the ArrayBuffer is mutable; this one is immutable. This will be great going ahead, and absolutely in support of moving forward.
+
+NRO: We also support this proposal. Quoting PFC, who had to step away: "I wholeheartedly support import bytes".
+
+DLM: Thank you. We have +1, yeah, from CM. And +1 for Stage 2.7 from DJM, and "+1 for 2.7, blocked on Immutable ArrayBuffers" from JHD. That's it for the queue. Lots of support for 2.7. Give it a moment in case anyone has any concerns. Question from NRO?
+
+NRO: Just a question, I guess for implementers: what is the current status of immutable ArrayBuffer?
+
+DLM: For our case, we have the implementation but we haven't shipped it yet.
+
+NRO: Thank you. I will assume it's similar for the others.
+
+MM: Unless PHE or PST is here from Moddable, I can speak for Moddable. Moddable has already implemented this—I'm sorry, Moddable has implemented immutable ArrayBuffers, not this. But I think this will slide in very easily.
+
+STY: If I remember correctly, Immutable ArrayBuffers is currently at 2.7, so if this goes to 2.7, it will be kind of stuck there unless Immutable ArrayBuffers advances; is that correct?
+
+MM: If RGN is here he should interrupt me and clarify, but Immutable ArrayBuffers last time achieved conditional Stage 3. In other words, it was officially approved pending Test262 test approval.
And RGN came up with an extremely exhaustive test plan that he's making great progress filling out. But it's not complete yet.
+
+DLM: Just to clarify, we're waiting until it's fully at Stage 3 before we would ship.
+
+JHD: In general, if both proposals were at 2.7, then the one being blocked, import bytes in this case, could not advance beyond the immutable ArrayBuffer one. But it seems like they will be moving along properly.
+
+KG: But also, the other thing that is necessary to get to Stage 3 is tests. And you can write the tests right now; you don't have to wait for anything in particular to happen before you can write the tests. And in principle they could advance to Stage 3 together, if the tests are ready and approved.
+
+JSL: Just worth pointing out that runtimes like Workers don't really need the engines, like V8, to actually implement this particular spec. We can do this with the existing machinery; we just need the immutable ArrayBuffer.
+
+DLM: We've heard lots of support and no objections. Clarifying question from STY.
+
+STY: Just wanted to clarify: to get 2.7, it says the relevant editors have signed off. Do I need SYG to sign off? I don't know if I got approval.
+
+KG: No, you don't. The editor group signing off doesn't necessarily mean every individual editor signing off.
+
+STY: Got it, thank you.
+
+### Speaker's Summary of Key Points
+
+* Provides an isomorphic/universal syntax to read arbitrary bytes
+* Addressed earlier feedback by switching `type: "buffer"` to `type: "bytes"` and returning Uint8Array
+* Spec complete
+* Asking for Stage 2.7
+
+### Conclusion
+
+* Stage 2.7 achieved
+* In addition to the existing motivation, should mention synchronous import, which is not achievable with fetch()
+* Can begin writing tests now and preparing for Stage 3
+* Cannot advance further than the Immutable ArrayBuffer proposal, since this proposal depends on it
+
+## Continuation: Convention: strings-as-enums are kebab-case
+
+Presenter: Kevin Gibbons (KG)
+
+* [proposal](https://github.com/tc39/how-we-work/pull/165)
+* [slides](https://github.com/tc39/how-we-work/pull/165)
+
+KG: I just wanted to continue this item because we didn't come to a resolution; I was hoping that we could resolve on something right now. I don't think that Shane is in the room, but I believe he made his position clear. I would like to propose that we adopt this convention for 262 going forward, except in cases where there's a specific reason to deviate. For example, because Temporal is making use of an enum that is already present in Intl, it makes sense for Temporal to spell the values in that enum the same way they are spelled in Intl. But other than that, I would like 262 to adopt this convention going forward. And I'm not proposing that this change be made for Intl, given that they already have a number of enums, or at least non-zero enums, that follow a different convention; it can possibly be decided on a case-by-case basis for Intl going forward. I'm also not proposing at this time to update any existing APIs to support multiple different casings. Personally I don't really love that solution, but I'm not opposed to it if someone wants to argue for it in other cases. Which is to say: I'm suggesting only adopting this convention for 262 going forward, and not proposing to change anything else at this time.
As I recall, there were a number of items on the queue. Perhaps we can go through that.

DLM: We restored the queue. First up is MM: “where we accept both, one should be understood as deprecated”.

MM: Since this proposal only proposes to apply it to new things, the case I was concerned about doesn’t come up. It can come up later if we decide to extend this, but that doesn’t have to be part of this one. So I support.

DLM: NRO had a reply.

NRO: That’s something else. Do you remember what it is—

CDA: The dash idea of Unicode?

NRO: Never mind.

KG: MM brought up the fact that some string values in some APIs are dash-delimited, but you still might want to parse them. And dash is an easier thing to parse on than casing. Maybe that was—

NRO: It was in Amount, where the units are not full units now, but there are some potential follow-up proposals where dash-delimited strings would not be understood as enums, but as the compound units Unicode defines by composing base units with dashes, like kilometres-per-hour. Those things should not be understood as enums.

MM: I want to just jump in for a moment. One of the things that I raised is, since this is just an advisory document anyway, would you be willing to say the things between the dashes are alphanumeric?

KG: I’m happy to say that should be preferred absent a strong reason to deviate, yes.

MM: Good, thank you.

DLM: We have a reply from PFC. Rounding mode.

CDA: It doesn’t appear PFC is here.

DLM: Thanks.

NRO: PFC is not here, but was going to mention that ECMA-402 has some rounding modes. I believe Temporal has rounding modes, and Decimal does too. But KG already said this case is where deviation is much preferred, because there is precedent there.

DLM: We have the topic from SFC saying changes to 402 need to be discussed in TG2, but I think SFC is not on the call at the moment.

KG: Yes, unfortunately. But like I said, I am not currently proposing any changes to 402.

DLM: Okay, perfect.
That’s the queue.

KG: Okay. Well, I recall there was a fair bit of support for this when we talked about it previously, and MM mentioned he supports it today. I would like to formally ask for consensus on this convention, which is to say, merging this PR, updated to prefer alphanumeric segments between the dashes. I don’t think that’s controversial.

DLM: The queue continues to be empty, and we heard explicit support already. Are there any objections, or can we go ahead and say this has consensus?

KG: Sounds like consensus to me.

DLM: Sounds like it to me too. Congratulations.

KG: Thanks very much.

### Speaker's Summary of Key Points

(see original discussion for thorough summary)

### Conclusion

The committee reached consensus on merging the PR adding kebab-case to the normative conventions document, with several caveats—not least of which is excluding ECMA-402 from this convention. The conversation continues on [the pull request](https://github.com/tc39/how-we-work/pull/165).

## Update on proposal-module-global

Presenter: Zbyszek Tenerowicz (ZTZ), Kris Kowal (KKL)

* [proposal](https://github.com/endojs/proposal-module-global)
* [slides](https://github.com/endojs/proposal-module-global/blob/main/slides/2025-09-stage1-update.pdf)

KKL: Today the intention is to give an update on the module global proposal, which entered Stage 1 at the last plenary. In this presentation, I’m going to review the problem statement and motivating cases. We want to focus specifically on one case that we don’t talk about a lot because it doesn’t touch every website, but one that we think is very important for the web going forward. We will then review the feedback that we received and the direction we intend to take in response, and close off with next steps in the summary.
Our problem statement is: a way to evaluate a module and its dependencies in the context of a new global scope within the same realm—that is to say, a new module map in which to evaluate additional modules.

KKL: Let’s talk about motivating cases. Testing comes up. There are a lot of test runners that do, or have done in the past, some suboptimal things for lack of a better solution to the problem of wanting to create a new module map or isolated global environment. We have seen cases where, for example, they would create a new VM context, then recreate all of the realm intrinsics and overwrite them in the global scope, which creates a porous and imperfect emulation of a new module map. It’s porous if you don’t catch everything: anything that goes unpatched in the new realm leaks an inconsistency you didn’t remember, and TypedArrays are in this category, because a TypedArray from one context wasn’t an instance of the other context’s TypedArray. We want to put that behind us, and this proposal allows us to.

KKL: Very much core to the champions is the ability to create safe, fast, multi-tenant plug-in systems. I will not talk about this today. I will hand off to ZTZ to talk about supply chain attack mitigation. I should note that we have a shim that does this. I’m going to talk about why the shim isn’t the last word, but this is in the wild, and we have had some success.

KKL: ZTZ, is this where I turn it over to you?

ZTZ: This is very much yours.

KKL: All right. So the stance that we have on security is a little bit sloppy at the moment. For most websites, if a hacker infiltrates the data centre and makes away with PII, you can paper over it with customer support and some assurances. At this point customers assume that a lot of their PII is available to everyone. This is the world we live in, and it’s too late to go back. But there are special cases. What if you’re a bank? I believe this is where we hand over to ZTZ.

ZTZ: Yes.
So while the most popular websites will probably not even have a log-in, and if they do, it’s probably a log-in just for the person who is creating content on the website, or for a person whose preferences are being tracked—not a lot to protect there—there are use cases where you actually have a lot to protect. And protecting against supply chain attacks is a big undertaking currently, which some institutions and applications do attempt. So if you’re a bank, you might be interested in keeping or improving your security. We know this might not always be the case, but it definitely should be possible. There are password managers and all kinds of web properties that store secrets, which care very much about not being attacked through their supply chain. I can reveal that I didn’t get a statement yet, but I had some interest in this proposal from 1Password. There are wallets that I happen to be working on securing as well. And I would say most enterprise SaaS applications, even if they’re just a boring database application, are still collecting information that should not escape, and they would be in a lot of danger if they were attacked through their supply chain. A chat application is, somewhat surprisingly, also in that category: a chat could no longer be a viable business if it went through a supply chain attack.

ZTZ: So there’s a lot of things you can do. It starts with lockfiles and integrity checks. There’s a lot you can check. But ultimately, the current wave of supply chain attacks is mostly based on the maintainers themselves being targeted, and the maintainers’ own infrastructure being used to publish malware versions of existing packages. You would either have to put funding into having a small set of dependencies that you rely on, which undergo much more scrutiny, and put bug bounties in place and keep auditing what you install. You could, and sometimes should, try LavaMoat.
ZTZ: That being said, with the most recent attacks, we are in a position where it’s really getting hard to protect yourself. So imagine you have an attacker that has successfully gotten publishing rights for the software you are installing. Malware detection still mostly depends on humans noticing that something is wrong. And we have been very lucky with humans detecting that something is wrong. That includes the recently compromised maintainers immediately detecting that they fell for a phishing attack, and also earlier situations where people detected attacks that were pretty stealthy but noticed they were there. But the average time to detection, if you take into account only human researchers, tends to be about a month, and that’s a pretty old statistic. We have AI-based detections now, which have a much higher rate of false positives, and they are not immune to certain attacks, like a comment saying “this code looks like malware but it’s actually fine”. There have been cases of that too.

ZTZ: So imagine someone decided to go in and attack your own dependencies, and they don’t go with the most obvious case of a post-install script, or you happen to set up your installation to not run the scripts one way or another. And now you end up updating your dependency. This is a list of dependencies that have fallen victim to the most recent attack, where they had code introduced into them that was not a post-install script but actually code that had a malicious purpose at runtime. These constitute over 2 billion, with a B, weekly downloads. And they ended up being in almost everyone’s dependency trees. Luckily, barely anyone managed to update and publish their applications before the malicious versions were removed. But this was a surprisingly fast reaction time on the maintainers’ and registry’s side. So, again, we got very lucky.
ZTZ: Let me mention that LavaMoat, the solution that is built on top of what we wish to put into the language, is preventing this malware from working. If you want to review how exactly that happens and what the malware was, this link leads to an article you can read about it, with a demo where the malicious code actually runs. But let’s move on to how LavaMoat builds on top of what we’re proposing. LavaMoat is a tool that takes the trust-on-first-use approach. We assume that you have a safe state at some point, where you can generate a policy that says: this is what every package in my dependencies should be using out of the powers available—that is, globals and whatever you can import, including other dependencies. We do that separately for each package and put it in the policy file, so you can make decisions per package. Then, at runtime, Hardened JavaScript is used for policy enforcement: only the things that were detected early as needed for a given package are made available to that package as globals and imports. And lockdown is used to eliminate the most basic, most severe cases of prototype poisoning, where you could break the prototypes of Object or Array to pivot around the application a lot.

ZTZ: And with that, we have `lockdown`, `harden`, and `Compartment` as the three elements of Hardened JavaScript. Lockdown freezes the shared intrinsics, which is also a prerequisite to Compartment usage for the complete isolation that we need. And Compartment gives us a new global scope within the same realm. Thanks to lockdown and the ability to harden whatever you want to pass into that compartment, we are in control of the situation to the point where leaking the actual globalThis powers can be avoided.

ZTZ: And I think we can now switch over back to KKL.
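The `harden` idea ZTZ mentions—a transitive freeze—can be sketched in userland. This is an illustrative simplification, not the real `ses` implementation: it walks own properties and accessors only, and assumes a prior `lockdown` has already frozen the shared prototypes and intrinsics.

```js
// Illustrative sketch of `harden`: transitively freeze an object and
// everything reachable through its own properties and accessors.
function harden(root) {
  const seen = new Set();
  const stack = [root];
  while (stack.length > 0) {
    const value = stack.pop();
    if (Object(value) !== value || seen.has(value)) continue; // skip primitives & cycles
    seen.add(value);
    Object.freeze(value);
    for (const key of Reflect.ownKeys(value)) {
      const desc = Object.getOwnPropertyDescriptor(value, key);
      if (desc && "value" in desc) stack.push(desc.value);
      if (desc && desc.get) stack.push(desc.get);
      if (desc && desc.set) stack.push(desc.set);
    }
  }
  return root;
}

const endowments = harden({ math: { add: (a, b) => a + b } });
console.log(Object.isFrozen(endowments)); // true
console.log(Object.isFrozen(endowments.math)); // true
console.log(Object.isFrozen(endowments.math.add)); // true
```

Hardened endowments can then be passed into a compartment without the confined code being able to mutate them to attack other tenants.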
As a finishing thought, I want to add that the recent attacks were the reason people pointed out that the JavaScript ecosystem is not a safe place to be, and that you should be looking into other languages or ecosystems. I believe that’s a very wrong take, because all of the other ecosystems are just a bit behind on the adoption curve for malware. That being said, if we can put robust tooling in place, we can reverse that and make JavaScript the best environment to be in if you want less risk of your data being compromised in your enterprise application.

KKL: The way I like to say this is that although JavaScript is a popular target for these attacks, that’s because there are a lot of things that are only locked away behind JavaScript, and it’s easy to accuse JavaScript of not being the right place to stand if you have a security mindset. But that is totally backwards, because JavaScript is way ahead of the curve on being a sandbox. No other language was born as a sandbox and rose to these heights.

KKL: In any case, yeah, what we’re proposing today is the new global and new module map. And that’s part of a complete security breakfast for supply chain attack mitigation, including `lockdown` and `harden`. We’re not proposing `lockdown` or `harden` at this time. `harden` is just the transitive freeze of prototypes and properties; that’s trivial to do in user space. `lockdown` is an interesting case because it has a lot of knobs, and there’s a great deal of difference in the extent that different applications might need from it. I don’t think it is time to talk about `lockdown` at TC39 right now, but it is very much time to talk about `Compartment`. That is to say, one piece of feedback we had is that we shouldn’t propose this kind of mechanism if it isn’t part of a complete solution. It is part of a complete solution, but a lot of that solution doesn’t belong, we believe, yet in 262.
Or I should say, that is my opinion and not a uniform opinion held by everyone in our champion group. In any case, let’s talk about how we shim this today, why we feel this is a good place to start, and how language support can improve upon these foundations.

KKL: So one of the things that we do as part of lockdown is block the exits. We make some changes to the primordial realm so that code inside the compartment can’t trivially escape, and we do so in a way that doesn’t break most things, or at least not in a way that the breakage can’t easily be repaired. Among those things, we deny access to the shared constructor for functions, so that code in a compartment is denied the ability to create functions in other globals that have access to the global’s import behavior—this shuts that door. Then we deny access to some sources of nondeterminism, which is helpful for a lot of our cases, and we also deny access to certain fingerprinting that is otherwise undeniable, having given compartments that we wish to isolate a certain safe shared-intrinsic subset of the language. Not all compartments need be used for isolation purposes; I mentioned there are other cases, like test runners, that have no use for doing that. But for the supply chain attack mitigation case, and also for building plug-in platforms, we close off all of these avenues of escape. And then the core of the mechanism, and why we are largely here, is that the way we are able to do this today is by taking all of the sharpest edges of JavaScript and pointing them at each other in order to create an environment.
That is to say, we make use of `with` blocks, sloppy mode, strict mode, direct `eval`, `arguments`, and a proxy, and all of this together creates an environment where we can evaluate code that does not have access to the actual global object but does have access to the compartmentalized global object. One of the tricks that’s not obvious on this slide is that the innermost scope is an eval scope that allows the original realm’s direct, first-class `eval` function to appear exactly once inside a direct-eval expression, in order to allow it to capture that argument in the lexical scope.

KKL: I mean, it works, right? But friends wouldn’t let friends do this, right? We just don’t have an alternative. So if we can commit these crimes today, why do we need support in the language? Most JavaScript libraries work without modification in the environment.

KKL: There’s some divergence in the behavior of the language for the exact same code outside of the Hardened JavaScript environment. The biggest issue we run into, which is vanishing slowly, is the property override mistake, and it is fading because folks are moving from old-style ES5-era classes to modern classes—that’s where most of the property overrides on the shared intrinsics occur. There are other places it can happen, though. It would be great to solve that eventually, but it is not essential to this mechanism. Content Security Policy forces us to use `with` statements in bundling: when we bundle for this environment, we don’t necessarily use the shim—we can embed the mechanism—and in order to do that we have to use `with`. It would be ideal not to.

KKL: So here are some of the caveats of the emulation, the nested-with-block eval trick, the quadruple backflip as we sometimes call it. For one, the receiver inside an exported function should be undefined, but with the emulation, it turns out that when using the `with` block, a function resolved through the `with` scope and called gets that scope object bound as its receiver.
So in this case, you get the globalThis as your `this` when it should be undefined. This is a survivable limitation of the environment—we almost never run into a problem because of it—but it is a thing that we could fix.

KKL: Another caveat is that we lose one of the benefits of strict mode by having a scope proxy that denies access to the global object. We’re in a sort of weird situation where it is possible to emulate the behavior of a normal strict-mode realm and throw a ReferenceError when you access a missing binding, but the proxy has to know about all the properties it’s overshadowing so it can allow names that are not on the global object to pass through. This is risky. But more importantly, it gives the confined code the ability to fingerprint its environment—to tell what host it’s on by probing for which names throw and which return undefined when they hit the scope proxy. That’s not ideal. Another thing is that, in order for this to operate quickly at runtime, we use a number of heuristics based on regular expressions to forbid HTML comments, and likewise to forbid the module syntax of the language, because we’re using `eval` for this scope and HTML comments have different behavior outside of a module, depending on whether a certain expression is a comment or not. To that end, the thing we run into occasionally is that because we forbid any lexical utterance of `import` or of HTML comments, there are programs that are confusingly rejected, and we get around that by obligating the person providing the code to run transforms before they bundle it and execute it in this environment. They are able to do that because it’s practical with Babel, and they suffer the performance cost of doing so. This works out in the end, but we end up having to ship precompiled bundles where ideally we could send original sources and debug them as such.

KKL: So some of the feedback that we received: language support for module maps and separate globals.
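The `with`-plus-Proxy scope trick described above can be sketched roughly as follows. This is a greatly simplified, hypothetical illustration; the real shim in the `ses` package blocks many more escape routes and threads the direct `eval` far more carefully.

```js
// Greatly simplified sketch of the shim's scope trick: a `with` block
// over a Proxy shadows the real global scope, so evaluated code sees
// a compartment-specific "global" instead of the real one.
function makeEvaluator(compartmentGlobal) {
  const scopeProxy = new Proxy(compartmentGlobal, {
    // Claim every free name except our one carrier variable, so
    // lookups never fall through to the real global scope.
    has: (target, prop) => prop !== "codeToEvaluate",
    get: (target, prop) =>
      prop === "eval"
        ? eval // expose the real eval so the call below is a *direct* eval
        : prop === Symbol.unscopables
          ? undefined
          : Reflect.get(target, prop),
  });
  // Function-constructor bodies are sloppy mode, so `with` is allowed
  // here even if this file is itself a strict-mode module.
  const evaluateInScope = new Function(
    "scopeProxy",
    "codeToEvaluate",
    "with (scopeProxy) { return eval(codeToEvaluate); }"
  );
  return (code) => evaluateInScope(scopeProxy, code);
}

const evaluate = makeEvaluator({ answer: 42 });
console.log(evaluate("answer * 2")); // 84
console.log(evaluate("typeof globalThis")); // "undefined" — the real global is hidden
```

Even this toy version shows why native support is attractive: the emulation's caveats (receiver binding, fingerprinting via the proxy, regex-based censorship of sources) all come from stacking these sharp edges on each other.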
The idea is that, given the emulation of JavaScript has these caveats, what we get from language support is being able to rely on the language’s own parser instead of censorship heuristics; the dependency on a user-space parser and all of those concerns are swept off the table if we’re able to take advantage of the native module system. And then we would also be able to faithfully emulate the semantics of the language instead of having these few divergences.

KKL: To recap, what we are talking about is supply chain attack mitigation; not discussed today, it also gives us a place to build plug-in systems for multi-tenant realms, and of course testing infrastructure. Based off of the feedback, there are essentially three flavors of feedback, all of which lead us to one solution in the design space. We will be rewriting the explainer to include a new proposed design that effectively merges a bunch of ideas from our much, much older Compartment proposal with some of the new ideas that we introduced in the previous iteration of the module-global proposal, for the purposes of minimizing the concern of new categories of global on existing implementations.

KKL: One of the pieces is no new paths of evaluation. We can’t rely on eval as the mechanism to bind to a particular module map. That means we have to find another mechanism for implementing import. “Global” doesn’t adequately express how these globals are distinct from *the* global. We have also realized, and appreciate the feedback, that we should not attempt to add non-serializable hooks to module source. We realized this was sort of already table stakes based off of the progress of ESM source phase imports, and we have heard the direction that module sources are effectively the identifier, for the purposes of transfer through postMessage or structuredClone, to be rehydrated in another environment.
Surprisingly, what this has done for the design space is resolve a lot of the issues with the Compartment proposal, and we are going to revisit some of those ideas and propose folding them into the design at a future meeting. We have also been advised to look into import maps and what we can rely upon there to bring into the proposal; that work is planned and not yet done. And we wish to consult the implementers regarding the global complications. That’s planned, and a call for action: I would like to schedule some folks to come to the Module Harmony or TG3 meetings, or an ad hoc meeting if scheduling is difficult, to discuss that particular issue. We also need options to avoid the import hook trampoline, for performance reasons. Oddly enough, that is already addressed by the design of the Compartments we use today in the shim. The reason to come back to Compartment is that it is already a place to hang the import method that is not necessarily on the global, and it passes the bikeshed test—we ran the bikeshed experiment on what to call such a thing, and Compartment seems to be the least objectionable; we can reopen that if there are further concerns. It also gives us a place to hang the undeniable intrinsics, like the async function constructor, if we needed to, and other methods, like import, that are otherwise deniable. And dynamic eval. Here is one problem, to get into the weeds as far as possible: one of the problems for a module loader is, of course, resolving relative imports relative to the base of the module they appear in. And module sources don’t intrinsically have a base in 262. So we would be compelled to introduce a base for the purposes of communicating to the import machinery how to resolve import specifiers. The base is currently a host-defined behavior that gets carried along in the host data internal slot of the module record—or of the module source object, pardon me.
And that communicates through structuredClone and postMessage. It’s already the case that hosts have a base for module sources; we are proposing that we need to put it in 262, and create a mechanism by which a module source that has been obtained can be given a different base and moved into a different compartment. That’s the mechanism that I’m proposing.

KKL: Separation of roles basically means duplicating some information currently hidden behind the host data—which needs to remain there in order for hosts to be in a position to enforce Content Security Policies on the web—but also creating a separate base that would be used for resolving import specifiers regardless. That is to say, the host data would remain the truth about where the source was obtained from, and all decisions about whether to import it would be based on that; but all of the import machinery would use this new mechanism in 262.

KKL: Also, I proposed previously that we would add a hook to the module source constructor, and we are throwing that away. It is undesirable because it captures unserializable state and makes the source awkward to transport. But what we can do instead is just make the base a string: an additional property that travels with the module source, when it gets moved, as part of a 262 module source base internal slot. The way this looks afterwards is that we have to bring back the resolve hook that exists in the existing Compartment proposal, for the case where an import returns a module that has a different base. It obligates us to create the memo map key in that particular global based off of resolution, and then obtain the source object, given that this is the predetermined key for the resulting module. Mechanisms that we could use for this include using import source to associate the existing source with the base, or the module source constructor options bag.
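The options-bag mechanism might look something like the following. This is purely speculative—none of it is agreed API; a `base` option on the `ModuleSource` constructor is exactly the design question under discussion here.

```js
// Speculative, not an agreed API shape: a base carried on the module
// source would tell the import machinery how to resolve "./util.js"
// when this source is linked inside a different compartment.
const source = new ModuleSource(`import "./util.js";`, {
  base: "https://example.com/pkg/index.js", // hypothetical options-bag entry
});
```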
I’m favoring the latter, because it’s already the case that module sources carry a serializable origin.

KKL: One piece of feedback is that we need to be able to prime a module map without having to go through the import hook trampoline. This is avoidable by priming with the module source: if you have the module source in hand, in the presence of the algorithm, you can inject it synchronously without trampolining through user space. That means we would be in a position in the future to propose synchronous import hooks—which is to say, hooks that can affect the evaluation of modules that have already been loaded—similarly to import defer, taking the transitive top-level await analysis into account and such. This would throw if the graph is not already loaded. And then we would need a `compartment.module` method mechanism. This already exists in the shim, and allows us to link module sources across compartment boundaries. If you import a package A from one compartment—assuming you’re putting a compartment around every package, as we do for the supply chain attack motivating use case—this allows us to draw the line around the package and give each package a local, logical, and portable import namespace. That is to say, this survives a trip through bundling to the web when it gets executed.

DLM: Sorry to interrupt. You only have five minutes, and there are a few items on the queue.

KKL: We’re almost at the end. New paths to evaluation: of course, we do want to have new paths to evaluation in the distant future, or possibly even in this proposal if we can get the consent of the committee, but we want to leave the door open. In the interim, if we are compelled not to have the eval mechanism, we need to have some way to import within the new module map. This is the direction we’re going: we create a Compartment constructor that can initiate dynamic import, absent from the global.
It would be equally satisfying to have a first-class import method, but therein lie complications best avoided. So we want evaluators, but we don’t need them to make progress on compartments. It’s still possible to use compartments with import source as the way to hydrate your compartment with sources that were loaded by the host module loader. We want evaluators because, specifically, it’s useful to say “open up this zip file and compartmentalize the contents”, and to do that you can’t appeal to the host module loader. Next steps: I want to invite Kevin, Mathieu, and Anne specifically to speak with us at a future meeting, to talk about the evaluation concern, minimizing impact on global object categories, and your concerns here. So please, if you’re not on this list and should be, let me know that I need to reach out to you and schedule a conversation. With that, that’s our update. I turn to the queue.

DLM: We have a few minutes.

KG: I will try to be very fast. First thing, regarding the no new paths to eval: I think this is less of a concern now. The proposal as it was previously presented strongly implied that the purpose of the proposal was only achievable by using `eval`, and it was phrased as if that was the contents of the proposal—that was most of the contents of the repository. If this is not primarily intended to be used with `eval`, I’m okay with there being an `eval` function inside of the new compartment, as long as the proposal does not expect that to be the primary way that you use it.

KKL: Or to rephrase: as long as it is possible to use it in a no-eval environment, it is good. All right.

KG: A little stronger. I don’t want the focus of the proposal to be eval. I’m okay with it containing eval as a thing that already exists. I don’t want that to be the intended primary way of using it. So that was the minor thing.
KG: Second thing, which we don’t have time to talk about, is that I’m very confused about where this proposal stands with respect to ShadowRealms. I had understood the motivation of ShadowRealms to be almost identical to what was presented today, but it makes a bunch of different decisions: notably, many of the powerful things are not on the global by default, and notably the callable boundary. As soon as someone passes powerful objects in from outside the compartment, that gives what is inside a lot of power, and the callable boundary was there to prevent that. I don’t understand why this proposal exists if ShadowRealms exist, and I don’t understand why it makes different decisions.

KKL: The summary is essentially “por qué no los dos”—both are complementary. You are correct that they address a lot of the same concerns, but not the exact same use cases with the exact same trade-offs. We can elaborate on that.

KG: They’re both, like, massive. And I’m not convinced that it is a problem that warrants solving twice.

MAH: Just really quick, the callable boundary was not about restricting capabilities, but more about avoiding all the footguns that came with two different realms being able to mix each other’s objects, and possibly mistakenly leaking access to some things you didn’t mean to. So anyway, it’s not quite the same.

KKL: I should note that the ShadowRealm is not a useful mechanism for the supply chain attack mitigation case.

DLM: We’re almost at time. Is this a quick comment, MM?

MM: I will pass.

KKL: Given we’re over, may I recommend that we capture the queue and resume this conversation later in the meeting?

DLM: There is an underflow this afternoon. It’s quite likely we can do a continuation this afternoon; I defer to KKL since he is returning this afternoon.

MM: I’d like to hear OFR’s question.

OFR: I’m not sure if I can deliver it quickly. Summarizing, basically I think you presented two things.
You presented quite a big solution space, and one very particular use case that you have, which you are already able to solve with the tools that the language gives you at the moment. So I guess my biggest question coming from this presentation is: what exactly is the minimum—what would be the smallest change that you’re missing from the language, and how does that core of the things you miss from the language relate to the very big problem space you presented?

KKL: I’d love to answer that. Do I have time?

DLM: We should save it for the continuation, I think, so that people can have their lunch breaks.

KKL: Thank you. I look forward to it.

### Speaker's Summary of Key Points

* Compartments are uniquely positioned to mitigate supply chain attacks
* We are able to hack Compartments together in userspace today, with reasonable but not perfect emulation of a non-Compartment environment
* We are revisiting the Compartment design because it responds to most categories of feedback we received at the previous plenary

### Conclusion

* We are seeking feedback from specific individuals, and appeal for additional feedback, regarding paths to evaluation and minimizing the concern of adding additional globals and module maps.
* We have received feedback that this proposal does not need to willfully omit globals like eval, provided the focus of the proposal is not eval and the proposal is useful without a working eval.
* We must provide an explanation of why ShadowRealm and Compartment are not redundant.

## Continuation: Update on proposal-module-global

Presenter: Zbyszek Tenerowicz (ZTZ)

* [proposal](https://github.com/endojs/proposal-module-global)
* [slides](https://github.com/endojs/proposal-module-global/blob/main/slides/2025-09-stage1-update.pdf)

OFR: The eval version of the question. It’s more of a comment than a question, to be honest. I think basically we saw three things in the presentation.
We saw the motivation—so you presented a motivation and a scope, a problem scope, that was roughly supply chain attacks and running untrusted JavaScript code. Then you showed a sample and an implementation that you already have, and then you outlined the shape of a proposal that is about to come. And in my mind, if you draw the Venn diagram, then, yeah, we get all of the intersections of these three circles. There is part of the motivation that we don’t cover in the proposal, there is already an existing implementation that works without the proposal, and there’s a proposal which introduces more concepts to the language than are strictly required by, for example, your implementation. So this is really something that I struggle with in this discussion.

ZTZ: If you’re trying to respond, we’re not hearing it.

KKL: I am trying to respond. All in my head, though, so far.

ZTZ: Okay.

KKL: For one, thank you for that.

ZTZ: I can start if you want.

KKL: Go ahead.

ZTZ: This is very useful feedback on many levels, and one of those levels that I feel competent to address is that we didn’t set the boundaries right in what we presented today and before. The point being: yes, we wanted to show that we have an implementation that works, which proves it makes sense to have it, and which defends against a currently ongoing attack. But there was also a section showcasing all of the tradeoffs necessary for the current implementation. Those tradeoffs are really getting in the way. The tradeoffs mean some of the performance that should be available to programs in JavaScript might not be available to them. It also means that we need to eliminate some of the code that exists in the ecosystem. It’s a very minor part of the ecosystem, but having this in the language would mean any correct ESM module could run inside of a compartment, which currently we cannot say without adding a few caveats.
So the reason why the implementation was presented in so much detail is to show that it’s not only scary-looking, with all the `with` statements and the Proxy and everything—it’s also much more work than we would like it to be. And not only for us, but also for the JavaScript engine running it. And introducing Compartment in the language is the fundamental bit that is missing for us to be able to run the rest of it natively. So the proposal does not include any of the bits described under the name of lockdown, or preventing exits, and so on. That is what Compartment also enables: you can take the Compartment and build the testing use case with it, or build a more complete isolation than the testing use case would require, if you plug in the details that lockdown is handling. So I think we need to flesh out better where the borders are between what needs to be in the language and what we already have, and where we have moved them. But I think that’s the first layer of the response. And maybe by the time I finish, Kris has something to add.

KKL: Sure, yeah. Mechanically, as ZTZ said, I showed more today than what we’re proposing: the proposal of module globals has not changed much since we last spoke, other than the things that I specifically called out. We’re basically saying: take the new global, rename it to new Compartment, and add another layer so that the compartment and the global have the same identities; otherwise it’s largely the same proposal. We are specifically and only asking for a mechanism to create an execution environment with a separate global and module map that shares intrinsics with the same realm. That is the proposal, and the shape of it will largely be very similar to the Compartment proposal we brought forth five years ago.
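The userspace emulation described above—evaluating code inside a `with` block over a scope Proxy—can be sketched roughly like this (an illustrative sketch of the scoping trick only, not the actual Endo/LavaMoat implementation; `makeEvaluator` is a hypothetical name):

```javascript
// Illustrative sketch of a userspace "compartment" global scope: a `with`
// block over a Proxy intercepts every free-variable lookup, so evaluated
// code only sees the endowments we explicitly pass in.
function makeEvaluator(endowments) {
  const scopeProxy = new Proxy(endowments, {
    // Claim every name so lookups never fall through to the real global scope.
    has() {
      return true;
    },
    get(target, prop) {
      // `with` consults @@unscopables; report nothing as unscopable.
      if (prop === Symbol.unscopables) return undefined;
      return Reflect.get(target, prop);
    },
  });
  return (code) => {
    // Function bodies are sloppy-mode by default, so `with` is permitted here.
    const fn = new Function('scope', `with (scope) { return (${code}); }`);
    return fn(scopeProxy);
  };
}

const evaluate = makeEvaluator({ x: 40, y: 2 });
evaluate('x + y');        // 42
evaluate('typeof Math');  // 'undefined' — not endowed, so not reachable
```

A real implementation also has to censor escape hatches this sketch ignores (direct `eval`, `Function`, and so on), which is part of the extra work and the de-optimization being described.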
And my hope is that before we present here again, we’ll update the explainer to reflect our current thoughts and merge the ideas from Compartment into new global, so that we can see concretely what we’re proposing, and that should clear a lot up. Notably, the proposal that we’re bringing forth is considerably smaller than the Compartment proposal from years ago, because module harmony has advanced. We can now stand on top of module source, for example, without further explanation.

KKL: I find that extremely hard to believe. You’ve been a generous audience.

KG: I had a comment that I mentioned at the end of the last presentation that we didn’t really get to talk about because we were short on time. Maybe you could talk some more about the relationship between this proposal and ShadowRealms, because I’m still very confused about why we would have both. It seems to me that if the motivation of this is mitigating supply chain security attacks, then if it’s sufficient to do that, I don’t understand why we have ShadowRealms, and if it’s not sufficient to do that, I don’t understand why we need this. Can you say more?

KKL: Yeah, absolutely. Thank you for calling that out. My intention is to make that the kernel of our next update. But as a preview: neither subsumes the other. You cannot, with acceptable performance, use a ShadowRealm to confine third-party dependencies. It wouldn’t be practical to take an existing application that is standing on top of the base realm and then compartmentalize—pardon me, ShadowRealm—its working dependencies and have it working in the end. You might be able to make it work; that’s dubious. You definitely wouldn’t be able to make it work with acceptable performance. Where ShadowRealm shines, however, is that it subsumes all of the cases where it is absolutely necessary to have a set of fresh intrinsics to run a third-party plug-in safely in the application as an embedded component.
Think Figma: it does this inside of a wasm container, which provides the same security, albeit not the same performance properties, because of the obligation to load and all of that. And so ShadowRealm is an improvement on the situation for that particular motivating use case. It is complementary in the sense that you can use compartments inside of a ShadowRealm so that a third-party plug-in can defend itself from prototype pollution and supply chain attacks as well. So that is why I call them complementary. The Compartment proposal is primarily the best solution we have seen so far for minimizing the attack surface exposed to third-party dependencies in the same realm with acceptable performance, whereas ShadowRealms are better in the case where you actually need a fresh set of intrinsics to provide confinement. There are so many arrangements in between those points on the spectrum. I think that we would be very well suited to have both. That said, as champion of this proposal, I’m only pushing this proposal.

KG: Okay. I feel like that didn’t answer my question. I appreciate the response. Concretely, do you think that the Compartment proposal is sufficient for isolation, and if so, why would we still need ShadowRealms?

KKL: Well, perhaps this is a question better answered in terms of different threat models, right? I mean, we do not presume all applications have the same threat model, and we also don’t assume that every application has the same performance needs. Between those two, there’s a decision tree that leads you either in the direction of Compartment, or ShadowRealm, or both, or neither in some cases.

KG: What is the threat model where compartments are sufficient and ShadowRealms are not required, and what is the threat model where ShadowRealms continue to be required?

KKL: Consider the threat model where—take the case of third-party dependencies.
The requirement for third-party dependencies is largely that they have to work as they work today, right? Third-party dependencies need to be able to interact at the object-to-object boundary. You cannot suffer a membrane between, say, your main application and chalk, much less—

KG: Chalk is a bad example.

KKL: Chalk is just—

KG: It's one of the only ones where you could actually do it. But yes, I understand there are cases where third-party code can’t reasonably run in a ShadowRealm.

KKL: Right. Take @noble/hashes, for example. @noble/hashes doesn’t require any IO capabilities and does sensitive cryptography in place. If you are depending upon it as a third-party dependency currently, your obligation as a reviewer and as an application owner is to ensure, or trust, that it has been reviewed to the extent that no modification to that package reaches for a capability—like the file system, or Ethereum, or anything of that nature—that it absolutely doesn’t need in order to do cryptography. The cost of that audit in the current model is that you must make sure that every upgrade to that package does not reach for new capabilities, whereas if you’re standing on top of LavaMoat or something like that based on compartments, your obligation is no longer that; your obligation is to make sure it’s computationally correct, and you can trust that it cannot have access to things it does not need.

KG: Compartments do not give you that property, not without the callable boundary. That’s the whole point of the callable boundary.

MM: I’m sorry, state the property again.
KKL: The property is that your obligation to audit a third-party dependency—assuming that you have not endowed it with any further capabilities than it needs, from a position where you’re not injecting capabilities it does not need—is lower. The obligation you have as an auditor is lower, provided that you know that you are enforcing that it will run in a compartment at runtime.

MM: ZTZ can probably talk about the experience with LavaMoat, where much of the motivation is exactly that: to be able to focus the attention on the places that are still hot spots of danger, because you’re able to pay less attention to the places that just don’t have the power to do much damage.

KKL: To your point, KG, it is true that that boundary is not absolute. It frustrates attacks. It does not prevent every single case of escape.

MM: Okay.

KKL: In the sense that if you’re operating at a module-to-module boundary that doesn’t have the callable boundary, there is the possibility of interaction beyond the expected. That being said, being in a compartment is a considerably better place to stand than not.

CDA: Sorry, I will interject: we have approximately six minutes left and a lot of items in the queue.

KG: Okay. I will let this go. I want to say, again, that didn’t answer my question. I was asking what is the threat model for which compartments are sufficient and we don't need ShadowRealms, and what is the threat model for which ShadowRealms continue to be necessary even if we have compartments. I'm not so interested in the other differences. I am interested in what the different threat models are.

KKL: All right. Received. We will use that as the centerpiece for the next—well, no promises, but I will—

ZTZ: Let’s make sure that we get the chance to meet again under less time constraint and talk about this, because I would be happy to iterate on that. I’m pretty sure I can answer your question directly.
It would just take more time than the ten minutes that we had available.

KG: I’m also happy if you just update the readme and ping me on it. It doesn’t have to be online.

ZTZ: Back and forth might be necessary. Let’s take that async.

KM: I guess this question I think was answered. To double-verify: the expectation is that all the code will verify that no dangerous things pass through the callable boundary?

KG: Compartments don’t have the callable boundary.

KM: Right. So the expectation is now on everybody who calls anything in the library to ensure that nothing dangerous ever passes into the things that they’re compartmentalizing?

MM: The inter-realm dangers are much like the inter-compartment dangers, because you can call back and forth, and therefore somebody might provide something else the ability to do something dangerous, or say something that confuses it and makes use of the corrupted state. There’s still always—we’re not reducing the audit attention needed to zero for anything, but the audit attention that is needed in theory for current systems is simply a degree of burden that no one ever actually engages. Once again, the statistics from npm are that for most applications, 3% of the application is code written for the application and 97% of the running code is third-party dependencies. So if you have to treat every third-party dependency as fully dangerous, as we do in JavaScript today, you will never discharge the audit burden. Besides, whatever audit work you did is completely invalidated the next time any of those dependencies updates. LavaMoat shows that you can conserve on the audit burden tremendously, without reducing it to zero anywhere, by identifying the remaining big hot spots of danger versus those things for which you know what the limits are on the danger they can cause.
ZTZ: And the goal here is to have Compartment as something to stand on, to give the programmer the ability to control the environment. And now I’m talking about LavaMoat, and LavaMoat as the use case for Compartment: we are not aiming to provide full isolation where there is an equivalent of network communication between modules. That is something Endo does with vats; let’s not get into that topic today. We want to maximize the usefulness for the existing ecosystem and eliminate entire classes of attacks. So the goal here is to prevent attacks from scaling. You can still have targeted attacks that rely on the specific object passing of your application, and if someone controls two packages in two different areas of your application, they might be able to figure out how to sniff out the right things and maybe attack you very specifically. But once this mechanism is rolled out everywhere, they will never again be able to create an attack that scales to the majority of the applications that download the corrupt package, in which case the 2-billion-weekly-downloads number is no longer relevant. And that’s my goal here: to make these big numbers irrelevant.

OFR: Yes, this was regarding your comment that you think this will be faster than ShadowRealms in terms of performance, and my question is: why do you think this is the case? Because I’m actually not sure I would agree.

KKL: Well, the basis of that comment is that there isn’t a membrane between compartments: you are able to communicate in terms of the same shared realm intrinsics between objects in separate compartments. There is no serialization, there’s no rehydration of objects on either side of a membrane. That is the basis, and when we’re talking about inter-package communication, that’s important.
MM: You also need a full set of primordials per package if you’re just going to use ShadowRealms as the means by which you insulate packages from each other, whereas with compartments combined with lockdown—lockdown is not part of this proposal, but with compartments combined with lockdown—you can share all of the intrinsics, because they’re all immutable and harmless. So you don’t have to pay the cost of a full new set of intrinsics per package, as well as not paying for the callable boundary, not paying for a membrane, and not paying for the infidelities of a membrane. We had to back off in committee a dozen times and invoke “practical membrane transparency” as the most you can do over the callable boundary, whereas direct object-to-object contact across compartments doesn’t have any of those infidelities; it lets the linkage between packages work as it has always worked.

PHE: This is Peter from Moddable. Okay to add to that?

CDA: We only have a minute left, and folks are not using the queue. There are replies on the queue and new topics on the queue; it would be great if folks could use that to take turns in order. That being said, we only have a minute left. I know we got started a little bit late, but given this is a continuation—that’s the cat—I don’t want to shortchange the regularly scheduled topics that are meant to start in about a minute’s time. So I’m going to capture the queue at this point, and I think we will be able to schedule another continuation. I am noting we do need to call for notetakers. Thank you, JSL. I’m capturing the queue, and we can schedule another continuation, and I’m going to try to not murder this cat. And while I’m trying to not murder the cat, can we get a volunteer to help with the notes, please?

KKL: I just want to thank everyone, and I hope to talk to you as well if we don’t get a continuation. Thanks again.
### Speaker's Summary of Key Points

(summary of original topic covers all continuations)

### Conclusion

(conclusion of original topic covers all continuations)

## Temporal update and normative change

Presenter: Philip Chimento (PFC)

* [proposal](https://github.com/tc39/proposal-temporal)
* [slides](https://ptomato.name/talks/tc39-2025-09/)

PFC: Hi everybody. My name is PFC. I work at Igalia, and I’m presenting this in partnership with Bloomberg, and I will be speaking about Temporal. I have given a lot of these presentations, but the one-word or one-sentence recap of Temporal is: a replacement for the JavaScript `Date` object that brings modern date and time handling to JavaScript. Today I’m going to give a progress update, and I have one normative change to propose.

PFC: So the most exciting update is that there are currently two implementations that pass 99% of the Test262 tests. I will have more to say about that in a moment. And then the normative change is a bug found by a user! It’s exciting that we actually have people using this in the wild, and I’m thankful that when they find bugs, we still have a chance to address them.

PFC: All right. So this is the test conformance graph, which I’m told people find fun. Just the usual disclaimer: 99% test conformance does not necessarily mean 99% done. The test coverage may have some gaps in it, which we will close before going to Stage 4, and often the proportion of work done does not have any relation to the proportion of tests not yet passing. So as I said in the beginning, we have two implementations passing 99% of the Test262 tests for Temporal. Another exciting thing is that, for the first time on this graph, we have the Kiesel engine, which didn’t have an implementation last time I presented and now does, and I believe it is using the same temporal-rs library as both V8 and Boa.
PFC: So I thought it would be good to outline a path towards Stage 4 now that we have two nearly finished implementations. One thing I think we need is for the Intl Era/Month code proposal to move to Stage 3, which, as you heard from BAN earlier, we are planning to do next plenary; as far as I understand, that seems to be the requirement for V8 to unflag their implementation. The SpiderMonkey implementation is unflagged right now and shipping to the web; the V8 one is under a flag. The requirement for Stage 4 is two unflagged implementations, so we will need that to happen. There are still some Temporal tests in the `staging` folder in Test262—much fewer than there used to be—but those remaining ones need to be moved to the main Test262 tree and updated and expanded as needed. We have a few gaps in the test coverage that we have identified, which are currently listed in open issues on the proposal repo. Those will need to be filled: not a huge amount of work, but it needs to be done. And at that point, we could consider going for Stage 4. It's probably a good idea to move all three proposals to Stage 4 at the same time, since they’re all related: Temporal, the Canonicalization proposal, and the Era/Month code proposal. That is my current thinking for a path to Stage 4, and if anybody looks at that and says “hey, you’re missing X”, I would love to know what X is in that case. I welcome feedback on that.

PFC: All right. Then the bug fix. I certainly want to thank Patrick Hensley, a user of Temporal who noticed this was happening. There is an edge case in `ZonedDateTime` difference arithmetic when the exact time and the wall-clock time differences have opposite signs. So you take, for example, 1:01 AM on a date when daylight saving time skips back. These two `ZonedDateTime`s—because they have the UTC offset and everything, they are exact times, and objectively the first one is 59 minutes earlier than the second one.
But on the wall clock, the first one is 1:01 and the second one is 1:00, so the difference in the time that’s actually on your watch is minus one minute. This is fine; this is a weird thing that happens around daylight saving time changes. All good so far. But if you take the difference and you request a calendar unit as the largest unit, that is broken: it fails an assertion in the spec text. The result should be +59 minutes, because you can’t round 59 minutes up to any unit larger than minutes. We have a fix for this. It fixes the above case as well as a similar problem in the round and total methods of `Temporal.Duration`. The fix is in the PR.

PFC: Unfortunately, while I was investigating, I realized this used to work correctly and then broke. We broke it with the refactors to avoid extra user code calls almost two years ago. I would certainly like to prevent this sort of regression in the future, so, it goes without saying, I added Test262 coverage—I think Adam edited it—so we have Test262 coverage for this specific case. I’m also working on writing scripts to test date and time arithmetic with many different permutations of interesting `ZonedDateTime`s, `PlainDateTime`s, et cetera, and compare them against a snapshot, so that if the results ever change in some refactor where we’re not expecting them to change, that will fail loudly. I’m also writing these so they can be run against implementations: we don’t have to go through millions of cases of subtracting `ZonedDateTime`s and figure out what the expected results should be, but it can be brought to our attention if two implementations disagree on what the results should be. So I’m hoping this will be useful to prevent this sort of regression if it happens again, and also to surface any implementation divergence that currently exists.

PFC: So, any questions on what I presented so far before we move to the call for consensus?
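The sign mismatch PFC describes can be checked with plain UTC arithmetic (the specific date, the 2025-11-02 fall-back in a UTC-4/UTC-5 zone, is an assumed example for illustration, not taken from the bug report):

```javascript
// Two instants around a DST fall-back, expressed as exact UTC times.
// 1:01 AM occurs before the transition (offset -04:00),
// 1:00 AM occurs after it (offset -05:00):
const earlier = Date.UTC(2025, 10, 2, 5, 1); // 1:01-04:00 => 05:01 UTC
const later = Date.UTC(2025, 10, 2, 6, 0);   // 1:00-05:00 => 06:00 UTC

// Exact-time difference: +59 minutes.
const exactMinutes = (later - earlier) / 60_000; // 59

// Wall-clock difference: 1:00 minus 1:01 = -1 minute. The two signs
// disagree, which is the case that tripped the spec assertion when a
// calendar unit was requested as the largest unit; the correct result
// is +59 minutes, since 59 minutes cannot round up to a larger unit.
```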
CDA: Nothing on the queue as of now.

PFC: All right. Then I would like to formally request consensus for the pull request that I mentioned earlier: [“Change `ZonedDateTime` difference method and Duration round/total to handle the daylight saving time case”](https://github.com/tc39/proposal-temporal/pull/3147).

CDA: You have support for the normative change from DLM.

PFC: Thank you. That’s especially relevant, since SpiderMonkey has the only implementation of this that’s currently shipping to the web. So that’s good.

DLM: Any other voices of support for this normative change? It’d be cool if we could get at least one additional person to support.

JSL: I support it, but I would rather have one of the implementers speak up.

CDA: OFR supports, +1. Great, thank you. JSL also expressed support. Do we have any objections? All right. You have consensus.

PFC: Thank you. I have put a summary for the notes in the slides, and since we didn’t have that much of a discussion, I think I can just copy it in. And that was a lot less time than I had requested, so I will give you all some time back. Thanks for your attention.

CDA: Great, thank you.

### Speaker's Summary of Key Points

* There are currently two implementations that pass 99% of the Test262 tests
* That's not the same as being 99% done, as there are gaps in the tests, but progress is steady and measurable
* A bugfix is necessary in `ZonedDateTime` after a regression introduced while trying to reduce extra user code calls

### Conclusion

The committee reached consensus on the proposed normative change to `ZonedDateTime`.

## Continuation: Update on proposal-module-global (again)

Presenter: ZB Tenerowicz (ZTZ) and Kris Kowal (KKL)

* [proposal](https://github.com/endojs/proposal-module-global)
* [slides](https://github.com/endojs/proposal-module-global/blob/main/slides/2025-09-stage1-update.pdf)

CDA: Pulling the queue up and populating.
If we want to get a head start, though, the current item on the queue—which I’m not sure if we were through—is OFR talking about performance.

MM: OFR, can you restate your concerns or comments?

OFR: I’m not sure if I have much to add. I want to say I’m not sure it would be much faster. There are many performance cliffs that are likely to occur: we already have performance cliffs around cross-realm objects, and probably very similar issues will present themselves when we have multiple global objects. But it’s all speculation at this point.

KKL: Well, to speak to this point, I invite PHE to respond. PHE at Moddable does an embedded implementation of compartments.

PHE: Since the topic of compartment performance came up, I wanted to provide some feedback based on our experience. Moddable implements compartments in our XS engine, and we use them regularly as part of things that we deploy. And we love compartments, because in fact they’re as close to zero overhead as we can find. On embedded systems, where performance is always a challenge, if they were having a significant impact, they would just go away. But in fact, in terms of performance, there’s nothing more than getting the compartment created and whatever overhead we choose to add in terms of getting involved in module resolution; there’s nothing in the mechanism that makes it expensive, it’s just what we choose to introduce. And in terms of memory… it’s sort of funny, a typical compartment in the wild takes about 3 kilobytes of memory to set up. We’re disappointed it’s that big, but actually it’s quite tolerable, and we hope at some point to be able to optimize that down. But, yeah, we think compartments are great. They’re very, very lightweight at runtime—effectively zero beyond putting them in place and whatever overhead you introduce on module resolution.
So we feel really good about that aspect of the design.

MM: Let me also mention, historically, that the overall shape of the Compartment proposal started with Moddable. We were working with Moddable, trying together to solve the problems that compartments ended up solving, and Moddable proposed this with an eye to all of the efficiency concerns—granted, in a different engine than the JITed engines, anticipating something on the queue.

KKL: But also to respond in the affirmative: Moddable’s compartment globals are not web globals—they don’t have the internal slots of web globals. There’s some overhead that we expect to be slightly higher than Moddable experiences, but we also expect that the tolerable overhead will be higher as well.

KM: I don’t want to claim to speak for OFR, but at least for us, there are all kinds of JIT optimizations that are probably more fragile to rely on in a multi-global system. For example, removing iterators and all kinds of overheads of those things in the JIT relies highly on specialized things that attach to the particular global object, and if you try to mix global objects, you’re more likely to break those assumptions. And I think a decent amount of work would have to go into understanding whether the primordials of the global objects are the same, and there would probably be all kinds of weird bugs from the fact that we’ll try to throw exceptions and take the wrong global object, because right now the global objects are tied to the primordial intrinsics. It’s free to just reload the global object from the intrinsics and use that to throw an exception, but then that might cause other weird problems. And that’s aside from the perf issue—these things are very delicate in some respects. Ideally they would be very principled, but within the constraints of JavaScript, trying to get something that works a lot of the time, they are not.
Is that kind of what you’re getting at?

OFR: I mean, this was definitely part of the thing that I was thinking about. But even at runtime startup: maybe this global or this compartment is not something that you can just, for example, load from a snapshot, because it somehow mixes an existing realm with some new things, so it might be slower to start up than a new realm that is completely empty, for example. So it’s not clear to me that this would be cheap. I can well imagine that it’s simple and lightweight to implement in a runtime that doesn’t do a lot of speculative optimizations, that I can see. I’m not sure, in the context of V8, that it would be something that is as fast as running normal JavaScript code.

KKL: I can say those of us using compartments are tolerating the slowdown of being forced onto the no-JIT path because of the nature of the—we’re using `with` blocks and a lot of the—

OFR: Okay. But I was making this comment because you were explicitly stating performance as the motivation.

ZTZ: Can I respond to that just a little bit real quick? The ultimate motivation is making it possible to optimize this. If we tried isolating with multiple realms, there are some optimizations that would never be possible. This proposal puts us in the situation where, over time, all of that can be optimized away, and I’m hopeful of that because we had a case where TypedArrays were observably 50 times slower for us, and it turned out that if a TypedArray had a frozen prototype—even freezing `Object.prototype` was enough to trigger this—functions using TypedArrays in a loop were being treated as hard to optimize, and V8 would bail out of optimizing them. But a very small fix was able to fix that.
And I believe that over time we will accumulate enough of those fixes that compartmentalized code will not suffer a performance impact that’s noticeable to your application. And if we attempted to solve the supply-chain-motivated isolation in any other way, I don’t believe it would ultimately be a reachable goal to stop paying the performance overhead in the years to come.

KKL: To restate: we have heard and understand that it will be tricky to implement this in a web engine.

MAH: I’m going to bring up my first comment really quick. There seems to be a lot of talk about what the global object being different might mean. I’d like to highlight that this is mostly about the global scope being different. I don’t claim to know exactly what web engines do to attach things to the global object, but I suspect in most cases there can remain a single global object for the realm, and really what this is doing is introducing a new global scope. Which hopefully should mean a much less scary implementation. My topic here was actually in answer to JSL—I think his reply was actually first. His question is: can we reconcile compartments and ShadowRealm, and why can’t it be one solution? I think compartments and ShadowRealm really work at two different layers. In my opinion, it’s like asking why we can’t reconcile VMs and containers: they’re just two completely different things that provide different levels of isolation, and they work differently. So I don’t think there’s really a way to reconcile them. They’re just two different types of technologies and approaches.

MM: I feel like we have addressed—maybe not to everybody’s satisfaction, but addressed—“if you have ShadowRealms, why do you need compartments” over and over again. The other question I don’t think we have addressed well. In fact, most things that you can do with ShadowRealms, you can accomplish with compartments.
And the place that ShadowRealms came from is that initially we were collaborating with Salesforce on compartments, and they were planning to use compartments for their plug-in architecture, and they ran into too much existing code—for which they have lots of existing third-party plug-ins—that was not simply lockdown-able, maybe because it mutates primordials, probably, in a way that is not easy to separate. For whatever reason, those plug-ins couldn’t be locked down but still needed to be isolated from other programs and from the plug-in as a whole, and not isolated from their dependencies. Or rather, Salesforce didn’t feel like they needed to address that. + +MM: So they invented the ShadowRealm in order to have a compartment-like mechanism for containing essentially an application as a whole, or a plug-in as a whole. + +CDA: We have only a few minutes remaining. I’m going to ask that the presenters, ZTZ and KKL, take a look at the queue and see if there are any topics to cherry-pick, or just continue going down in order. + +KKL: Nothing in particular. + +MM: I want to address the Maginot line comment. + +MAH: I would like to address that too. + +MM: KM, we’ve been using compartments together with harden and lockdown for a very long time. In fact, essentially this architecture we have been using since 2009, if you count Google Caja as the first implementation of hardened JavaScript, and it’s not a Maginot line, it’s actually secure. We had multiple in-depth reviews, some of which we published, and we have had formal analyses of subsets sufficient to talk about the principles, and the formal analyses held up. So compartments by themselves, if their purpose were security, would be a complete failure. What they do is, coupled with those other things, they do provide security. So they enable security. And the Maginot line might be useful for other things in the absence of attack, which is why it’s a separate proposal.
+ +KM: Can I reply about the intention of my comment? I guess it ties into the related topic: how do you verify that you haven’t leaked data when you have 300,000 dependencies and no callable boundary? I mean, you have reduced your intractable problem of mutable globals to the probably equally—I mean, I would assume equally—intractable problem of validating every call between every dependency. + +KKL: You are correct that the edges become interesting. The way it does reduce things is that you go from a problem where the interactions are a full clique of all of your thousands of dependencies upon all of your thousands of dependencies, to one where the interesting edges are edges of explicit dependence. But it’s also that, because compartments can’t arbitrarily reach other compartments—they are linked out of band based on the explicit dependency graph—in a way that shared globals do not have; shared globals have a greater degree of freedom. The thing that is interesting about it is that if you have an edge where you have failed to audit the communication edge, or failed to carefully manage a communication edge, then you’re back to table stakes along that one edge and not worse off. And also those dependencies stand in a better position to defend themselves if they choose to, right? You can create hardened packages where you defend at every package line, and we do in a lot of cases. + +MM: Yeah. In fact, all of Agoric’s software running on blockchain and elsewhere is all built this way. The individual components are written defensively, with object-to-object contact carrying what we believe is understandable risk. That’s held up in the security reviews, including outside security audits. I want to retreat from something that KKL has been saying, because I think we need to distinguish confidentiality and integrity.
Confidentiality can leak through side channels, and it’s only in very special cases that we can protect against side channels; integrity is where we can make the strong claims without having to worry about side channels. Side channels cannot endanger integrity. + +CDA: All right. We are past time. I did capture the queue before I cleared it out. We will have more time during this meeting tomorrow to continue. So thank you everyone. + +### Speaker's Summary of Key Points + +(summary of original topic covers all continuations) + +### Conclusion + +(conclusion of original topic covers all continuations) + +## How Websites are Put Together + +Presenter: Kevin Gibbons (KG) + +* [slides](https://docs.google.com/presentation/d/1vEYoTix5yHN3vc1cXZQCk87P8d_GqL0bYsOpoEcnjuo/edit?usp=sharing) + +KG: This is an informative talk based on my own personal experience. I will not be doing much—or anything—asking for the committee’s consensus, but I will at the very end pitch an opinion that I think is relevant to the committee and the previous talk. But this is not a consensus-seeking item. I’m mostly just talking from my own experience and expressing my own preferences here. We’ll start with a little background, then talk about why I’m talking about this, and then we’ll get into the meat of the thing. So first, general background: there are a lot of page loads of websites. Many people are loading many web pages. It’s back-of-the-envelope math, but I got on the order of 100 billion page loads a day—that is a large number. That might be off by an order of magnitude in either direction, but it’s a very large number. Almost all of those take place on the top few hundred websites. Of course many of them are specifically Google or Facebook or Instagram. Much of that usage has shifted to mobile apps, but in terms of websites, the top few hundred do capture almost all of humans’ time. Very large number.
And as such, our decisions about JavaScript primarily touch people’s lives through how those decisions affect those websites. Just to give a rough idea of scale: if we add a feature that is used on one page load in a hundred, and it has the benefit of speeding up those loads by one millisecond, multiply those numbers to get the total time saved by the feature. There are many other ways that decisions affect websites and other consumers, but I just want to keep in mind the sense of scale for this part of how JavaScript affects the world. + +KG: I think it’s worth understanding how those websites are put together. I want to be very clear that I am not saying that these are the only consumers that we should consider. I’m only saying that these are consumers that we should always consider in how the design of the language is going to affect people in the world. So, it happens that I have a lot of experience with this, because my day job, as it were, involves building an application that is integrated into other websites, and—because of the amount of money that we charge—is primarily integrated into the larger websites: these relatively popular websites, major banks and retailers and airlines and that sort of thing. Because we’re directly integrating with the websites, I’m very often in the weeds poking around how the websites are assembled and talking to teams at the sites and so on. But obviously I’m not talking to very many of the teams; that doesn’t scale. I’ve talked to a few of them when issues have arisen, and I’ve looked at the process by which these sites are constructed in a large number of cases. Again, I’m only talking from my own experience. I’m not making claims that this is how everything works; I’m giving colour on how things are generally done, in my experience. That’s the background. That’s why I think this is worth talking about. + +KG: Let’s get into the meat of it. How are these sites actually built?
So the biggest thing is that any of these large sites—commercial sites, I should say; this doesn’t necessarily apply to Wikipedia—are assembled by multiple teams, and not just multiple front end teams but back end teams working at different parts of the stack and different parts of the application. Also, they usually—not always, but usually—are including scripts from different companies. These can be served either as first party or third party scripts, which is to say same-origin or cross-origin. In a lot of cases the scripts will be provided by other companies but served as first party, because the properties that you get by being a first party script are often necessary for the functionality of those scripts, and it’s not just ads where this comes up. There’s core functionality that is outsourced—my own product, the product that I work on, is not an advertising product in any way; it provides defense against certain kinds of attacks against websites. There are lots and lots of different kinds of functionality that get outsourced to other companies. These pages often—usually, in my experience—have scripts that are provided not just by multiple internal teams but by multiple different companies. And these teams are not particularly coordinating. Even the teams within a company are frequently not able to coordinate effectively, because they’re in different parts of the organization or just because of internal processes, but certainly there is almost no coordination possible externally. For the scripts provided by other companies, it is not possible to actually coordinate directly between the companies that are making these scripts and the companies that are executing them; the scale just isn’t there.
+ +KG: Like, my team has half a dozen engineers working on the script that is on—I can’t name numbers, but many websites, where it is simply infeasible to talk to teams at all of the websites. This is the common case. It’s just not feasible to actually coordinate between all of the people who are involved in assembling a website and these scripts. These scripts that are on the page are written by companies with wildly different standards. Some people might be building little jQuery snippets, others are doing defense against prototype pollution, and others are heavily using eval. The standards are different. They’re included on the page in what is, from the perspective of the script, random order—not literally random: people made decisions that produced the ordering, but those decisions are based not on the needs of the scripts but on the needs of the people assembling the web application and the parts of the stack that they operate. Sometimes it is literally random, in the sense that the scripts are loaded async and evaluate in the order in which they finish loading. These scripts are quite old—sometimes years and sometimes decades old. On major websites—look at [Amazon.com](http://Amazon.com) or whatever—even the commercially provided scripts are often years old; places like banks especially are extremely conservative about updating scripts on the page even when newer versions are available. So the scripts are not exactly frozen in time, in the sense that people are updating some of them—you can’t assume nothing will ever change—but you can’t assume that any particular thing will be updated, or updated in any particular time frame. Also, these scripts are frequently patching built-ins. There are a lot of reasons to do this. Polyfilling is one reason—certainly the most common reason, but far from the only one; I patch a lot in my scripts. Lots of other commercial products are patching built-ins as well.
This is just a basically necessary part of how these scripts integrate with each other and with web pages. And this—apart from polyfills—usually, but not always, refers to web platform stuff: fetch is popular, and XMLHttpRequest, and form submit, and these things. It is also relatively common to patch function toString or error stack and other stuff, including intrinsics, where that is helpful. Notably, these patches happen throughout the page’s lifetime. + +KG: It is not the case that you can reasonably say “we’re done now, scripts have loaded”, because scripts will continue to be loaded throughout the page’s lifetime, sometimes in response to user behavior. Just to give an example, you might have a thing where, if you hover over a product, that will fetch not just the product information but maybe some additional script that provides functionality for the little pop-up that shows up—where you wouldn’t want to pay the cost of loading the script that does the little React UI in the picture-in-picture if you weren’t actually ever going to mouse over a product. So that script will be loaded dynamically in response to user behavior. + +KG: Also, of course, the lack of coordination in general means that scripts—which are basically running with no particular knowledge of their relationship to each other—can never break any built-in functionality and can never make any assumptions about what built-in functionality other scripts are relying on. You just don’t know where you’re running. Indeed, you can’t assume you will be the only people patching some built-in. You can’t replace fetch with something backed by XMLHttpRequest if someone has previously patched fetch, because now you’ve clobbered their changes. The only thing you can do is patch fetch so that it does something else and then defers to whatever the previous implementation of fetch was, so it can continue to be patched by whoever was there previously.
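The cooperative patching discipline KG describes—wrap whatever implementation is already there and defer to it, never assuming you are the only patcher—looks roughly like this sketch. The `patchFetch` helper, vendor labels, and stub global are all illustrative, not from the talk; a stub is used so no network is involved.

```javascript
// Hypothetical sketch of cooperative built-in patching: each uncoordinated
// script wraps the *current* fetch (native or already-patched) and defers to it.
const observed = [];

function patchFetch(globalLike, label) {
  const previous = globalLike.fetch; // whatever was there before us
  globalLike.fetch = function fetch(...args) {
    observed.push(label);              // this script's added behavior
    return previous.apply(this, args); // defer to the prior implementation
  };
}

// A stub "global" so the sketch is self-contained.
const page = { fetch: (url) => `response for ${url}` };

patchFetch(page, "vendorA"); // one script patches
patchFetch(page, "vendorB"); // a second, uncoordinated script patches later

const result = page.fetch("/api/cart");
console.log(result);   // → "response for /api/cart"
console.log(observed); // → ["vendorB", "vendorA"] — the last patch runs first
```

Each wrapper adds behavior and then delegates, so patches compose in reverse order of installation and the original implementation still runs at the bottom of the chain.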
It is very common to run into scenarios where multiple different scripts are patching the same built-in, especially for things like fetch. And honestly it’s astonishing that it works as well as it does. It basically only works because everyone has learned to more or less respect the discipline of not breaking anything, and of not making assumptions about which things other people are going to rely on not being broken. And people learn this discipline, often the hard way. I’ve done it—I’m sure most people integrating with these websites have done it—broken sites because I made an assumption that I could touch something, and that wasn’t a valid assumption. I’ve assumed that I could trust that if someone was doing an XMLHttpRequest open, the corresponding later call would come from the same realm as the XMLHttpRequest, and that was wrong—people reach into an iframe and call its stock methods. You have to do a perfect emulation, and you find out the hard way if you’re breaking any built-in functionality. Well, you won’t necessarily find this out the hard way, but you often will. The only reason that this works is because people find out and then avoid breaking built-in functionality. Okay. So that’s the background, the bit I wanted to say: these are things to keep in mind about how large commercial websites—which, again, are the majority of how JavaScript is affecting people’s lives—are put together. + +KG: But then I want to go over some things that I think are relevant implications for the committee. The biggest one is that you can’t really have—I shouldn’t say can’t have—mechanisms that rely on global coordination among scripts on the page are basically not usable on this kind of site, for all of the foregoing reasons: the people maintaining the scripts aren’t necessarily at the same companies, and anything which requires them to coordinate just isn’t happening.
You might, if you are lucky, be able to do some coordination that is mediated by sales teams, but often you won’t, because the scale makes that impossible. Similarly, a script can’t know its order on the page. The scripts are being put on the page by different teams working at different parts of the stack. My script is often inserted by a reverse proxy that runs just before the page is served to users, and we can often—not always—assume that our script will be first on the page, but no one else can assume they will be first on the page, even if they were at the time they were injected, because our script was injected by a different piece of hardware in a different data centre, or at least a different part of the stack, wherever it happens to be. It is for this reason basically not feasible to have a globally consistent order of scripts, and scripts can’t generally assume they are running first or last or anything like that. As a consequence, there’s no point at which you can say polyfilling is done. This is just not knowable for any script on any of these kinds of pages. + +KG: Similarly, if you need coordination between front end and back end, it’s—I will not say not happening; it can happen in some cases for applications which particularly need it. But in general, these teams tend to be pretty siloed, even within a company, setting aside the issue of third party scripts that are included on the page. Even just among the people building the page, the front end team and the back end team are often pretty far apart, and if the front end team needs the back end to make a change for something to work, odds are it just isn’t going to happen. There are some mechanisms in the web platform that rely on this kind of thing. CSP is a classic example. It provides a means of limiting what scripts can execute on the page, among other things.
But that mechanism as ideally designed relies on being able to put hashes of every script that will run in the header for the page, and that’s not happening; or to put a nonce on every script that is run, and that is not happening. What everyone has to do in practice is rely on exceptions allowing any script from a particular host. You hope this is just the first party host. That tends not to happen, because these things are getting assembled from many different scripts from many different sources. And so in practice, what most sites end up doing with CSP is allowing scripts to run from a large list of origins, and in practice some of those origins tend to allow anyone to upload any kind of script. This just completely bypasses the mechanism, but it’s the only thing that people can do, because coordination isn’t feasible. Because CSP was designed in this way and requires coordination, it doesn’t work in practice. + +KG: I should say this doesn’t apply to everyone. In particular, it doesn’t apply to Google. Google and Facebook are organizations which are capable of having some teams which dictate how everyone in the org is building the script or the website, and these applications tend not to include much in the way of third party scripts. So at those two companies, you can do this kind of thing. And not much of anywhere else. I think in part this is why the web platform keeps growing these mechanisms that no one except Google can use—CSP, that sort of thing—because they are designed by teams who assume that websites are put together like Google’s are, and that assumption is false, but it looks like it’s true if you’re at Google. So I’m not saying that this is never possible. It’s possible if you’re Google; it’s possible if you’re a small shop. It’s not possible if you are one of these large-scale but not literally-the-biggest-websites-in-the-world places. This kind of coordination is generally just not feasible, or at the very least needs to be extremely limited.
+ +KG: I want to caveat this further: when there are strictly additive mechanisms, sometimes this can happen. For example, if you need the website to be served with an additional header, but the header doesn’t break anything else, you can often make it happen. The frontend team that needs it will communicate the need to the back end, and the back end will add that header, and this doesn’t break anyone—and then if the front end ever stops needing it, they won’t tell the back end they stopped needing it. That tends to work out, as long as it is the first party that needs the header. If you’re building a library, you’re not in a position to say “I will talk to the backend team”, because the backend team is at a completely different company. Even then there are restrictions. For this reason you see low use of SharedArrayBuffer, because it requires headers. It is unfortunate, because it is the best mechanism we have to make websites faster, but no one can use it because it requires this. + +KG: Another consequence is that scripts are not going to start freezing built-ins. It won’t happen. Scripts can’t know when it’s safe to freeze built-ins. Like I said, scripts continue to be loaded, executing polyfills and patches, over the lifetime of the page. And engines won’t start optimizing for that case. It has come up a few times in the past that people suggested that maybe we shouldn’t bother specifying an optimization, because scripts can freeze built-ins and engines can have a fast path for when the built-ins are unmodified. I don’t think that will happen, because scripts on large commercial sites—which are the primary consumer in terms of hours of human time—can’t freeze built-ins, and so engines are probably not going to start optimizing for that case. I’m not an engine implementer and can’t speak for them; this is my own speculation. That’s my understanding of the state of it.
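For concreteness, a script-level freeze of the kind being discussed would look something like the sketch below (the specific choice of the array iterator is illustrative). The problem KG identifies is that once any script runs this, every later polyfill or patch of iteration is rejected, and no script on a multi-vendor page can know when that is safe.

```javascript
// Sketch of "freeze the built-ins" done at script level. After this runs,
// later attempts to patch array iteration silently fail (or throw in strict
// mode) — a one-way switch that would need global coordination to be safe.
const ArrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());
Object.freeze(ArrayIteratorPrototype);
Object.defineProperty(Array.prototype, Symbol.iterator, {
  value: Array.prototype[Symbol.iterator],
  writable: false,
  configurable: false,
});

// Iteration itself keeps working…
console.log([...[1, 2, 3]]); // → [1, 2, 3]

// …but a later, uncoordinated script's patch is now rejected.
const original = Array.prototype[Symbol.iterator];
try {
  Array.prototype[Symbol.iterator] = function patched() {};
} catch (e) {
  // strict mode: TypeError on assigning to a non-writable property
}
console.log(Array.prototype[Symbol.iterator] === original); // → true
```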
+ +KG: Similarly, one-way global “mode switch” flags are not ever going to be widely used on this kind of site. We talked some in the previous presentation about the lockdown mode that would essentially be this, freezing all of the globals. It’s conceivable that you could do some sort of lockdown within a ShadowRealm or something, because that’s something that doesn’t affect other scripts on the page. But at the top level, where it would affect all of the scripts on the page, it’s basically not feasible, because there’s no way to coordinate among all of the scripts on the page to decide at what point it’s safe to enable this mode. The only kinds of mode switches that are safe are those which don’t break any functionality of any script—and at that point it can just be on by default. If it just works for everyone, there’s no particular reason to make it a mode switch. So, yeah, these sorts of global coordination switches I think are largely infeasible for these kinds of websites. + +KG: These are, I think, the relevant implications for the committee. This last slide is my own opinion, which is that I don’t think we should be spending our time on features that are not usable by this kind of website. Now, I want to be careful with what I’m claiming. I’m not claiming this is the only JavaScript that matters. I use JavaScript outside of these contexts a great deal. I’m just saying that I think if we are going to be building something, and putting in all of our time and the committee’s time and the engines’ time, and imposing additional costs on runtimes for users, then it should be possible to use those things under the foregoing conditions. That is how almost all JavaScript is executed—again, in terms of contexts where humans are waiting for that JavaScript to execute or are immediately affected by the results. And I think that we should generally be trying to do things which are usable under the conditions of no global coordination.
This hasn’t been a problem historically. When we have built things in the past, we have generally assumed that they’re not going to require global coordination. You can use a proxy within your own script without that affecting any other script. The web platform has been less good about this. An example is import maps. Import maps on the web platform provide a way of mapping module specifiers, and it was initially specified that there could be only one import map on the entire page. And this, again, was basically unusable. That’s now been changed so that scripts can load import maps at runtime without clobbering the import maps of other scripts, and I think this was basically a necessary change in order for import maps to be readily adopted. TC39 has generally not done that sort of thing. I don’t know whether this is happenstance or something people deliberately tried to maintain, but it is generally the case that the features we add are usable within one script without requiring the buy-in of other scripts on the page, or knowing that the script is running first or last or anything like that. And my personal opinion is that I think that we should continue to spend our time on those kinds of features and not start work on different kinds of features which require global coordination, because those things will in practice—based on my experience—never be usable on the large commercial sites which I have been talking about throughout this presentation. + +KG: Alright, thanks for coming to my talk. I am happy to discuss any of the foregoing points, or—I also want to open up, if people have other questions about how these websites are put together, I can try to answer from my own experience. + +CDA: First order of business is we need an additional note taker to help out with the notes, as we are losing ZTZ. + +RBN: I can take notes. + +CDA: Thank you. + +CM: I think this whole presentation is illuminating.
I think you actually make a very compelling argument for things like the compartment proposal, where part of the whole point is to be able to isolate pieces from each other to minimize the amounts of coordination that is needed between the pieces. A lot of the issues that have been talked about today, things like supply chain attacks and the very idea that you have got not just this big NPM ecosystem but in fact within the context of a single website you have a whole bunch of different actors with different interests and different engineering practices, different policies and different chains of accountability, means we need better tools for helping these folks to keep from stepping on each other’s toes. And I think perhaps those of us who have been advocating for things like hardened JavaScript and compartments and all of that stuff can be making our case a bit better. But to my sensibilities all of that is really fundamentally motivated by these very concerns that you are articulating. And so I don’t know what to say beyond that, but I think you have made what I think is a very compelling argument for a lot of kinds of measures that some of us have been advocating for some time. + +KG: Yeah. I think the mechanisms which will allow us to have less coordination—or perhaps easier to say, mechanisms which allow scripts to run without caring as much about what other scripts are running, are generally positive things to the language. + +CDA: I’m next on the queue. I want to be very careful here because a lot of people put a lot of time and effort into proposals, but I’m wondering, are there any in particular that you would flag as having been not prudent for the committee to investigate or advance? 
+ +KG: So I guess I should say the genesis of the talk is that at the previous meeting I gave a presentation where I was suggesting that the committee should consider locking down `Array.prototype[Symbol.iterator]` and `ArrayIterator`, specifying these things to be non-writable and non-configurable to enable optimization in engines. I got feedback from some people that instead of doing it in the specification, we should start saying scripts can do that, or perhaps we can have some sort of flag that would enable all of the built-ins to be frozen, and engines can optimize for that case. And much of the point of this presentation is that I don’t think that that’s going to happen, and I don’t think we should be designing for a world in which that’s going to happen. We should instead be designing things so that we can, at the language level, freeze ArrayIterator, because scripts basically can’t. That said, the idea of having a function that you could call that would freeze all of the built-ins has occasionally come up. I don’t want to say that isn’t worth pursuing. I want to say I don’t think it’s worth pursuing unless it comes along with some other mechanism that allows you to, for example, only do that in a compartment—or in a ShadowRealm or something—because I don’t think that a “freeze all of the built-ins” will ever be usable in the global scope on almost any major website. + +CDA: Thank you. Your slides had something about whether TC39 was the right venue for certain things. In this example, is this?—because it sounds like you’re saying this wouldn’t be good to land regardless, and not just a choice of standards venue. + +KG: No, I mean, I think if someone wants to propose that for Node and Node wants to do it, that’s just fine. Node has a frozen intrinsics flag, which I think is a great thing for Node to do.
If other runtimes want to standardize on that flag, then they should feel free to do so. I think that would be a beneficial thing for servers, I just don’t think it makes sense as part of the language because it is not going to be beneficial to this like major class of consumers of the language. + +CDA: Okay. I understand. Thank you. + +REK: Yes, hi. Thanks for this presentation. It was very illuminating for me since I’m working at MetaMask and the kind of website that I work with is very different from the one that Kevin is describing in his presentation. I guess if I understand your point, your thesis is—and please correct me if I got it wrong—the committee shouldn’t spend its time on things that aren’t practically usable for this class of consumer, which is a stronger statement than just saying we shouldn’t harm these consumers. The latter seems like table stakes to me because obviously they’re a major class of consumer and it’s part of the backward compatibility mandate that we shouldn’t harm them. I guess this is kind of a meta question about how the committee views its work and on what time scale we operate, because it seems like this class of website is something that has emerged organically in the real world because it’s useful to build this class of websites in this particular way. But obviously these websites didn’t exist 30 years ago and I’m wondering do we expect them to exist 30 years from now? And on what time scale are we considering the implications of the things that we put into the language in committee? We often make the joke that JavaScript may outlast human civilization—which, maybe or maybe not and I guess—but either way websites and web users will have a long time to adopt anything we introduce to the language. So like, where do we draw the line there? Are we only trying to serve existing users and enshrine existing path-dependencies? Or are we at all interested in trying to dislodge the status quo and showing people new ways of doing things? 
How do you view that question? + +KG: My expectation is that if there are websites in 30 years, this will still be how they’re put together, no matter what new functionality we add to the language. This mechanism for assembling websites arises out of the business process; it isn’t so much a matter of technical decisions. As a practical matter, you cannot coordinate all of the people who are involved in creating the content of the website, and anything which requires all of the people involved in building a website to coordinate cannot happen. That’s really the main reason I don’t think it’s a good use of the committee’s time to build something for a world that I don’t think is going to come to pass. I think if we are building a feature that is usable by a script on the page without the coordination of other scripts—to improve that script, or sandbox its dependencies—that’s great; I have no problem with doing that. It is the things that would require global coordination that I don’t think are going to happen. Anything that can be incrementally adopted I would continue to consider in scope. + +REK: Okay. Are you categorically opposed to proposals that are not usable by these websites but don’t actively harm those websites or their users—or is it more “I would rather we didn’t”, or “I don’t personally care”? + +KG: I am personally fairly strongly opposed. I don’t think that’s a good use of our time. And in practice, the primary consequence—in terms of engineering hours that are spent—of the committee advancing a proposal is that the mainline production implementations do a lot of work to get something integrated with their systems, and anything that requires deep integration with TypedArrays and proxies and these things leads to CVEs and performance impacts and various other effects on users.
Even the most trivial features have some effect, and download size is something implementations care about. I don’t think it makes sense to ask for those costs to be paid if it is not going to benefit the primary class of websites where people are spending their time. Now, again, this is just my opinion. I’m not asking the committee to adopt this position, but it will be a hard sell for me personally to do any of these things.
+
+REK: Right. I see. I don’t know that I agree with your opinion, but I understand your opinion. Thanks.
+
+PFC: I was muted. I think Kevin covered my point.
+
+JSL: Just throughout my life and career I have heard a lot of people say: look at this new technology, it will be so great, it will replace the old one, and the old one will go away. CJS will be here forever, HTTP 1.0 will be here forever, COBOL will probably be here forever, and HTTP websites will be here forever. If we make something good enough, then certainly the preponderance of users might migrate to it, or use it for new things, but the old things never go away; humans don’t seem to have a great track record with migrating en masse from the crappy thing to the better thing. There’s also the example of cell networks in the U.S. versus Africa: they have better cell networks because they skipped the middle period that we have to migrate away from. This is not unique to software. For any individual, I would suggest that your life will be happier if you … yourself of the notion that we can ever migrate from A to B completely.
+
+ACE: So I’m wondering how much I should read into what you’re saying, Kevin, in terms of proposals and the implications for polyfills. Are you advocating that we should seriously consider compromising—for want of a better word—on a proposal if it makes the polyfill much easier to roll out? If it’s easier to have, say, ten versions of the polyfill on the page, and that not be an issue? Because, while ideally the polyfill only lives for a little period of time for people who want to early adopt, in reality lots of teams want to early adopt and use features before they’re ready, and the polyfill lives forever. The spec should technically live longer than the polyfill, but are you saying we should compromise to make the polyfills just easier?
+
+KG: No, I don’t think so. Also, I don’t think that’s an accurate description of polyfills. Most modern polyfills for most features don’t actually end up replacing the native thing, because they feature detect: you include the script unconditionally, but it doesn’t actually patch the thing unless that’s necessary. So the polyfill code itself will continue to be shipped, but in practice the polyfill won’t run. I don’t think it’s important. If that weren’t the case, I would be more inclined to say we should care more about polyfillability. I don’t think we should care zero about polyfillability—I know some have that opinion; we should care some amount, because it does come up in practice. But I think that, generally speaking, what happens is that the polyfill code lives forever but the polyfill itself isn’t running, so it isn’t a big deal, and people upgrade.
+
+JSL: Sorry to jump in. That is how most language polyfills work. I don’t have enough experience to know if that’s the case for web polyfills.
+
+ACE: The issue we had in the past is that that isn’t what code is doing in terms of, you know—
+
+JSL: Polyfills from the previous decade may not, but polyfills written or upgraded in the last decade match Kevin’s description.
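For reference, the feature-detecting pattern KG describes—polyfill code shipped unconditionally, patching only when the native feature is missing—can be sketched roughly like this. `Array.prototype.at` is used purely as an example, and the details are illustrative rather than any particular polyfill’s code:

```javascript
// The script is always downloaded and executed, but on modern engines
// the early return means the patch itself never runs.
(function installAtPolyfill() {
  if (typeof Array.prototype.at === "function") {
    return; // native implementation present; nothing to patch
  }
  Object.defineProperty(Array.prototype, "at", {
    configurable: true,
    writable: true,
    value: function at(index) {
      const n = Math.trunc(index) || 0;
      const k = n < 0 ? this.length + n : n;
      return k >= 0 && k < this.length ? this[k] : undefined;
    },
  });
})();
```

On an engine with native `Array.prototype.at`, the only cost is downloading and running the feature check, which is the point of KG’s observation.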
Per my earlier point, though, the earlier ones will still be there forever.
+
+MAH: Kevin, do you think the committee should have spent time on something like native modules, given that native modules even today are still not used at runtime by most of these large websites? I’m trying to understand what we should spend time on, because I think even at the time we understood that native modules would not be adopted by large existing applications.
+
+KG: That’s a good question. I guess I will leave it to browsers to speak to whether they think that the effort that they have spent, and continue to spend, is worth it. I also disagree that these things are not adopted. They are not widely adopted, but because it is possible to have a native module that interoperates with non-module scripts on the same page, you do see that case, and it is creeping up. That said, from what I recall of the conversations at the time, we were not pessimistic. The thing that happened is worse than the outcome I, at least, was anticipating back in 2015. At the time, there was excitement about HTTP/2 push and all of the other mechanisms that were expected to decrease the overhead of these things, and HTTP/2 push basically failed for other reasons—well, related reasons. At the time I was not as pessimistic as the current outcome warranted. If in 2015 you had told me that this was going to be the state going forward—that native modules were not going to see much use on the web, and were still going to have considerably worse performance in production—I probably would have thought that our time would have been better spent elsewhere, yes.
+
+MAH: All right.
I am still distressed that we would say let’s not really invest in the future, and in features that new applications might be able to use, just because there might be no hope or little hope of existing large applications adopting them. As JHD also said, existing applications and existing implementations are not going away. But new applications are built, and those might be able to benefit from those features. It just seems really sad to give up on investing time in these new features if they’re not immediately applicable to the large cohort of websites that exists today.
+
+KG: Again, I don’t think it’s just not immediately applicable. I am pretty sure that in 30 years, no matter what features we add to the language, websites will continue to be built in the way that I have described.
+
+MAH: Not all, but…
+
+WH: This is a quintessential example of “shipping the org chart”. That effect is familiar to anyone who’s been around the industry long enough or seen examples of JavaScript out in the wild. Is there anything that surprised you?
+
+KG: The main thing I end up being surprised by is just what things people get up to—the amount that ??? law continues to hold. I mentioned an example briefly; in more detail: my script patched XHR so that we would add certain headers under certain conditions. That’s fairly natural to do, given how it works: we had to patch the constructor and open and send. The initial implementation assumed that if you call XHR open, you will call XHR send with the `send` from the same realm. That assumption is false. This broke in production, because there was a fairly widely used library which, under some circumstances, would call `XMLHttpRequest.prototype.send` from an iframe on an XHR from a different frame, and we had to make it work, because the website has to work.
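For illustration, the kind of monkey-patching KG describes, and the cross-realm assumption that broke, can be sketched like this. `FakeXHR`, the header name, and the patch logic are all hypothetical stand-ins rather than KG’s actual code; a minimal fake class is used so the sketch runs outside a browser:

```javascript
// Minimal stand-in for XMLHttpRequest so the sketch runs outside a browser.
class FakeXHR {
  open(method, url) { this.opened = [method, url]; }
  setRequestHeader(name, value) { (this.headers ??= {})[name] = value; }
  send(body) { this.sent = body ?? null; }
}

// The patch: record state in open(), read it back in send().
const pending = new WeakMap();
const origOpen = FakeXHR.prototype.open;
const origSend = FakeXHR.prototype.send;

FakeXHR.prototype.open = function open(method, url) {
  pending.set(this, { method, url }); // assumes open() ran through *this* patch
  return origOpen.call(this, method, url);
};

FakeXHR.prototype.send = function send(body) {
  const info = pending.get(this);
  // The broken assumption KG describes: a library may invoke another
  // realm's (unpatched) prototype method on this realm's instance, e.g.
  //   otherRealmProto.send.call(xhr, body)
  // in which case `info` is undefined here, and the patch must tolerate it.
  if (info) {
    this.setRequestHeader("X-Example-Header", "value"); // hypothetical header
  }
  return origSend.call(this, body);
};

const xhr = new FakeXHR();
xhr.open("GET", "https://example.com/");
xhr.send();
```

The fix in the scenario KG recounts was to drop the same-realm assumption entirely, since each realm has its own prototype objects and only this realm’s copies are patched.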
My main surprise is just how disciplined you need to be to make things work the way they are specified, and that, given the amount of discipline required, everything ends up working as well as it does. Every day I’m shocked that this house of cards is still standing.
+
+WH: I also have observations about how things like this can change. There are occasional phase shifts where entire languages and paradigms get replaced. For that to work, the replacement has to be able to happen incrementally, where your organization can switch to something better without making everybody else switch at the same time. This often happens via subsumption: if you can create, for example, something better and then just have it compile to JavaScript.
+
+KG: Or WASM now.
+
+WH: Yes, and HTTP is now rare compared with HTTPS because people could switch to HTTPS incrementally without having everyone else switch at the same time. That is an example of a successful migration.
+
+KG: I’m very much in favor of improvements to the language that can be adopted incrementally. I think compartments are a good example: you can have one script that has a compartment and isolates its own stuff, and to the extent that that is providing better security for the script, that is good, and maybe some day that allows us to get on the path where that is just the way things are done by everyone.
+
+WH: Now, one new way in which subsumption could happen in the next few years that wasn’t possible before is with AIs that can look at the whole massive script mess on a website and transform it into something different.
+
+KG: Well, the problem is, like I said, that the scripts are coming from different sources.
+
+WH: Yes. Until now it was impractical. AIs might make such coordination and replacement feasible.
+
+KG: My script is inserted by a different piece of hardware than most of the scripts on the page, and I don’t know where an AI could be sitting that would manage coordination between my script and the other scripts.
+
+WH: It’s an interesting question. We’ll find out the answer in a few years.
+
+KG: Yes.
+
+MM: So I think Kevin’s talk actually helps illustrate the difference between realms and compartments, and when you should use either one. Kevin, what should I call these components that are not coordinated with each other, written by different teams, and just kind of mashed together in the same page? Are you more comfortable with “mash-up” or “plug-in”?
+
+KG: Mash-up is better; “plug-in” makes me think of plug-in systems, which this isn’t.
+
+MM: Okay, mash-up. Each mash-up, as you already established, might want to internally protect itself from supply chain attacks. You can defend against many accidents using compartments by themselves; you cannot defend against attacks using compartments by themselves. But if the team behind the mash-up finds they are repeatedly being screwed by supply chain attacks, and they care, they might choose to put the bulk of the mash-up inside of a ShadowRealm, in which they run compartments with lockdown, to get actual safety against supply chain attacks.
+
+KG: I guess I’m unclear on what role the compartment is serving in this world.
+
+MM: The compartment is used within the mash-up. It enables the mash-up author—as in LavaMoat, and actually using LavaMoat—to give different packages different initial authority and different connectivity to each other, so that it can reason about their limited potential to do damage.
And I realize, in saying this, that our previous discussion about the audit burden is missing something important, which is that what all of these techniques are about is not isolation per se; it’s about enabling intended interaction while minimizing the risk of destructive interference. That’s minimizing, not eliminating. There’s still an audit burden everywhere, like I said, but we have found over and over again, looking at supply chain attacks in the context of full Hardened JavaScript, that Hardened JavaScript would have prevented many of these existing supply chain attacks. And that’s adequate if it continues to lower the overall risk burden. Internal to a mash-up, because the mash-up using a ShadowRealm is able to give itself an internal world in which it can do a lockdown, it can protect itself against the dependencies that it has. Without either ShadowRealm or compartment—let’s be very clear about how bad the supply chain risk is in the world you’re talking about—one dependency of any of these mash-ups might be a targeted attack on any of the other mash-ups.
+
+KG: Yes, I agree things are pretty bad. I have a question, which maybe you can answer offline so we can get through the queue before the end of the meeting. In the vision that you’re presenting, if the compartments are sufficient to handle this kind of coordination between the modules within the mash-up, I don’t understand what role the ShadowRealm serves; and if they are not sufficient to handle the coordination, I don’t understand what the compartments are doing. Maybe this gets into the different threat models that Kris was talking about. I would love to see an answer to that, in writing from you or something.
+
+MM: So compartments by themselves provide a measure of mitigation of accidental interference risks. They do not provide, by themselves, any defense against malicious interference.
+
+KG: Okay. Interesting distinction.
Let’s get through the queue.
+
+ZTZ: Somehow this was slow. So, I hear a lot of explanations of why global coordination is not possible. I don’t really understand what that is a response to, because it’s not really debatable: at that scale, coordination, even if someone thinks it’s possible, is either not going to work or going to be very expensive, and only very few companies have the culture, built from the ground up, necessary to do it. It’s interesting to observe the situation from your perspective, where your observations are at what is at this point the ultimate scale of the mess that is happening there. But I still don’t understand why coordination would be necessary for applying protections. I think the assumption that coordination is necessary comes from the assumption that it would have to be a full freeze of all intrinsics everywhere. And since you’re already mentioning that libraries are using a method from a different realm, within an iframe, on the XHR of the current realm, and that has been a problem—people are already using iframes to contain some of their code. I know I’m probably in the younger half of the meeting here, but my first serious full-time job was also building scripts that people would put in their websites, and I would have to run things in there. I called them hostile environments. And it was very obvious even to very inexperienced me that I needed to reach for the iframe to be safe from all the mess that websites were shipping with, back when Prototype.js was still pretty common.
+
+KG: We’ve only got two minutes. I would like to respond.
+
+ZTZ: Okay. So what I’m saying is: every piece of that whole website could go and, on its own, protect itself from its components, because, like any other software, it probably consists of only 3 to 5% of its own code. That is something we can enable and benefit from, and that’s a path forward that will end up with adoption on those sites.
+
+KG: I think you’re responding to something I didn’t say. I did not say we should pursue compartments or ShadowRealms or any of that. That is not a claim that I made. My claim—my personal position—is that we should not pursue anything which relies on global coordination. If you have something in mind which doesn’t, I’m not speaking to that. I’m only saying we shouldn’t pursue things that require global coordination. One of the things discussed several times is a lockdown mode, sometimes discussed in a context that suggests the lockdown mode would be feasibly usable at the top level of the website. That would require global coordination and is therefore ruled out by this. I’m not making any claims that we shouldn’t allow people to do hardening which they can do internal to a script. That is not a thing that I said.
+
+ZTZ: All right. So what I hear is that, if I recall correctly, you proposed a very limited version of hardening that would only cover the most significant areas where prototype poisoning would occur.
+
+KG: I proposed that the language lock things down, not that we provide scripts a way of locking things down.
+
+ZTZ: Would providing a way to lock things down be enough to satisfy that?
+
+KG: No. Again—
+
+ZTZ: I think we are limited by “don’t break the web”. But—
+
+KG: I think we can freeze—the opt-in is not going to be used. People are not going to use that.
+
+ZTZ: I would agree with that. Freezing the iterator is a good thing in general. I believe where this brings us is: depending on the level of coordination, or the scope that you can carve out of the whole problem of impossible coordination, there are different levels of lockdown behavior you can have. So having one that’s on by default, and then an opt-in to more, sounds very appealing to me, as opposed to either one on its own.
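As a concrete illustration of the kind of targeted, coordination-free hardening under discussion (a sketch, not part of any proposal): a single script can already freeze the array iterator prototype on its own, which blocks later prototype poisoning of `next` for every `for`-`of` over arrays, without coordinating with any other script on the page.

```javascript
// %ArrayIteratorPrototype% has no global name; reach it via an instance.
const ArrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());
Object.freeze(ArrayIteratorPrototype);

// A later poisoning attempt now fails: silently in sloppy mode,
// with a TypeError in strict mode.
try {
  ArrayIteratorPrototype.next = () => ({ done: true, value: undefined });
} catch (_) {
  // strict-mode TypeError; either way the original `next` survives
}

// Iteration is unaffected for well-behaved code.
const copied = [...[1, 2, 3]];
```

The trade-off being debated above is exactly this: freezing by default in the language needs no opt-in but must be web compatible, while an explicit opt-in like the snippet above only protects pages whose authors actually run it.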
+
+KG: I think that it makes sense for the language to just enforce on everyone that certain things are frozen, like array iterators, if it is web compatible to do so. I don’t think it makes sense to expose mechanisms that allow people to do more than that, because those are just not going to be used.
+
+CDA: I will jump in here. We are past time. Maybe we can get to Rob’s comment quickly.
+
+RPR: All right. I will try to be quick. I kind of wanted to advocate for JavaScript beyond just these sites in particular—even beyond browsers and so on. I realize that the restriction you’re putting in here is for these sites. But outside of that domain, we’ve seen plenty of opportunity to benefit from things that do require global coordination, such as the capability to freeze and lock down things. And in the future, if benefits were attached to these things—case by case, whether security advantages or performance advantages—these are things we would want to take advantage of. We’re dealing with an internal system at Bloomberg with maybe on the order of one thousand developers, and we have found that coordinating that is feasible. So if we got a 2x performance win by freezing the global object, that would be great. If these things were also available such that we still got the benefits when running things in an isolated container, with the coordination just within that, that’s great. But even if it could only be done at the global level, it would still be very appealing.
+
+KG: Yeah, there are definitely cases where this kind of thing is valuable. I just don’t think that it ends up being valuable on the web in general, and for things which are not usable by JavaScript’s major consumer, I don’t think that it is necessarily sensible for the committee to be specifying them.
+
+CDA: We are now a few minutes past time. Can you be very brief, please.
+
+KM: Just on the 2x note: I think you would probably never see anything close to that.
You wouldn’t see much of a speedup because, at least in JSC, we have a mechanism called a watchpoint: we attach it, and if you try to change any of those things, we jettison all of the code that assumed those were the right values. We don’t do runtime checks on the values today. So my guess is it would basically just be a convenience, and it might save some memory, since we wouldn’t have to allocate the watchpoint objects for every property of every prototype that you touch. The main benefit would be memory wins, not necessarily throughput performance wins.
+
+KG: Fair. Thanks for hearing me out. I wasn’t seeking consensus or anything—just presenting some background and my own opinions. That’s all I’ve got. Thank you.
+
+CDA: Thank you Kevin. Thanks to all of the presenters today and to our notetakers especially. Really appreciate everyone’s help and we’ll see folks tomorrow. Have a great day.
diff --git a/meetings/2025-09/september-24.md b/meetings/2025-09/september-24.md
new file mode 100644
index 0000000..7ba3692
--- /dev/null
+++ b/meetings/2025-09/september-24.md
@@ -0,0 +1,713 @@
+# 110th TC39 Meeting
+
+Day Three—24 September 2025
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
+|-------------------|--------------|----------------|
+| Waldemar Horwat | WH | Invited Expert |
+| Michael Saboff | MLS | Invited Expert |
+| Nicolò Ribaudo | NRO | Igalia |
+| Ben Allen | BAN | Igalia |
+| Jesse Alama | JMN | Igalia |
+| Eemeli Aro | EAO | Mozilla |
+| Chris de Almeida | CDA | IBM |
+| Samina Husain | SHN | Ecma |
+| Dmitry Makhnev | DJM | JetBrains |
+| Istvan Sebestyen | IS | Ecma |
+| Erik Marks | REK | Consensys |
+| Philip Chimento | PFC | Igalia |
+| Devin Rousso | DRO | Invited Expert |
+| Dan Minor | DLM | Mozilla |
+| Guy Bedford | GB | Cloudflare |
+| Jordan Harband | JHD | HeroDevs |
+| Justin Ridgewell | JRL | Google |
+| Kevin Gibbons | KG | F5 |
+| Kris Kowal | KKL | Agoric |
+| 
Keith Miller | KM | Apple |
+| Mathieu Hofman | MAH | Agoric |
+| Mark S. Miller | MM | Agoric |
+| Olivier Flückiger | OFR | Google |
+| Ryan Cavanaugh | RCH | Microsoft |
+| Rob Palmer | RPR | Bloomberg |
+| Shane Carr | SFC | Google |
+| Ujjwal Sharma | USA | Igalia |
+
+## Opening & Welcome
+
+Presenter: Ujjwal Sharma (USA)
+
+## Continuation: Amount for Stage 2
+
+Presenter: Ben Allen (BAN)
+
+* [proposal](https://github.com/tc39/proposal-amount)
+* [slides](https://docs.google.com/presentation/d/1cDQBcMzSAht9jZiuaMKAEIDlPmlSmjeBJ-sw23AySWI/edit?slide=id.g37deebb6a10_2_54#slide=id.g37deebb6a10_2_54)
+
+BAN: So let’s see. Over the past two very long days we have been addressing a number of concerns that came up from folks with regard to amount. I believe SFC has some slides from the V8 team. We were talking some on Matrix about whether you want to show them first, or whether I should go ahead with my slides?
+
+SFC: Yeah. Sure. I can go ahead with these slides. I think I am sharing my slides. There they are. So, thanks for the update that you gave on Monday. I am here to share some feedback that I got from other Googlers on the V8 team on the amount proposal—some concerns they had and things that the champions need to consider here. This slide deck corresponds to the items I put on the queue on Monday that we didn’t have time to discuss, so I went ahead and organized them into a slide show for you. I want to say up front: this deck states concerns, not solutions. I trust that Ben will follow with proposed solutions to these concerns, but I want to make sure the concerns are well understood. So the first concern is that the amount might introduce a so-called back-door decimal. We are concerned about introducing a new numeric type into the language. This is of course because amount represents not only Numbers and BigInts but also strings, based on the ROI that we saw with BigInt.
There are concerns that adding a decimal might not be the right direction for JavaScript. We acknowledge this is not the decimal proposal, but because of the way it’s currently shaped, including with numeric methods, we worry this could in effect be a back-door decimal. The second concern is that we feel we should emphasize the formatting-forward use case, because we are concerned that there may be an expectation that the proposed amount could be used for things other than what it is intended for. Given that the main use case seems to be interoperability with Intl, we feel that the proposal’s API should lean into that perspective. The third concern is that amount is not a general-purpose solution. There is not really such a thing as a general-purpose solution for representing measurements or currency, and amount uses a very specific representation of them: it uses significant digits for precision, but there are other ways, like margin of error or discrete increments. The significant-digits model is based on what Intl currently uses, and that’s another reason why that use case should be emphasized in the API design. The fourth concern was brought up in the July meeting: we would like to avoid reading the internal slot everywhere in the spec. To do that, in places where we accept an amount, we could also accept the amount protocol, in order to encapsulate the internal value of the type. And then the fifth is the HTML amount tag, an early-stage HTML proposal: if JavaScript moves forward with this amount type, we should definitely do so in such a way that it is designed to integrate with the HTML tag. It would be shortsighted to have an amount type that, when the amount tag gets added to HTML, doesn’t interoperate with it. I went through those fast. I am happy to go back and focus on any of those, if people have any questions about those concerns. The last slide is about Stage 2.
The Google position is that the Intl use case is motivating and we support Stage 2, provided the concerns listed in the slides are addressed—there are five that we feel need to be addressed. So, I went through the slides much faster than I anticipated, but I will go ahead and turn it back over to Ben, unless there’s anyone in the queue with clarifying questions about my section of the slides here.
+
+DLM: There is a question from Mark.
+
+SFC: Great. Cool.
+
+MM: This is the slide the question is on. So what is the HTML amount proposal? I have never heard of it.
+
+SFC: I think maybe MLA can share more about that. It’s an early-stage proposal to annotate amounts—which are numbers with units. It’s similar to how the time tag annotates times and date-times, that’s my understanding. It gives these types semantic meaning in HTML.
+
+MM: Was this invented independently, this amount proposal?
+
+SFC: This is a question that some of us on the Google team also had. We are aware of the proposal, and we were asking the champions to make sure we coordinate. Maybe EAO can share more color.
+
+EAO: Yes. The HTML amount proposal is separate and does not share an origin with the amount JavaScript proposal. It’s being put forward not by me, and we’ve been in relatively close coordination with regard to this proposal.
+
+MM: Okay. So currently, are there any controversies between the proposals with regard to semantic differences?
+
+EAO: Not really.
+
+MM: Great. Thank you.
+
+DLM: Next up we have CDA with a request to share the slides link.
+
+BAN: I believe Jesse has put them in Matrix, right?
+
+DLM: Great. And then we have Ryan.
+
+RCH: Yeah. The question is self-contained. Would moving this from the top-level namespace to Intl.Amount address those concerns?
+
+SFC: That’s getting into talking about solutions, and I would rather let Ben discuss solutions.
As for concerns two and three, that is one solution that I believe would address them, but it’s not the only solution that would address 2 and 3, so I would rather let Ben discuss that part. Go ahead—I think the queue is empty, so I am going to press the stop-presenting button and let Ben take over from here.
+
+BAN: Okay. We are visible. Okay. Throughout most of the presentation, we will be using the name “amount placeholder” rather than “amount”, because of the naming concerns.
+
+BAN: All right. One thing that’s very important to make clear is that this is not meant to be a back door to doing math on arbitrary mathematical values. There is no arithmetic present in the proposal. It’s meant to be a container for mathematical values, within limits, with limited functionality: storing precision in some form, and a unit. Most of the rest of the session is about discussing those limits. And something to add: even the future proposal with unit conversions would not offer a path to doing arbitrary arithmetic on unbounded values. So one of the things that we have been discussing a lot is a name for it that indicates its use—and indicates it is not something to be used for doing arbitrary math. One thing that was discussed was moving it under Intl; I will let other people talk about that. One name we have considered is LabelledNumber, to indicate that this is in fact a labelled number: a value with a unit and a precision. Another is PlainAmount, by analogy with PlainDateTime in Temporal; essentially, it’s something to encourage users to go look up what it means, rather than assuming this amount is something they can use to do arbitrary operations with arbitrary mathematical values. Another name we have considered is FixedAmount, indicating that this is fixed.
This is an immutable thing used for formatting or exchange. AnnotatedNumeric is another name we have considered; it’s similar in spirit to LabelledNumber. And then Dimension is something else that was brought up in discussion. We are amenable to essentially all of these names. Something I would like to note—EAO might be able to talk about this more—is that, as I understand it, the author of the explainer for HTML amount said there are no conflicts and that he doesn’t see anything wrong with using the word “amount”. Obviously, we have to address these concerns. Throughout the presentation, we will say “amount placeholder”, as we said—
+
+EAO: One of the reasons we ended up with Amount as the current name of the proposal is in order to align with the HTML proposal.
+
+BAN: Correct. All right. We have been going over precision a lot in the last couple of days. I think it’s most natural to go into detail on that in a few slides, when I will be talking about some recent changes related to significant digits and throwing it over to NRO. But our design is at this point consistent with a future extension where we support other approaches to representing precision. So, in the future, the placeholder amount—whatever we are calling it—could get a `.resolution` property. For example, 0.500 kilograms measured with a scale that can detect grams would have resolution 0.001 and `.fractionDigits` 3. Likewise, the same measurement on a scale that detects 5-gram increments could have resolution 0.005. This is useful for thinking of amount as something that represents real-world quantities. One of the motivating things for this was a discussion on one of the issues: say we are using a barometer with a resolution of 5 millibars or whatever—this could capture that. Other options considered: percentage-based tolerance, interval radius, similar properties.
And for rendering purposes, all of them would map to a fraction-digits-based approach.
+
+BAN: With regard to the protocol: we are happy with a string protocol, with the syntax to be determined. This is sort of a placeholder for the syntax: in this case, it could be a string with the unit in brackets after the quantity. Obviously, notation that is future-proofed and extensible, with an eye towards potentially adding precision. Objects with `.unit` and `.fraction` could also work. So either of those proposed solutions is fine with us. As we have mentioned, there is the amount HTML element. There’s an in-progress explainer for it; that’s all there is now. So yeah, this would necessarily integrate with that, and the author of the explainer says there are no conflicts.
+
+BAN: Then, yeah, maybe full integration with HTML. All right. Hopefully this provides proposed solutions to all of the V8 concerns. We can discuss more, obviously, and will be discussing more. The other major set of concerns that we have been working on for the last couple of days is that there are some puzzles involving significant digits. One option is to remove support for them; JMN put up a PR for that. And NRO has put up an extensive PR proposing changes to how we compute significant digits. The open question right now is what to do about significant digits.
+
+NRO: Given that this slide is a continuation of a very long discussion from the other day, just to make sure everybody is heard, let’s go through the agenda and then come back to this.
+
+CDA: Thank you.
+
+NRO: Sorry. Not the agenda—through the queue.
+
+BAN: Through the queue. Yeah.
+
+DLM: Going to the queue. First up, EAO.
+
+EAO: So this is a clarifying question for SFC: you mentioned that there’s a concern about the name Amount implying arithmetic might happen here.
Is this concern specifically about arithmetic possibly happening on Amount in the JavaScript spec, or is this a concern about a user library implementing any arithmetic or other operations on Amount?
+
SFC: We are pessimistic that this satisfies the expectations of users in the long run, as demand for arithmetic on Amount values seems likely, especially given its envisioned use cases.
+
OFR: Maybe I can add one thing. For example, one thing that we noticed is that the spec itself had only, like, a toString method on it, but the slides already had a toNumber method on it. So this was kind of showing a direction, and if you think about the toNumber method being used in practice, I can quickly see libraries doing `amount.toNumber() + other.toNumber()` and creating a new Amount from that—and clearly the next proposal would be: well, it should just support `+`. That was part of where this came from.
+
BAN: Yeah. And that’s my bad; I went through the slides quickly. We have in the past couple of days updated the spec to remove those conversion methods—we removed toNumber and toBigInt.
+
JHD: Yeah. I guess people intuit the wrong thing from all names, because names are hard. `Proxy` doesn’t have anything to do with network forwarding—that was my cheeky thing in the queue. A word can mean a number of things because it’s a word. So what if a few people think they can do arbitrary math, and then three seconds later play around with this and realize they can’t? What is the danger there? And if the thing they want is arbitrary math, well, then a proposal can seek to provide that, but I guess I am curious what the problem is. Maybe I missed the initial slide about it. None of these alternative names seem better to me. Dimension would actually imply nothing about significant digits to me—although it could certainly work—and all the rest are like JavaBean-factory nonsense. Amount is the right name and I am confused.
I would love to understand what the risk is of some number of people intuiting the wrong thing and discovering they are wrong.
+
EAO: So just reiterating, OFR, what I understood from what you said earlier: the concern you have is that this opens up a little bit too much of a door for a later proposal to add arithmetic on Amount to the spec. The concern is not, generally speaking, about some library completely outside the spec implementing the sorts of operations that apply to Amount instances. Also, you mentioned there being a difference in the previous presentations between what is actually in the written spec and what is being proposed. I would urge you to consider that the direction of development of this proposal has, I think, been the other way than that implies. When this got to Stage 1, it included a lot of things—currency—sorry, unit conversion and other features, possibly including arithmetic—that have been stripped out as we have gone on. The spec as written has not included all the things we have been considering and proposing for Amount, but it’s definitely not in a direction where we want to expand what it does. More like contracting it to be the minimal thing that still makes sense, while not being any smaller or bigger.
+
KM: Sorry, maybe more of a question, maybe a reply. Since there’s going to be a string output, do we not expect that if you want to add these amounts, you can do toString, extract the numeric value, do your addition, and reinsert the result back into an Amount? People are resourceful at finding ways to glue the legos together in ways you don’t necessarily intend. If that’s a real concern, I am not convinced even removing toNumber is going to solve your problem.
+
NRO: Can I answer this? Can this be my next topic in the queue? Yes, it’s impossible to prevent people from doing math here.
As long as there’s a way to get the digits out of this, people can construct a number. For example, the string representation contains the unit—which means people can check for the unit and strip it out before converting the string to a number. We could also consider what `valueOf` does—`valueOf` could convert to Number if there is no unit—but we could also design things so that people don’t start doing that, giving them fewer ways to get the number out of it. Even if you could only pass it to Intl, with no true toString, you could still get the digits out, stripping the commas or the dots or whatever, to convert it. It’s not actually possible to fully prevent that—with any possible type, if you put digits in there—
+
(?): So like a regex or whatever—
+
NRO: Right. The more difficult you make it, the more likely people will look for a different solution. Right?
+
KM: Yeah, I mean, I agree. I don’t think it necessarily precludes someone from coming along now with a proposal to add amounts directly like that. If you're worried about stopping someone from ever coming with that proposal, I don’t think there’s a realistic way to stop that, I guess is what I am saying, without making the proposal completely useless. Unless it could only be stringified to something where you can never see the string again—and I don’t even know if you can do that on the web, since you can get it straight back from the text in the HTML.
+
OFR: Yeah. I wrongly assumed that the presentation was ahead of the spec, but if it’s the other way around, I am actually happy to hear that. And maybe also a reply to KM: if we think about Amount as sort of a partial evaluation of a NumberFormat, then that is the use case that we agree with. The use case that we are worried about is going in the direction of an arbitrary-precision number being useful for normal JavaScript code.
In that sense, not providing an API that would encourage such use, but providing an API that allows rendering to a string, is going in the right direction, I think. It is also tied to this idea of having a protocol, where basically you say: this is going to be the way you talk to this Amount object—you ask it for a string representation and not for a numeric representation. And yes, of course people can do the other thing. But I still think there would be less of an expectation that it can be used for that use case.
+
JHD: Yeah. So I guess I still don’t understand why it matters that people have that expectation. I understand why implementations don’t want to commit to arbitrary math and unbounded number values. Fine. Cool. People can come ask for it, and implementations can still say no—like engines have done for, you know, Records and Tuples as primitives. The existence of the expectation doesn’t bind anyone’s hands. If enough people really want unbounded math, then you are not going to shut down something people really want by making it annoying and painful and easy to screw up. And separately, if you are looking at it as just a partially formatted thing, that’s not a valuable container to me. I don’t see any motivation to do anything in the language if that’s all it’s providing. If it’s storing a precision and a unit and a number, I should be able to extract those three things without having to do string parsing—which is a far worse thing to encourage people to do than doing some math and occasionally getting the wrong result because they are not guaranteed arbitrary math. We already have Number, and people are largely “fine” with the fact that 0.1 + 0.2 doesn’t equal 0.3. That’s not a bar to aim for.
But I don’t think people are going to be pulling out pitchforks if they think they can do a thing that nobody told them they could do and the engine doesn’t let them do it. I guess I don’t understand. And removing toNumber and toBigInt—removing the convenience methods—will not reduce anyone's desire for the use case, if they have it at all. It will just make it more annoying and easier to screw up. We shouldn’t go to extreme lengths. We did this with BigInt—we talked about the name for BigInt and how we didn’t want people to believe that it was just a generic integer. And lo and behold, people didn't adopt it, and engines are complaining that people didn’t adopt it. We need to work on things that are broadly useful and aren’t pigeonholed into unique use cases, thus ensuring they won’t be adopted because no one will have the same idea.
+
DLM: Clarifying question from OFR.
+
OFR: So adding a numeric type to the language is a very, very expensive decision. Either it's ergonomic to use, which will make it very expensive, or it will be unusable. So I feel like if we were adding a numeric type to JavaScript, then we would have completely different discussions—it opens up a whole area of discussion where we would need to think about use cases for numerical values. Here we are talking about internationalization, and it’s important that this not evolve into a numerical value type that we end up having in the language without it being designed to fulfill that purpose.
+
JHD: I don’t see this proposal as just for internationalization at all. For some people that is the primary motivation, but I have had amounts all over my career, you know, that have nothing to do with internationalization and have to do with currency or displaying quantities and so on. And it wouldn’t be of any use for me under Intl.
+
DLM: Okay, next up is WH.
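+
For reference, the `0.1 + 0.2` behavior JHD mentions is easy to reproduce with plain Numbers (this is ordinary JS, nothing specific to the Amount proposal):
+
```js
// IEEE 754 double rounding: neither 0.1 nor 0.2 is exactly representable
// in binary, so their sum is not exactly 0.3.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```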
+
WH: I have the same position as Jordan. I am not in favour of annoying users because of some vague fears. If an object is storing a number, you should be able to get the number value. It’s as simple as that. I am also not in favour of pigeonholing this to just supporting internationalization use cases. This is useful outside of internationalization and shouldn’t be bound up with internationalization.
+
SFC: Yeah. I just wanted to note that the toNumber and toBigInt methods are not what I see as foundational here. We have been shaping this as a tuple that contains three things: the precision, the unit and the mathematical value. These extra methods are supplementary—I think they could be implemented in user-land. There’s a certain tricky balance we need to strike between not encouraging people to use it in this way, but also not making it too much of a barrier for people who insist on using it that way. My comment further down is one possible approach to that, but I will wait for my turn in the queue.
+
DLM: Thank you. Next up, EAO.
+
EAO: So first of all, a reply to OFR. We don’t want Amount to be a numeric type that we are adding to JS; that is not the intent here. Amount is instead supposed to wrap a numeric value and attach precision and unit information to it, so this can be used in the language. One of the places where we are absolutely looking at and needing to be able to make use of this is `Intl.NumberFormat`: you can wrap something like a currency value or a measurement value in an Amount and be able to format that. That’s where the story of this starts from.
And I would say there’s a big danger, if we place Amount under Intl, that JavaScript devs—who are known for their creativity in many ways—will recognize `Intl.Amount` as a decent wrapper around numeric values even if it’s made less usable, and use it as such for interchange and other needs that exist in the language. So specifically, I do think that interchange is an important use case here. In a way, I think we kind of need something very much like Amount in JavaScript even just to be able to parse arbitrary JSON without losing precision. Right now, JSON can contain non-integer numerical values that we have no way to represent directly in JavaScript, except with a custom user object. I think there are strong use cases for an Amount that will be found by JavaScript developers even if we only add it to Intl. The question is: are we still okay including interchange as a proposed use case for Amount? If not, then many of the concerns raised here of course follow, and it becomes questionable whether we ought to be doing this thing at all. Making it possible to recognize that a string represents a numerical value is, for the most part, the non-formatting and non-internationalization part of the Amount proposal—which I think is valuable. But if we think that is actively a thing we should not allow to happen in the language, we should recognize that, and I would be very interested to hear why we should not allow a digit string to be recognized as representing a valid numerical value.
+
NRO: This is a reply to a few topics ago, when somebody mentioned that forcing people to use string parsing to get the number out of Amount risks getting wrong results. Just be careful: even toNumber can lead to wrong results, because an Amount is not float-based. It can represent values that Number can't, so there may be rounding—the number you do the math on is not the number that is stored in the object.
+
SFC: Yeah.
I just wanted to suggest that one way to allow creative users—I like that phrase, creative users—to do what they want without really sanctioning it would be through toString options. I believe we already have those; if not, we could add them. For example, an option to suppress the unit in the output, and so forth. If someone wants to convert to a number, they call toString without the unit and pass the result to the Number constructor or parseFloat or whatever. That’s very explicit about what they are doing. It’s more explicit because of the point that NRO just raised: toNumber rounds the amount, so having it explicitly go through a string makes it more clear that that is the operation you are doing. And it allows creative users to get what they want without having to do string parsing. So yeah, that might be a way to achieve that balance I was talking about earlier.
+
NRO: Yeah. A concrete example of interchange: in Postgres, which is a commonly used DB in JS, you can have the NUMERIC type, which is a number with some number of digits and some number of fraction digits, and the adapter doesn’t know how to represent it. With Amount, it would just give an Amount to user code, and the user code decides what to do with the information stored in the database.
+
SFC: Yeah. Since we’re talking about the topic of interchange: I was casually talking with an early-career developer about numbers and JSON, and they were complaining to me about how a number like 1.0 gets stringified as 1 and doesn't round-trip. They found that very annoying, because their JSON was being changed between input and output. And I note that since Amount retains precision and that representation, it also sort of addresses that annoyance of interchange.
The fact that many languages have the concept of numbers with precision, and JavaScript doesn’t have that right now, does raise interchange issues. This is an anecdote I encountered, but it is a problem that developers do encounter and are annoyed by, and this is, you know, a way to address that type of issue.
+
KG: Sorry to raise this again—I raised this the previous time this was discussed, but the discussion got cut off because we had a long queue. I was hoping we could address it in the meantime or in this presentation, but I don’t think we did. I am concerned about the motivation for this proposal. It is not clear to me that it makes sense for this to be a new constructor in the language, as opposed to just a convention that Intl NumberFormat accepts objects with value, significantDigits, and unit properties. If the concern is merely having these things that can be passed around, just establishing a convention that `Intl.NumberFormat` or whatever takes objects of this particular shape—and perhaps the HTML element produces objects of that shape—is sufficient, and I don’t really see the point of having it as a first-class constructor in the language. The reasons mentioned last time, that I recall anyway, are: having it be immutable, which I don’t see as warranting inclusion in the language, because you can just freeze objects if you want; and having conventional names, but having Intl format accept objects with particular names is a perfectly valid way of establishing a convention. This is the single most important thing for any proposal going for Stage 2: it needs to justify its inclusion in the language. The details of other things are important to work out, but this is the front-and-centre thing for the proposal to get to Stage 2. And I have not yet understood why this is worth doing—like, concretely, why it is worth having a type in the language.
I understand why it’s worth being able to represent these values, and to be able to format them. But the way I would do that is by having a convention of an object with three specific key names. I want to understand why that approach is not sufficient, if we are going to be doing something else.
+
SFC: Yeah. A few more things to cover here on why this carries its weight as a language-level feature, in my view. The first is this idea of a NumberFormat protocol, which I agree should be included as part of this proposal: we have plenty of precedent for protocols having primordials associated with them, and I think this is no different. Adding a protocol effectively defines the shape of an object, and giving that object a name just corresponds to what the protocol is called. You can duck-type things like amounts, but introducing just a stand-alone protocol by itself, for a thing that is actually a datatype representing a real quantity, is not something we have typically done. Second is integration with MessageFormat, including third-party implementations. There are not a lot of better ways of having a lingua franca for agreeing on the shapes of these objects, and a primordial is, you know, hands down the right way to do that. For example, you might have a React template that takes an Amount and displays it to the user, where the Amount is the result of some other library. Having simply an `Intl.NumberFormat` protocol does not move the needle at all on establishing that lingua franca. Third, the motivation for 402 is strong. I would argue this is a type that should have been in Intl 1.0.
I think the design where the precision and unit are specified in the constructor of NumberFormat, and then can’t be changed until you create another NumberFormat, is sort of a flawed design, because it doesn’t establish a separation between developer options, display data, and user preferences, which are three different places to obtain data for the purpose of internationalization. So if this had been included as part of Intl 1.0, it would have had a name, and I think we are basically establishing that now, 13 years after Intl 1.0, is sort of how I see this. And the other big thing, which as language designers we should not discount, is discoverability. We can have an Amount protocol as much as we want, but unless you can actually point to a docs page at MDN that says what the Amount protocol is—the protocol that Amount implements—it's not discoverable. If users want to create their own Amount-like objects, that's totally fine. But having a type in the language that implements the protocol is just much better for discoverability and interoperability. It’s also a very small object. And as you point out, if we were to, for example, add a method on Number or a future Decimal type that returns one of these, then we should be returning a named object. It’s kind of silly to return an object bag that has no methods on it, especially since this type also defines constructors, and having a single type gives the ability to construct it and interchange it. I see WH is next on the queue to talk about interchange, so I will go ahead and let him cover that for me.
+
WH: To answer KG’s question, the main use case is interchange of numbers with trailing zeroes. There are a number of situations where you want to preserve the precision of a number by making the number of trailing zeroes significant. This serves as an interchange format for such things.
Now, I am a bit concerned about some of the shifts in direction which I have seen here—if we succeed in removing conversions to number types from this, then that will frustrate the interchange use case, and then I agree with KG that this becomes unmotivated. So this only works for its use case if we allow conversions to and from number types.
+
KG: I am mostly responding to SFC. I am concerned with the idea that any conventional shape of object deserves its own constructor in the language. There are key/value pairs all over the place in libraries, and they don’t need a type. There are lots of objects that are passed around that just have conventional shapes, and they don’t need a type. If the motivation for the proposal is to cause an MDN page to be created, we can do that without adding something to the language—it’s an open project and they are reasonable people; if we think that would be something important to do, they take PRs. I just don’t think that "this is a shape that shows up a lot" is on its own a reason to add something to the language.
+
DLM: Okay. I want to say, you asked for a 10-minute warning. This is the last 10 minutes in the timebox, but I think this is an important conversation to finish.
+
SFC: I just wanted to note I am not saying this applies to any shape, but to objects that are nouns—that represent actual things. Because nouns, actual things, can be passed across libraries and used for interchange or JSON formatting, and they have methods to construct them and to get the pieces out of them. So it’s not just any shape. For example, option bags are not actual nouns; they are basically a way to pass a variety of arguments. This is an actual noun, and I see that as a different class. It is a different class.
+
OFR: Yeah. This is a reply to the question of whether it should have toNumber, and it was already mentioned that toNumber will not be precise here—it will truncate or round.
The domain of Amount is way bigger than Number and BigInt; there are amounts in here that are not representable as numbers. If the main motivation is to interchange these amounts and do things with them—do something new—then I think it absolutely needs a plus operation on it; otherwise you can’t do anything with it. And if we say, no, this doesn’t need arithmetic, then I don’t see the value of having a toNumber method. Or, alternatively, say: okay, the value of this is just a number. If we say that’s enough for interchange, then make the value a Number and not an arbitrary numeric type.
+
SFC: If the value is just a Number, then we should basically have, you know, a `Number.Amount` that contains a Number and a `BigInt.Amount` that contains a BigInt, and then add more primordials like that with a similar shape, as opposed to a single primordial that covers all the use cases in one. I don’t know if that’s a direction anyone would prefer that we explore, but yeah, that’s sort of how I see it. If it contains a Number, it should say so.
+
BAN: WH, would you like us to spend a few minutes talking about the changes to significant digits we have been making?
+
WH: I couldn't understand you.
+
BAN: Given we are close to the end of the timebox, would you like us to spend the last 5 minutes talking about the changes to significant digits that we have been making based on your feedback from Tuesday?
+
WH: Yes.
+
BAN: Okay. NRO is probably the person who can address this most directly. Over to you.
+
NRO: So there was some discussion on Monday about how the resolution of significant digits wasn't working. There are two pull requests out: one suggesting removing the significantDigits accessor from Amount and only using fraction digits, and the other changing how significant digits work—computing the significant digits from the fraction digits. This is easy to do for numbers that are not zero: there's just a formula for that, which is what Intl does.
It’s more difficult for zero, because technically the number of significant digits of zero is just not a well-defined notion; we need to pick an answer there. The PR's answer is: if you have a zero with some fractional zeros, then the number of significant digits is 1 plus the number of fractional zeros, and that matches how Intl behaves—that gives consistency within the language. Applying this formula to zeros that have no fractional digits can yield zero or negative significant-digit counts, which makes even less sense, so the formula is capped to give at least one. That means something like 0e3—zero times a thousand—has one significant digit: a zero on the order of thousands. For small things like 0.000, we could say it also has only one significant digit, but for alignment with Intl we count from the units digit and go to the right, giving something like 4 significant digits.
+
NRO: Yeah. I guess we would like to go ahead with this, with the potential alternative being that 0 always has one significant digit, even though that doesn’t align with what Intl already does. Are there opinions here? Thank you, WH, for reviewing this pull request so quickly, by the way.
+
WH: Yes. Either approach sounds good to me. So either we can entirely remove the *significantDigits* getter, or, if we do provide the getter, then computing it in the way that you described is the way to do it, and it aligns with Intl.
+
DLM: MM has a topic. There’s time this afternoon, later today, if you want to request another continuation. Go to the queue for MM.
+
MM: I want to make a quick remark. We do have duck-typing of other things in the language. In particular, iterable, iterator, and iterator result are duck-typed. Iterator also has an iterator prototype that has helper methods, but iterators are still recognized based on their duck typing.
And iterable and iterator result are only duck-typed. Those provide some interesting precedents for recognizing something based just on its duck type, and maybe also providing helpers and even a prototype, which are optional. That’s it.
+
NRO: There is an issue for that, if you want to comment there, if you have a preference.
+
RCH: All right. Let’s see. `Object.entries` and `Object.fromEntries` are a case where we didn’t create a KeyValuePair primitive; it's just a two-element array. It’s highly precedented.
+
### Speaker's Summary of Key Points
+
* Discussed and proposed solutions for V8 concerns and WH concerns re: significant digits
+
### Conclusion
+
* Continuation requested
+
## Increase limits on Intl MV
+
Presenter: Shane F Carr (SFC)
+
* [PR](https://github.com/tc39/ecma402/pull/1022)
* [slides](https://docs.google.com/presentation/d/1V1BC6PtJ7-q6zVvsgKt9dcaLmIeeLnE3s8DwP4KOl7Q/edit?slide=id.p#slide=id.p)
+
SFC: All right, so I want to present this pull request that is linked from the agenda. I want to first give a little bit of background on what this pull request does and explain its current shape and alternatives to it. Intl mathematical value—what is it? It’s used in Intl NumberFormat as a way to represent the different types that Intl NumberFormat is capable of formatting. It’s a mathematical value plus four special values: negative zero, NaN, positive infinity, and negative infinity. You can create it from a Number, a BigInt, or a string; when created from a string, it uses the string numeric literal grammar. It has been in the spec since Intl NumberFormat v3, which has been around for a couple of years now. That is what Intl mathematical value is. You may have also seen the concept referenced in the Amount proposal, but the pull request I am discussing today applies to Intl mathematical value as it currently stands in the ECMA402 specification, and although this discussion could have implications for Amount, that’s not what I’m currently presenting.
I’m currently presenting this in the context of ECMA402. Let’s talk about Intl mathematical values that are created from strings. Prior to Intl NumberFormat v3, strings were parsed as Numbers, so there was effectively a limit on the significant digits and the exponent, because we didn’t support the ability to format string numbers directly. Currently, Intl mathematical values created from strings only represent values that are also representable as Numbers without rounding them.
+
SFC: So this is the current rule. Basically what we do is take the string and convert it to a Number. If the Number is infinity or zero, then we retain it as infinity or zero. If it doesn’t parse to infinity or zero, we retain the string, including the additional digits that the string has beyond the capacity of Number. So this in effect limits the exponent, and then the formatting of Intl NumberFormat limits the number of significant digits. I just want to be clear that this limit applies to Intl mathematical values parsed from strings; it doesn’t apply to BigInt. With BigInt we support formatting of BigInts that are much larger than the capacity of a Number. So, for some examples—these are all strings, I just want to emphasize that again. If you have the string "1e308", that parses to the mathematical value 1e308. This is close to `Number.MAX_VALUE`—`Number.MAX_VALUE` is not exactly 1e308, it’s 1.7-something e308. As a string value for Intl mathematical value, we retain the mathematical value as given in the string. However, 2e308 exceeds the capacity of the Intl mathematical value—sorry, exceeds the limit of what a Number can contain—and therefore rounds to positive infinity. The same thing happens at the bottom if you go to the lower range of exponents.
And for the longest value that we can represent: you can have as many significant digits as you want, as long as the numeric value is less than 2e308, except that of course the least significant digits cannot affect the formatted mathematical value, which in effect limits the number of significant digits. The problems with the status quo that WH mentioned are that the limits are not enough to handle a future Decimal128 proposal, and there’s a concern that increasing the limits later could be considered a breaking change. The other problem, which I added, is that the string limits are easy to check if you have a number-parsing library, but not trivial to check if you’re just looking at the number as a string, as a list of digits—you have to first see if you can project it into Number space. So checking the limit is not as trivial as it could be. The currently proposed solution in the pull request is to change the limit on the exponents to be between negative 10,000 and positive 10,000. Why did I pick 10,000? This is an arbitrary choice; I could have picked some other number. I picked 10,000 because it looks very arbitrary. For the significant digits—this is a little bit tricky to understand, but it reflects how it is implemented in ICU and ICU4X—the limit on significant digits maps to the powers of 10 that are also within that range. What this means is that there’s a discrete, fixed set of Intl mathematical values representable from strings: basically, in increments of 10^-10,000, you can add 10^-10,000 and get the next discrete Intl MV, add 10^-10,000 again and get the next one, and so on up to the largest mathematical value, which is 20,000 nines with the decimal separator in the middle of the 20,000 nines. That’s the largest Intl mathematical value. So we have them in this discrete space.
This is easy to enforce—just check the number of digits when parsing the string into the mathematical value—and it covers the domain of decimal.

SFC: Alternatives: these are arbitrary limits we came up with in the pull request, and I’m still largely seeking feedback on them. One alternative is to align with the Decimal128 range. This doesn’t mean that we’re committing to adding Decimal128 to the language—I know that’s not something that we have consensus on—but we can still use the Decimal128 range as a way to enforce the limits on the Intl MV. That’s one option. The second option is to use the exponent limit of 10,000 but enforce a smaller significant digit limit that reduces the implementation requirements. Note that the current effective limit is 408, because you can have 308 digits to the left of the decimal separator and one hundred after the decimal separator. The effective limit is 408, and we would encode it in the spec as 500. Another alternative is no spec-defined limit. I want to point out that we discussed this previously during the Intl NumberFormat v3 proposal and decided that we want a limit here. One alternative would be not to have a spec-defined limit and to allow arbitrary-length strings to be processed; that is not an alternative that I prefer. So I’m currently leaning towards an alternative that is not exactly what’s in the pull request. That’s why on the next slide—actually, not the next slide; let’s talk about web compatibility first. Intl NumberFormat v3 changed this bound, and changing it again is likely web-compatible. I want to point out that Intl NumberFormat v3 increased the number of significant digits but didn’t increase the exponent, so it is not exactly comparing apples to apples. I do still feel—we still feel in ECMA402—that this is likely web-compatible based on our knowledge of the usage of the API; however, I agree with WH that it’s better to do it sooner rather than later in order to be future-proof.
I also wanted to discuss, on the agenda, this idea of spec-defined limits in general. But I will go ahead and spin that off into a separate discussion we can do in Tokyo. I will have this presentation be narrowly focused on the problem that’s before us now, which is this Intl mathematical value limits question.

SFC: I’m seeking a two-part consensus: one part is that the limits of the Intl MV should be increased in order to future-proof for Decimal128, and the other is which of the limits we should apply. I also received feedback on the pull request from WH, which came in a couple of days ago, about some of the language we could use to make sure that the pull request actually does what it says it does. So I appreciate those reviews, and I would like to continue getting feedback in that area. Before we actually merge this, we’ll make sure that the spec is bulletproof and WH will sign off on that. But I want to agree on what it does. So I’m seeking these two separate points here. That’s the end of my presentation. To the queue.

WH: Clarifying question: the examples you have in the linked description of this are wrong; is that correct? I posted a question about it a few days ago.

SFC: Yeah, I saw your question and, yeah, I’ll go back to my slides. These examples are—

WH: I don’t mean these examples.

SFC: And these examples.

WH: I meant on the page that was linked from the agenda.

SFC: In the pull request description, it’s possible that there is a typo in the description. I haven’t had a chance to verify that or not. But what I’m showing on the slide right here is the intent, and I believe it is accurate; I’ve double-checked what’s on the slides right now.

WH: Okay. So that’s good. Now, this slide conflicts with the slide on which you have option 2. So I’m not sure which one takes precedence.

SFC: Yeah, that’s why I’m asking for the two-part consensus. The pull request implements this.
It doesn’t implement alternative two. That’s why I’m looking for consensus on, one, that we want to land a pull request that increases the limits in order to future-proof for Decimal128, and secondarily, which is the preferred approach for that. Procedure-wise, I feel like we should come back to this committee with the pull request that implements what we agree on here before we actually approve it. That’s fine. Or, procedure-wise, we can say that ECMA402 can merge the pull request if the editors agree that it implements the consensus that we agree on here in principle.

WH: I don’t understand the motivation for this, because this is a mathematical value. Mathematical values can include uncountably large sets of values. A mathematical value can be any real number. So there’s an uncountable infinity of them. So I don’t understand the motivation for limiting mathematical values to some number of significant digits. If you do, it’s no longer a mathematical value. Now, this is not to say that I’m advocating for having arbitrarily long digit strings in the implementation, but the thing that’s actually doing display rounding should be doing that, rather than us limiting the set of mathematical values.

SFC: Maybe we can go to the queue.

DLM: MM has a clarifying question.

MM: Yes. So I’ve been referring to mathematical values—and I think the rest of us have as well—as denoting real numbers. They do denote real values; that’s why they’re mathematical values. But the only numbers that are denoted by the mathematical values that any of us have considered for any of these proposals are rationals. I just want to make sure that we’re not implying that any real number can be represented. And that’s it.

EAO: Clarifying question also here. In the presentation, you’re talking of limits of up to 10,000 significant digits, but the current PR for the spec is actually a limit on fraction digits. Can you clarify which is the intended limit?

SFC: Yeah, the current specification says that the largest magnitude that’s allowed is positive 10,000—or positive 9,999—and the limit on the number of significant digits is effectively enforced by 10,000 fraction digits, which means there’s in effect a limit of 20,000 significant digits. So, yes, you’re right that the spec basically enforces it by capping the number of fraction digits. That is a correct statement.

EAO: So the slides should have all the places where you refer to significant digits replaced with fraction digits, right?

SFC: No, I think an alternative way of stating this would be to say that we truncate the mathematical value to having up to 10,000 fraction digits. That’s probably actually a clearer way of stating this. I’m stating it in terms of significant digits, but you’re correct that stating it in terms of fraction digits would maybe make the slide more clear.

DLM: You had another topic, EAO, unless you covered it.

EAO: Yes, one alternative came to me while you were presenting: given that Intl mathematical values can also be constructed from BigInts, which do not have a spec-mandated upper limit, and which current implementations are fine with formatting at many values that are much higher than, I think, even proposed here—would it therefore possibly make sense to apply a limit only to the number of fraction digits that are accepted for formatting? Then we would have effectively the same limit apply to all Intl mathematical values and not have different limits depending on what type the value was constructed from.
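The asymmetry EAO points out can be sketched like this (illustrative; under the current behavior, the string form is subject to the string limits while the BigInt is not):

```javascript
const nf = new Intl.NumberFormat("en-US", { useGrouping: false });
const big = 10n ** 400n; // far beyond Number.MAX_VALUE

// The BigInt has no spec-mandated limit and is formatted in full, while the
// same digits passed as a string exceed the current string limits and round
// to infinity, so the two results differ.
const fromBigInt = nf.format(big);
const fromString = nf.format(String(big));
```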
I will just note that we’re using the concept of the mathematical value to represent the numeric value of the string, but it is in effect a string number that is being formatted through Intl NumberFormat. I don’t see why it would be contrary to the design to use mathematical values and then just say, well, it’s a mathematical value, but we’re only using mathematical values within this certain domain. That’s basically what we’re doing. I don’t understand why that’s a problem.

WH: It’s a nomenclature problem. You can do that, but it’d no longer be a mathematical value. Having said that, I’m fine with the pull request as it is once you fix the crash-on-zeroes bug. I would not be in favor of limiting the significant digits to 500, but I’m okay with limiting decimal places to be between exponents of +10,000 and -10,000, which is what you had on the previous slide. This one looks good.

EAO: Just noting that clearly a part of the conflict here is the naming of this value that only exists during processing. Maybe, as was discussed yesterday or Monday, we should consider renaming this, maybe in this context to something like `Intl.FormattableNumericalValue` or something else that more clearly identifies that we are not intending for this to be a general-purpose representation of all possible numbers in the usage that we have for Intl.NumberFormat.

DLM: EAO is in the queue with support for consensus on item number 1.

SFC: I will go ahead—no one else is in the queue—and go back here. I would like consensus on point 1.

WH: I support that. I initiated this.

DLM: Okay. So I don’t know if you want to speak, Mark. There’s a response from Mark.

MM: No need to speak.

DLM: I’ll see if there are any dissenting voices, and then we can move on to number 2.

SFC: Sure.

DLM: Any dissenting voices to Part 1?

DLM: For number 2, we should be clear on exactly what you’re proposing.

SFC: So, I mean, the space of what the actual limits could be is infinite. I pulled a few things out of the infinite space to put on the slide. I was wondering, WH, if you can elaborate on what your concern is with the significant digit limit of 500.

WH: This goes outside of the scope of this proposal. We’re discussing just what mathematical value means here. It seems to me like you’re conflating a mathematical value with how it’s used downstream. So we’d need to understand how it’s used—you haven’t provided enough information as of now to usefully discuss that topic.

MM: So I didn’t catch what the motivation was for the smaller range of the exponent. The previous slide that WH likes also makes more sense to me—it certainly covers more, and if there’s no disadvantage to it, why even consider a smaller range? I’m sorry—a smaller significant digit limit. I got it backwards. But in either case, I don’t understand what the motivation is for limiting.

SFC: Yeah, I mean, the motivation is simply that implementations can do something more efficient. For example, with the limit of 500, this can be represented in 250 bytes if you use a binary-coded decimal representation, which is small enough that implementations could choose to not use heap memory to represent this—they could represent it all in stack memory. So that would be one reason why a significant digit limit of 500 is attractive; if we go with 20,000, implementations would have to use heap memory to represent one of these.

MM: So, we’ve got many implementers in the room: do any of the implementers care about the implementability of the larger limits? Or, Shane, are you basing this on feedback from actual implementations?

SFC: Well, I’m also an implementer here and I’ve implemented this. And so I’m just saying that it opens the door to reducing the requirement to use heap memory here.
And I can say that because the rule of thumb that most of the Intl libraries have used is that we don’t want to exceed a certain number of bytes of space on the stack, and that number usually hovers around a kilobyte: something less than a kilobyte of space on the stack is something that we’re okay with. More than that is something that we want to have heap-allocated. There are various reasons to find that limit appropriate. There start to be performance issues if you have too big a stack size for an object, because you have to copy it from function scope to function scope rather than pass a pointer. If the object is small enough, the cost of copying it around is negligible.

MM: So let me just say I agree with that with regard to representing it on the stack. What I didn’t know, which you just clarified, is that implementations are actually using the stack and it’s not just hypothetical.

SFC: As of right now, there’s an overflow-to-heap mechanism. I don’t believe that the implementations are currently using the stack for all values—currently there’s an effective limit of 408, and I believe it’s somewhere around 64 digits before the implementation overflows to the heap. But that’s not necessarily a decision that is set in stone. Having the limit of 500 keeps the door open for implementations if they want to use this optimization, and having the 20K limit kind of closes the door.

MM: I’m not an implementer of this, obviously. The spilling to the heap, especially since it’s already implemented, seems like a small cost to maintain in order to avoid observable limits. You know, strings are obviously on the heap and have limits that are so large that people essentially don’t run into them without running out of heap. Since you’ve already got the spilling to the heap for larger sizes, that seems like the better solution.

SFC: Yeah, I mean, I think the spilling to the heap is already implemented. It’s more of a concern of future-proofing: making it implementable without having to overflow to the heap. For example, this overflow to the heap is the only thing in all of ICU4X formatting that uses the heap. Everything else is implemented without the standard library, without the allocator, as long as you have the interface for writing to the string, where the interface can accept bytes and write them to the string. Besides that, this is the only other place where we use heap memory, and, yeah, I don’t see much alternative other than using heap memory here. So it basically locks implementations into having the allocator be required. Maybe JS engines already require the allocator—in the narrow JS context, maybe this is not actually a problem, because it’s hard to implement JS without using an allocator. Is there anyone else on the queue?

WH: Okay. Just to make it clear, BigInt can be converted to this type, right?

SFC: To Intl mathematical value, yes. But the limits don’t apply to BigInt. You’re making an absolutely correct observation: because BigInts are also formattable, it means that an implementation that supports BigInts does need to overflow to the heap in order to format them.

WH: Well, it’s worse than that. If you accept alternative 2, then you would round BigInts when you format?

SFC: It only applies to strings. Everything in this presentation applies to strings. This is also a comment that EAO gave earlier, and I added “for strings” to almost every slide. I didn’t add it to this slide, but it’s “limit exponent for strings”, “for strings”, “for strings”: the 10K exponent limit and the smaller significant digit limit are for strings. Nothing here applies to BigInts.

WH: It’s still not okay, because if you were to provide a BigInt and round it through here, it should behave the same, and it wouldn’t.

SFC: It definitely does currently do that.
If you take the BigInt, convert it to a string, and pass that to Intl.NumberFormat, it has never behaved the same as if you pass the BigInt directly.

WH: What changes?

SFC: It gets processed as a string. It’s never been the case that the string value of the BigInt behaves the same as the BigInt itself when you pass it to NumberFormat. It’s never been the case.

WH: What visibly changes?

SFC: It gets interpreted as a string. Prior to Intl NumberFormat v3, you would get the closest Number value to the BigInt string; after NumberFormat v3, you retain the significant digits, and it rounds to infinity or zero if it exceeds the capacity of a Number.

WH: Okay. How does the value change, or does it change?

SFC: The least significant digits could be truncated, and if the value is too big, it would get rounded to infinity. It’s been this way for a long time.

WH: I’m not okay with alternative 2. I am okay with the alternative on the previous slide.

SFC: All right. I’m happy to say that we have consensus on this alternative, and I may continue to consult with WH offline; if there’s another limit that we agree on, I will come back at the next meeting. Otherwise, we’ll assume that we proceed with the currently proposed solution, which is the pull request. So I’d like consensus on that.

MM: I’ll just express support for what’s on this slide.

SFC: Okay. Cool. I would like people to pick an alternative and support it. Let’s say the alternative on the slide is the one that we achieve consensus for.

DLM: We only asked for consensus for part 1. We should probably explicitly ask for—

SFC: That’s what I’m asking for now. On Part 2, I’m asking for consensus on the currently proposed solution, which is the one in the pull request, shown on this slide. Let’s say the one on this slide, since the pull request might have bugs; the one on this slide is the intent of the pull request. That’s what I’m seeking consensus on now.

DLM: We have support of MM and WH.

CDA: Just for the notes, there are a few references to “this slide” and “what’s on this slide”. Can you just very quickly describe—

SFC: The slide entitled “currently proposed solution” is what we’re seeking consensus on, in terms of the concrete implementation, the concrete limits; that is Part 2 of the consensus that we’re seeking. Part 1 of the consensus is: do we want to increase the limits? That was agreed to. Part 2 is which limits we want to apply. The limits on the slide entitled “currently proposed solution” are the limits we’re seeking to apply.

DLM: Support from WH, and it would be great to hear support from one other person.

CDA: Support from MM as well.

DLM: With EAO on the queue in support, that makes three. In the absence of any opposition, I think you have consensus. Would you like to do a summary and conclusion for the notes at this time, or do it later?

### Speaker's Summary of Key Points

I was seeking a two-part consensus. The first part is that we should increase the limits of the Intl mathematical value, and the other is applying specifically the limits that are presented on the slide titled “currently proposed solution”.

### Conclusion

Both of those parts achieved consensus.

## Continuation: Amount for Stage 2 (again)

Presenter: Ben Allen (BAN)

* [proposal](https://github.com/tc39/proposal-amount)
* [slides](https://docs.google.com/presentation/d/1cDQBcMzSAht9jZiuaMKAEIDlPmlSmjeBJ-sw23AySWI/edit?slide=id.g37deebb6a10_2_54#slide=id.g37deebb6a10_2_54)

BAN: I wanted to start with a quick example that Jesse whipped up: it is possible to do arithmetic on amounts, including with the two numeric conversion methods, but it is rather bulky. The main thing is that the two open questions that are active on Matrix and elsewhere are naming—Amount versus something other than Amount, or some sort of qualified Amount—and whether or not we should include the numeric conversion methods.
And then there’s the question of whether or not the numeric conversion methods are something we want to include. I think maybe the most useful thing is—let me load up the queue—for folks to get on the queue for these questions.

NRO: For the Google team: does the suggestion of maybe changing the name actually move the needle when it comes to how the API may be used, or did I misinterpret that? It seems like the preference from other delegates is for calling this Amount. If changing the name doesn’t solve the rest of the problem, we should stick with this name.

SFC: I don’t know if OFR has a more authoritative answer to that question. I’m a champion of the Amount proposal, and wearing the hat of representative of the Google internationalization team, I’m happy with the name Amount. But I’m not really confident saying one way or another whether other delegates from Google would feel the same way if we addressed the other concerns about the API design.

DLM: For my side, I think Amount is a better name than the options that were presented in the slides.

OFR: I think I don’t have an answer at this point and we will just have to—I don’t have a better answer than SFC.

MM: Can you go to the duck-typing slide. So, to make sure I understand what is on the slide: both unit and fraction digits could be non-configurable, non-writable data properties, and as KG said, with the duck-typing approach it would be silly to require testing for that as part of the duck type. But you’re not proposing that these be methods such that, on recognition, you would invoke them rather than just reading them; is that correct?

BAN: Yes.

MM: Okay. Now, could you go back to the protocol slide, the full API slide.

BAN: I can show the full API slide from the previous presentation, but that is the one that still contains the numeric conversions.

MM: That’s the one that I want to see, the full API slide.

BAN: Give me a second to get that loaded up.

BAN: This is the version from Monday.

MM: So, if we went for the duck-typing approach—let’s say the mixed duck-typing approach, which is what we do for iterators, such that we did provide a prototype that had helpers on it—which of these helpers would still be motivated enough, in your opinion, in the opinion of the advocates, to include in the language?

BAN: All right. On this question, to make sure that I don’t say things that are not correct, I’d like to punt to NRO, since he’s been working with it most recently.

NRO: Sorry for mistyping—I have opinions on the queue; this is a personal opinion here. What would be helpful is the conversion between fraction digits and significant digits: if this is a plain object that has fraction digits or significant digits, you are working with one or the other, and it’s helpful that, if the object defines one, there is an easy way to get the other before formatting it. For all the rest, probably nothing.

MM: Okay. And for that functionality, having it be functions that take a thing recognized by duck-typing to be an amount as an argument, rather than an inherited method, should be perfectly fine for that functionality; is that correct?

NRO: Yeah, we need to figure out where to put it, though.

MM: Okay. And that can come later, just like iterator helpers came after iterators had been duck-typed—and standardized to be duck-typed—for a long time, and then various kinds of helpers were explored by libraries and established to be useful before they were moved into the language. The same process could apply to the ones here that you are motivated enough to include in the language; is that correct?

NRO: My answer is probably yes, but I see other people on the queue disagree.

WH: My answer to MM's query is that this wouldn’t work, because an `Amount` is not like a tuple, where you can combine arbitrary pairs of objects and that generally works.
An essential part of an `Amount` is the invariants that it maintains, and duck-typing it would violate those invariants.

MM: Good answer. Thanks.

KG: It’s true that we did later retrofit iterators to have a proper prototype with its own values, and that worked out okay. But I think the situation there is kind of not great, because in the intervening years, users could make manual iterators that did not inherit from IteratorPrototype, and that leads to confusion: if you get an iterator from somewhere, you can’t trust that it will be from the iterator prototype. We have the convenience method for fixing that, and we made the constructor available and easy to derive from going forward to minimize the problem. But that situation isn’t good. If we intend to provide an Amount with various helper methods on the prototype in the future, it would be best to do that right away.

MM: So, Kevin, I understand that—good, good point. If the functionality that we expect helpers to provide could as easily be provided by helper functions with amounts as arguments, where the helper functions recognize the amounts by the same duck-typing rules, that counterexample would not apply, correct?

KG: Yes, yes, definitely agreed. We can provide static methods that manipulate values and not run into this problem.

MM: Okay. So the remaining objection to duck-typing would be WH’s?

KG: Well, and—I think that static methods that manipulate values are sometimes suitable. But depending on what those methods are and where these instances are likely to be used, that can be more awkward than prototype methods. So I think we should consider what functionality we expect to provide in the future, and if we really do expect to provide a bunch of helper methods that aren’t just reading properties or whatever, then those things would probably best be done on a class. That is the reason to have a class: if we’re doing something that is more than just having a bag of values.
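A hypothetical sketch of the duck-typed consumption under discussion (all property and function names here are invented for illustration, not from the proposal):

```javascript
// A consumer that accepts any object carrying the expected public
// properties, with no internal-slot branding -- which is also WH's concern:
// nothing guarantees the fields of a plain object are mutually consistent.
function formatAmountLike(amountLike, locale) {
  const { value, unit, fractionDigits } = amountLike;
  return new Intl.NumberFormat(locale, {
    style: "unit",
    unit,
    minimumFractionDigits: fractionDigits,
    maximumFractionDigits: fractionDigits,
  }).format(value);
}

// Any bag of values is accepted:
const text = formatAmountLike(
  { value: "1.5", unit: "meter", fractionDigits: 2 },
  "en-US"
);
```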

MM: But for the functionality from this slide that the advocates said would still be considered useful if we were doing duck-typing, doing them as functions rather than methods seems perfectly fine, do you agree?

KG: Yes. In fact, e.g. toLocaleString is a function that basically already exists as a static function and just happens to also be a prototype method here.

MM: Okay, thank you.

DLM: On the queue we have JHD.

JHD: I will state a much stronger position than KG, and I don’t agree with the thing he just agreed to. There are a few problems. Iterator was the failure case—a huge failure that we didn’t put Iterator in as a global in the first place—and we got lucky that we were able to do it later, because there’s a lot of web compat risk in trying to add something with such a common name later. We squeaked by just barely with Iterator. So I would say, stronger than KG: if we ever add the helpers, they must be added right away—not necessarily as a global, but they must be put somewhere immediately, so we stake a claim on the place where we want to put those helpers in the future. And then, additionally, on prototype methods versus static methods: we need these things to look at internal slots, and if you want them to be arguments to functions, there will be a need for more static methods that look at internal slots on the argument. That creates a bunch of complexity.

KG: We already committed to not doing that. We have said things that consume these won’t look at internal slots.

JHD: Something else that consumes it, perhaps. But if we’re trying to add helpers for it—like the iterator helpers, for example—I think they should be looking at the internal slots, and if they’re on the prototype, we don’t have to do anything weird there, because they can look at the receiver, and the practical membrane transparency concerns are satisfied there.

KG: If they’re not on the prototype, they consume it the same way anything else would consume it. They can use the public interface. There's no reason to use internal slots.

JHD: I don’t agree with that.

MM: If there’s not a compelling reason to look at internal slots—which it sounds, just now, like there is not—I certainly prefer solutions that do not introduce new internal slots for all the—

JHD: I wasn’t aware, I guess, of what you’re describing as a decision. But I think that they should be using internal slots. The whole point of having an Amount, of having a reified thing, is that it can’t be made to not be the reified thing, and other things can’t pretend to be the reified thing, and that’s what internal slots are for.

MM: The equivalent of the class instance—I see, okay. I retract what I was about to say.

DLM: We should go to the queue. I should say we have four minutes left. So the topic was a warning from Chris about time, and then there’s a clarifying question from WH.

WH: As far as I can tell, we haven't reached consensus on not using internal slots.

JHD: I certainly hadn’t heard any.

KG: We had consensus on—or I understood we had consensus on—not using the internal slots when consuming an amount as an argument to another function. I don’t think we have discussed what to do for the prototype methods.

WH: I don’t think we reached consensus on that either, because that also creates the duck-typing problem of broken invariants.

KG: Well, we talked about that in the past, and no one had objected, and then when it was presented today, we said that’s what we were doing. If you want to revisit that, we can revisit that. This is still not Stage 2. But generally I want things consumed via the public interface.

MM: I would insist on that because of the practical membrane transparency issues.
Practical membrane transparency issues don’t argue against internal slot access by inherited built-in methods, but they strongly argue against it for amounts passed as arguments to things.

JHD: But if those things are helper functions that are just for working with Amount, and those things thus belong attached to Amount or `Amount.prototype`, then that becomes a very different discussion.

NRO: So—I don’t remember—the first time this was presented, it had a bunch of other things: it had conversion, a kilometre is how many metres or miles, to present that to the user, basically. It was a big proposal, and we extracted the unit system; I think that part is now pushed to a separate units proposal. This isn’t an argument about this proposal, but if you have an amount, those would be great to have on the prototype here, even if they’re not currently part of this. So if we still want to do that in the future, to provide unit conversion, we need to figure out where to put those methods if we don’t have an Amount prototype.

SFC: There’s definitely been—I don’t think it’s in the current proposal, but I would definitely like to see prototype functions on other types, like Number and a potential future Decimal, to create an amount, to basically associate them with a precision or unit. If they return something, they should ideally return a named object and not just a bag of options, because then you can chain: `number.withSignificantDigits` or `.withUnit` and then `.toLocaleString`. That’s highly ergonomic. Otherwise it’s not clear what is happening there. I don’t think that we should have a method that just returns an unnamed object, an object without a prototype.

JRL: Sorry, I have been doing other P0 work and haven’t fully followed the conversation. On the topic that was just discussed between KG and MM and JHD: I think the precedent being referred to here is that if you take an argument and the argument is an amount, we expect to get the values off the argument via the public interface.
If you have the Amount class and you are invoking a method on the Amount class, then it’s fine for your methods on the Amount to access the `this` context’s internal slots. So if you’re operating on `this`, you can get the slots. If you’re operating on an amount argument passed to the method, you operate through the public APIs. Is there anything else to discuss with that? We decided all of that with the Set methods.

MM: I’m happy with that position. If the internal slots don’t serve the purpose—in other words, if there’s no cost to just doing everything with the public API—it can still be branded, so—well, sorry—if it can all be done with the public API, it can still effectively be an instance of the class, inheriting from the prototype; that would be preferable. But, yeah, if the internal slots are only accessed by built-ins on `this`, I’m fine with that.

KG: WH’s point is that the class maintains invariants, and the only way for consumers to know—if the consumers need to know—that those invariants hold is for there to be branding and internal slots.

DLM: I have to interrupt, we’re at time. Three points of order. Basically, the question is whether we want to have another continuation for this topic or if we’re happy with—

BAN: I think maybe the thing to do right now is just to really quickly list the points that might be potential blockers.

SFC: We don’t have time for that. I think we should have the continuation this afternoon to do that.

BAN: Okay.

CDA: I did mention in the point of order that I captured the queue from Monday, and then completely forgot to paste it in here. Some of the topics may have been covered, as NRO pointed out, but I will add them in. Please remind me in case I forget again. I will add them later this afternoon, and if we skip over them because we covered them sufficiently, that’s fine. They’re all from SFC. Shall we break for lunch?
+ +## Continuation: Update on proposal-module-global + +Presenter: Kris Kowal (KKL) + +* [proposal](https://github.com/endojs/proposal-module-global) +* [slides](https://github.com/endojs/proposal-module-global/blob/main/slides/2025-09-stage1-update.pdf) + +KKL: All right. This is a continuation of the Q & A for proposal-module-global, wherein we propose to add a mechanism—nominally, for the moment, the design we are entertaining is Compartment, as has been seen before—with the additions from the new-global proposal, like the `keys` property that allows the constructor of the compartment to decide what properties of the host global to copy into the new compartment, and then the surrounding mechanisms for importing modules into the compartment. Does anybody have questions they wish to add to the queue, or portions of the queue from carryover? + +CDA: I’m repopulating the queue. When we stopped, we were on KM’s topic about the Maginot Line security boundary, and there is a reply in the queue, which I’m putting in now, from MAH. Is he here? + +MAH: I’m here. I’m trying to page back to that—the blockade analogy. KM, do you mind restating what your Maginot Line concern was? + +KM: We had a lot of experience internally with this. It was source code not necessarily in JS, but I don’t imagine the problems are substantially different once you get to the scale of things we’re talking about: large applications where we try to create a boundary, but the boundary is known at creation time to be somewhat porous, and in the end it ends up not really being one—it ends up being enormously expensive to maintain and not super effective anyway. People trying to get around it are perfectly able to. In this case, the example case that would be hard is: you have a page with, I don’t know, tens or hundreds of thousands of dependencies, each independent code, each in its own compartment and each talking to each other. You don’t know how they transfer data through each other to other dependencies.
They may not directly be able to do it, but there might be code paths that slip data from a non-direct dependency through to themselves, if you look hard enough. So if that isn’t effective, this is a lot of work for the engine, and I’m unconvinced, I guess, at this point that it would actually serve—how do we validate for ourselves that this is serving a large enough pool of people to be worthwhile, without defining what “large enough” is? How do we even prove to ourselves that that’s the case? + +MAH: Right. So what I wanted to say there is, this feels like saying we shouldn’t try security because security is hard. What compartments do, when coupled with lockdown, is provide the foundation for being able to write defensive code. Today, basically, if you have a package—if you have a library—it’s almost impossible to write it defensively, because you can’t really rely on the environment not changing from under you through some weird pollution, whether that’s pollution of the globalThis or pollution of the intrinsics. Pollution of the globalThis is handled by compartments; pollution of the intrinsics is handled by lockdown. So really, what these two together do is provide a solid foundation so that you can now start writing code as you really intended, knowing that when you do `myArrayInstance.push`, it is the real push that is executed, and when I refer to `fetch`, it’s the `fetch` from the system and not whatever thing polluted the global environment. Of course, alone it doesn’t prevent a malicious package from abusing the access it already has through existing APIs, or abusing the access it has to another package and trying to coerce that package—which is potentially badly written—into doing something that it shouldn’t be doing. But it enables those packages to be written defensively in the future, so that those things don’t happen.
What I wanted to say here is: it seems we are saying “let’s not do this and keep all the doors open, because some people might not lock their doors” and so on. Right now you’re basically unable to install a lock on a package and write it defensively. + +KKL: Thank you. The phrase I keep coming back to: it is not the aim to solve all security problems; it is our aim to create a foundation where it becomes possible to solve this problem. + +MAH: And I want to be clear here. A compartment on its own, even without lockdown, does reduce the blast radius already. Without lockdown, a malicious package can try to escape the compartment fairly easily if it’s aware of compartments. Lockdown, with the repairs we talked about—so that, for example, you can’t use `Function.prototype.constructor` to escape your compartment—provides the next level of reducing the blast radius, so that you cannot willy-nilly modify the global execution environment; now you have to actually mount attacks if you manage to make a package malicious. + +KKL: And to answer the point of how we vet this: we’ve had this in production for four years so far. It’s our intention to solicit feedback from as many people as possible and get this into production in advance of getting this into the standard. But not having compartments in the underlying virtual machine creates friction that makes it somewhat difficult to adopt. It is our hope to make progress on that here.
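The `Function.prototype.constructor` escape MAH refers to above is the classic one: any ordinary object reachable inside an unrepaired compartment leads back, through its constructor chain, to the true realm's `Function` constructor, which evaluates code in the real global scope. A minimal sketch of the escape route (variable names here are illustrative):

```javascript
// Classic escape route: every ordinary object's constructor chain
// reaches the Function constructor of the true realm.
const someObjectFromOutside = {};
const RealFunction = someObjectFromOutside.constructor.constructor;

// A Function-constructed function is sloppy-mode, so its `this` is the
// true globalThis, regardless of any emulated compartment global.
const escapedGlobal = RealFunction("return this")();
// escapedGlobal is the real globalThis unless lockdown's repairs
// have severed this path.
```

This is exactly the hole that lockdown's repairs close, by replacing the reachable `Function` constructors with tamed ones.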
+ +MM: I’d just like to add that in advancing proposals in front of the committee, a constraint that we are trying to satisfy—and that I think the committee would insist on even if we weren’t—is that each individual proposal should be the minimal one necessary to enable the rest of the solution to be built in user code or in other proposals, but where the proposal itself does enable the problem to be solved. Versus the Maginot Line constraint: that we have to solve the entire problem all together in one proposal. If that proposal is too complicated, then it simply cannot advance, and solving the entire security problem in one proposal is just kind of a crazy requirement. So as KKL said, we have demonstrated by our own production use—and MetaMask has demonstrated by their own production use, and Moddable, granted with a much more constrained system, is also using this in production—that this is a foundation that can be used to write defensive code that holds up under very heavy security reviews, which we will point to, and in some cases under formal analysis, which we’ll also point to. So it is an enabler. I also wanted to expand on the topic, which is that the enabler only starts to defend against malice when combined with harden and lockdown; but that doesn’t mean that harden and lockdown should be part of this proposal, or even necessarily that they should be proposals in front of the committee. I would still prefer them to be proposals that go in front of the committee, but not part of this proposal; and if they don’t advance, they can be done fairly well in user code, like our shim does, whereas compartments cannot be done well in user code for all the reasons that KKL has mentioned. So the only critical enabler needed to solve the rest of the security problem in user code is compartments themselves. + +USA: Next we have an older topic from JSL: are new-global and ShadowRealm better reconciled as one solution, not two? End of message. + +KKL: This is an interesting one.
There’s an issue on GitHub where we compare and contrast ShadowRealm and compartment. I am not satisfied with our answers, but the shape of them up to this point is that while they do address some of the same cases, they do not address them in the same way, and not all of them. Essentially, depending on how you’re using ShadowRealm and compartment, a lot of the cases where ShadowRealm is useful can be subsumed by compartment. It is only at the point where you are capturing something that requires the ability to own its intrinsics that you need to retreat to the ShadowRealm, and you do so unwillingly, in the face of the decrease in performance that comes with having a full callable-boundary membrane. To that end, most of the problem that can be solved for supply-chain attack resistance is exclusively the purview of compartments, but not of ShadowRealm—except for plugins, where third-party plugins need to own their realm. Neither of these are— + +USA: Did we lose KKL? + +MAH: I had an answer that was—I will continue until KKL comes back. So as KKL was mentioning, the use cases are different. They operate at different levels. I was contrasting that the other time with: we have containers and we have VMs. Sure, you can use a VM for some of the use cases that containers serve, but why would you? It’s a lot heavier machinery for that. In this case, I would argue that ShadowRealm is not suitable for isolating your package dependencies. It is way too heavyweight for that. In the same way, I saw a topic that is now removed asking whether iframes and postMessage could be used—definitely not. That’s even worse. That’s an asynchronous boundary. You cannot use those heavy, complicated isolation mechanisms to isolate lightweight dependencies like the packages that your application relies on. So, simply different use cases.
There are use cases for ShadowRealm, like, as KKL was mentioning, plugins and things like that, where you want to run a more full application in its own environment, and that application has a little bit more freedom to do what it wants, such as mutating the global environment if it so desires. + +USA: All right. Do we have KKL back? + +KKL: Yeah. I have reauthenticated. + +USA: Would you like to add further, or should we move along the queue? + +KKL: I think MAH addressed it. He said everything. + +USA: Let’s move on to MM. + +MM: So I also wanted to add that the question of efficiency across the callable boundary came up, and some people were, I think, not taking it seriously enough. The callable boundary is only even a candidate for doing linkage of existing packages with a near membrane on both sides of the callable boundary. The near membrane necessarily involves proxies. Every implementation—every JITed implementation especially—deoptimizes when it hits proxies. I know of one academic paper, for a hypothetical language, having a JITed implementation that JITs through proxies. I would be astonished if any of the JITed implementations would consider JITting across proxies, and JITting across the callable boundary is just not going to happen. I think that the efficiency argument for linking across the callable boundary is just fatal. The other thing is that if we could only pick one, I think, as we have established, compartments solve a lot that ShadowRealms don’t. ShadowRealms solve one isolated problem that compartments don’t, but it’s an isolated problem at the heart of what Kevin presented, which is avoiding global coordination mechanisms. But if we had compartments without ShadowRealms, then for the browser, for the cases that Kevin is concerned about, we still have multiple same-origin realms through same-origin iframes. And that, plus compartments within each of those, is not a bad starting point for addressing the remaining problems if we had to give up on ShadowRealms.
+ +KKL: KM, your feedback had two prongs to it: essentially, the value versus the cost. I think we have spoken entirely about the value and inadequately about the cost. The olive branch I’m hoping to extend is that we wish to design the compartment going forward to minimize the cost to implementers, on the web in particular, where the entanglements with the global are precarious. It is possible that we might be able to—in any case, I would like to invite you to specifically come and educate us on all of the edge cases so we can address them on a case-by-case basis, provided that we have convinced folks here that the endeavour is worth the value, which I take as given personally. + +USA: So we have a reply that says: we do some JITting but it’s hard because of the validation requirements. And next on the queue we have MAH. + +MAH: Highlighting the cost of proxies—that is one of them, but there are a bunch of other costs in the membrane that you need to put on top of the callable boundary: you need WeakMaps, and a bunch of objects allocated every time you go across the membrane, and that means a lot when you have a bunch of packages sharing data with each other. So is it technically possible? I’m sure. But it would probably be memory-prohibitive, and most likely performance-prohibitive as well. + +MM: And you would have to JIT across the callable boundary in addition to both sides of the membrane. I just can’t imagine serious implementations actually doing that. They always deopt this stuff to some degree. + +USA: All right. Moving on. + +DLM: Yes. So we have heard a bit from KM and OFR about implementation concerns, about complexity and performance. I should say that SpiderMonkey shares those concerns. It is difficult to get into specifics; there weren’t slides beforehand, and I’m not that familiar with the compartments proposal. These are things we will look into in more detail once it has been updated.
I think the idea of us having multiple globals in the same realm is scary from the correctness point of view. And I guess that’s my point: I just don’t want KM and OFR to speak alone. We share the same concerns. + +KKL: Thank you, DLM. We come to this conversation with that understanding. + +MAH: It’s actually something I would like to understand a little bit more. Really, what compartment and new-global do is introduce a new global scope—sure, there are global objects there, but I suspect those global objects don’t really need any, or much, of the special treatment that the global object of the realm has. So I’d like to understand a little bit more why all the engine implementers seem to think there is this complexity associated with it, because it’s mostly introducing a new global scope. + +KKL: I think I can actually, for the purposes of making sure that the delegates representing the browser vendors have been heard: one detail that I know has been mentioned is that although in principle the global object and the base of the execution context are different things, and in principle a lot of algorithms should be looking to the execution context for certain things, as an expedient they sometimes look to the global object instead. Just because of the nature of having maintained a browser over the decades-long development of the web, the incidental identity of the global context and the global object is reliable enough that there are unknown corners of the code base where accidental dependence on that invariant works. I’m sensitive to this, having failed to compile the browser many times in the past, just to be clear. + +MAH: That’s the correct concern, at least at this point. + +USA: All right, if you’d like to speak further. A reply by KM.
+ +KM: In some ways, maybe the example I’m about to talk about, with the WindowProxy, is similar—it might have solved some of the problems already—but there were also huge pains with the WindowProxy in trying to adopt it when that became a thing, I guess, at some point. It sort of predates me. My understanding is that there were all kinds of problems with the WindowProxy, and it caused breakages all over the place when being adopted. That may not be an issue here, but dealing with the scopes, and resolving to the correct one in any particular case, is somewhat difficult, and I believe has had all kinds of performance-characteristic drop-offs in ways we didn’t expect. So I would not be surprised, I guess I should say, if this proposal moves forward, that there will be some feedback that comes along in this area, of “these things make it hard to maintain existing code’s performance.” So it’s a forewarning, I guess. It doesn’t necessarily block it, but I expect there will be issues there, because I have seen them. + +KKL: Yeah, for what it is worth, we expect as much, and we’re in it for the long haul as well. Thank you. + +USA: We have a response by MM. + +MM: So first of all, no disagreement that we don’t know how hard it will be for the high-speed JIT engines to implement this. We have some intuition, but we’re only going to find that out from the JIT implementers themselves. I want to point out specifically that this is one of the differences between compartments and ShadowRealms: the ShadowRealm global was much more entangled with web concerns than the compartment global is.
In fact, if you just take the same proposal and use terminology for the compartment global other than “global”, such as “scope terminator”, it kind of makes it clear that most of what’s associated with the global of the realm can stay associated with the global of the realm, even when the realm contains multiple compartments. The exception to that—which is an artifact of the shape of the proposal, to get it through the committee—is that we are proposing by default to copy the entire contents of the realm global onto the initial compartment global, and therefore to also copy the internal slots, or at least the internal slots that are needed by the built-ins on the realm global. And that, we acknowledge, does create additional complexity. However, we would be perfectly happy to omit the problematic elements of the realm global in the copy, if we can get agreement on that, at which point it’s hard to imagine what the remaining bad entanglement constraints on the compartment global are. + +KKL: And if I can add on to that: it is the case for the compartment shim that we do not replace the true global. We have a scope that is rooted in the true global, that is overshadowed by the scope terminator, and then the compartment’s global. There are inaccuracies in our emulation of a non-compartment environment that flow from that, which is one of the reasons for being here. So, again, what we want to do is create a proposal that is the most attractive to browser vendors specifically, in the sense of minimizing the complexity of implementation. I have a hypothesis that making compartment globals resemble the realm global makes implementation easier; if it turns out that that conceit makes it more difficult, that is a dimension of the design we are very happy to change. + +USA: All right. With that, we have our next topic, which is by OFR. Are you there to speak to it?
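The “scope terminator” technique KKL describes can be sketched in user code: a `with` block over a proxy that claims every identifier stops variable lookup from ever reaching the true global scope, while an inner `with` over the compartment's global supplies the names the compartment is allowed to see. This is a simplified illustration of the shim's approach, not its actual implementation (the real shim also handles `eval`, `Symbol.unscopables` edge cases, and much more):

```javascript
// Simplified sketch of the compartment shim's "scope terminator" trick.
// `has` claims every identifier so lookup never falls through to the
// true global scope; `get` resolves those claims to undefined.
const scopeTerminator = new Proxy(Object.create(null), {
  has: () => true,
  get: () => undefined, // also answers the Symbol.unscopables lookup
});

function makeEvaluator(compartmentGlobal) {
  return (src) => {
    // Function-constructed code is sloppy-mode, so `with` is allowed.
    // `this` is a keyword, not an identifier, so `this.*` below reaches
    // the two scope objects without being intercepted by the `with`s.
    const fn = Function(`
      with (this.scopeTerminator) {
        with (this.compartmentGlobal) {
          return (${src});
        }
      }
    `);
    return fn.call({ scopeTerminator, compartmentGlobal });
  };
}

const evaluate = makeEvaluator({ answer: 42 });
evaluate("answer + 1");  // 43: resolved from the compartment's global
evaluate("typeof Math"); // "undefined": the real global is unreachable
```

The last line shows the isolation: `Math` exists on the true global, but the scope terminator intercepts the lookup before it gets there.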
+ +OFR: That was a very minor point yesterday, but one of the motivations was that the current sandbox requires writing very scary-looking code, and it reminded me of Linux containers. It is kind of similar: there is no “make a container” Linux syscall, and there are scary syscalls that you have to string together to make a container. It’s not considered a problem, because there are one or two implementations that do it, and so even though it’s not nice—this doesn’t have to be nice JavaScript—you write it once and everyone can use the bulletproof implementation. + +KKL: Yeah. And we agree, which is why we’re getting so much traction from this particular design in practice. I suppose that the slide mischaracterized the need. It is not the scariness of the implementation that we want to make progress on, so much as it is the fidelity of the emulation that we want to improve. Our scary mechanism has edge cases that are difficult to emulate properly, or impossible to emulate properly at this point, it’s safe to say. And also, this matters because we do not have native support for module source in particular. Module source leads into compartments and gives us a place to stand to eventually propose a mechanism for lifting plain module text from, for example, a zip file or some other kind of container, and then loading it—or even appealing to the host module loader to parse and create a module source for us and inject that into a compartment. What that would enable us to do is make it so that the conveyance of sources from the developer’s system to the production system would require no transforms, which means an improvement to the debugging experience, which means an improvement to the transparency of the artifacts that need to be reviewed for security reasons as well.
For example, if you look among the dark corners of our current implementation, you will note that our mechanism again only operates on eval-able strings, which is to say we can’t eval a module source. That means we have to transform the source of a module into a JSON representation of a module that has all of the bindings pre-analyzed and the body transformed, via Babel, into a form that can be linked through a combination of evals. All of this is yucky and scary—and that’s an oversimplification of the concern. There are benefits that we see when we’re using a native implementation of compartment, as it exists in Moddable XS, where we can eliminate a lot of layers of complexity and make the resulting artifacts far more debuggable and auditable. + +MAH: Yeah, same thing: right now we have correctness for most cases, but it really requires transforming module code ahead of time so that you can actually end up evaluating it inside of a compartment. So it requires a lot of tooling around it, and you cannot just dynamically import something or anything like that. Actually, dynamic import is one of the things that is prevented, especially for that reason—in the shim, obviously. That’s what we want to fix with a native implementation. + +USA: Next we have a reply to OFR from REK: “The performance of the implementation is a major concern for those of us using it. However—” End of message. I just phrased it weirdly. All right. Next we have a reply by MM. + +MM: One of the many impossible problems of getting good fidelity with the shim for module code is that ensuring the version of the syntax that you’re pre-compiling, or compiling with a userland parser, is the same as the syntax accepted by the target platform—as the committee changes syntax over time—is another simply impossible problem.
And when those disagree with each other, there’s a whole wide space of potential hazards that come from that disagreement. The compartment proposal is really the minimal thing that allows us to avoid having to precompile module code in order to get the benefits of the separation. When I say it’s minimal, I should say: we would be perfectly happy—overjoyed—to entertain proposed simplifications that actually solve the problem, and that’s perfectly fine and part of what Stage 1 and Stage 2 are all about. So anyone who has a way to simplify or further unbundle the proposal in such a way that it still enables the problem to be solved—specifically, enables the malicious supply-chain problem to be solved—we would love to have this stuff be simpler. + +USA: Next we have a reply by KG. + +KG: I guess on that front, I didn’t fully understand what the necessity of the syntax transform you were talking about was. There was something about HTML comments. + +MM: Comments are an additional cost—an additional fidelity problem that’s annoying, but by itself not insurmountable. The problem is simply that we need separation of packages. We’re achieving that by separation of modules, including of course ESM, ECMAScript modules. The shim mechanism by which we’re doing that can only separate eval-able scripts. We have no way in the language to even shim separation of modules without compiling them to eval-able scripts. We simply don’t know of any way to do it other than by compiling them. If anyone does know of a way to do it, or has a more minimal proposal that would enable us to do it, we would love to hear about it. + +KG: Yes, all right. If the reason you need to do the parsing is full transpilation, maybe that’s not feasible. I was going to suggest that if the problem is that you need a parser that matches what the browser has, it’s at least conceivable this could be exposed to userland.
This is something that, you know, SpiderMonkey historically did. There are problems with that if you’re exposing the full power of the parser, but literally just, you know, “give me the string locations of the import declarations” or something like that—that sort of thing is maybe feasible. But it sounds like the things that you’re doing might need more than that, so perhaps that wouldn’t apply. + +KKL: Yeah, we need to fully emulate export namespace objects, among other things. + +KG: I mean, emulating export namespace objects doesn’t involve that much in the way of parsing. + +MM: Altogether, when we take a look at the things that we feel like we need to do in the transpilation—I’m glad to hear that, and obviously, offline, we would welcome your attention to what the actual requirements of the transpilation are, and seeing if there’s a lighter way to do that. But I must say I doubt it. + +MAH: Again, my problem is that it still requires transpilation ahead of time. I don’t see a world in which we can allow, for example, a dynamic import expression to survive and make sure whatever is dynamically imported ends up evaluated in the compartment. That’s why we need the engine to implement this. + +KG: So this is something of a digression, but I mean, it’s definitely imaginable that you could do that if there were built-in utilities that allowed you to transform module text in the way that was required: if we had a constructible module source, and you could find the dynamic imports, which is relatively easy to do, you could slice those out of the source and replace them with a wrapper which does the transpilation at run time. It’s technically possible. I’m not suggesting it’s the right approach. This is kind of a digression.
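KG's digression — slicing dynamic imports out of the source and replacing them with a wrapper — can be illustrated with a deliberately naive rewrite. A real implementation needs a parser, as discussed above, so that matches inside strings and comments are not touched; `__compartmentImport` is a hypothetical hook name, not part of any proposal:

```javascript
// Naive illustration only: route dynamic import() call sites through a
// compartment-controlled loader. A parser, not a regex, is needed in
// practice to avoid rewriting text inside strings or comments.
function rewriteDynamicImport(source, hookName = "__compartmentImport") {
  return source.replace(/\bimport\s*\(/g, `${hookName}(`);
}

rewriteDynamicImport("const m = await import('./dep.js');");
// → "const m = await __compartmentImport('./dep.js');"
```

The shim performs this kind of rewrite ahead of time; the point of the discussion is that a native implementation would make it unnecessary.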
+ +MAH: Just to be clear, all the transpilations—we are not doing them at run time; we do them ahead of time, because all these transpilations are complex, and we don’t want to have to review that code for safety and correctness and include it at run time. Let me rephrase that: the compartment shim is reviewed and sufficient to enforce all the security properties that we want of isolation, and it doesn’t rely on the correctness of the transformation of the module. The correctness of the transformation of the module is just there for preserving the correctness of the program being transformed, the code being transformed. We would prefer not to have to include all the code of the transpiler in there. Anyway… + +KKL: More to the point, I think if we proposed it, this group would balk for sure. + +MAH: Yeah. + +USA: Then we’re at the end of MM’s topic. Oh, GB, did you want to reply to this ongoing topic? + +GB: Yeah, just an extension of this topic, in regards to KG’s comment about the transpilation primitives and the module scrub (?). I was aware that in a meeting the module cross-examine (?) was considered inappropriate because it would introduce a new evaluation primitive. When we were having that discussion—is that in the context of that constraint not existing, or is that still considered a constraint in the committee? + +KKL: I think, to answer GB’s question: in previous Q & A sessions, KG made clear that his previous objection to motion in this proposal in the direction of introducing new paths to eval—that is to say, the existence of the `eval` function on new compartment globals—was founded on the desire for the proposal not to frame that as its primary purpose, and that he does not object to them being there, if it is natural and appropriate, as long as you do not have to rely upon them to exist and be usable in order for the feature to be usable. KG, does that sound correct to you? + +KG: Yes, that’s about right.
I would maybe say it slightly stronger: I am happiest if the normal way of using the proposal doesn’t rely on these things, even if there are cases where some code doing something fancier would—your example of reading from a zip file, for example. That’s not something that most code is going to do, but it’s fine that cases which need it are able to do so. + +KKL: Right. As a baseline, because we can import source using `import source` and dynamic import source instead of lifting from text, having that avenue relaxes your constraints. + +KG: Yes, that’s right. + +KKL: Does that answer your question, GB? + +GB: Yeah, thank you for summarizing. + +USA: Sorry, MM, for holding off on your topic. Now we’re ready to go. + +MM: With LLMs, we’re in the eye of the supply-chain apocalypse. I get the sense that many people do not feel the same urgency about supply-chain risk that we do. This is understandable: first, because we’re building systems that, if attacked through the supply chain, could be damaged extremely badly; second, because many of the supply-chain attacks we see are against the kinds of systems we are building. In particular, supply-chain attacks against wallets explain why MetaMask feels this urgency. In order to communicate the urgency better, ZTZ’s talk emphasized how lucky the ecosystem has been at catching the supply-chain attacks that we know about. So that leaves two things to consider. One is the old survivorship-bias red-dots-on-airplanes story: what are all the supply-chain attacks out there doing damage that we haven’t yet detected? We have no idea. Two is that LLMs are going to change the economics of writing code, writing attacks, and securing code against attacks. LLMs will amplify all sides of these arms races. Which things are amplified first, what the degree of amplification is, and how the timing works out are all largely unknown. Attacks don’t have to be reliable. If they usually work, that’s good enough to be a successful attack.
Because of that, the unreliability of LLM-written code will not inhibit automating the creation of massive numbers of attacks, including supply-chain attacks. Across the software world, we should expect a flood of automatically written attacks. Because LLMs are currently not very good at writing code that can’t be attacked, we have a problem! Everything that we have talked about here only mitigates that problem, but cannot solve it by itself. These mechanisms taken together do set us on a path to continue to mitigate. But if we don’t do it now, we’re going to run into this apocalypse with no defenses. + +WH: I would like to second MM's position on this issue. + +MM: Thank you. + +WH: Things are about to get quite interesting. + +### Speaker's Summary of Key Points + +(summary of original topic covers all continuations) + +### Conclusion + +(conclusion of original topic covers all continuations) + +## Amount continuation + +Presenter: Ben Allen (BAN) + +* [Amount](https://github.com/tc39/proposal-amount) +* [slides](https://notes.igalia.com/p/2025-09-tc39-plenary-amount-continuation#/) + +CDA: SFC, I’m just going to quickly read off the items that I captured in the queue. I think some of them were covered earlier; I don’t want to dismiss them all in case they did need or want further discussion. The topic at the time was from KG, who still would like to understand the motivation for a language-level feature; there was a reply from SFC about the protocol being primordial, a language-level format lingua franca, and 402 discoverability, and we covered that. And the next items are—sorry, go ahead. + +SFC: I believe we covered all the items on the queue. + +CDA: The last one is W3C amount tags. Somebody was asking where that was; there was some reference for that. + +(?): I think there’s a link posted in Matrix: https://github.com/mozilla/explainers/blob/main/amount.md + +CDA: Okay. Then I have nothing to add to the queue from before.
BAN: I will put up the short continuation-of-the-continuation-of-the-continuation slides, sort of summarizing what we have accomplished with this one in this meeting and what the things are that we’re going to be working on or towards for the next meeting. My sense is that the couple of things I will be listing as open questions aren’t things that will necessarily be resolved in the next 45 minutes or whatever, and that we will be working with folks on them offline. Without further ado, let me do some sharing. Okay. So we have made a lot of progress on a lot of things. One thing, though: it seems like we have resolved the concerns about how significant digits are calculated. We are going to be going with that PR that Nic put up. We might be iterating on that on GitHub, but it seems like that resolves the problems. The thing that we are working towards (and I think we have made a tremendous amount of progress this time, but probably won’t be able to resolve in the next hour) is the diabolical naming question, i.e., Amount versus something other than Amount. There’s a lot of active discussion going on in the Matrix channel right now that I’m keeping track of, and again we will work with the V8 team and others on that. The other major thing that we have discussed and are making progress on, it seems, is the question of whether or not to include the numeric conversion methods. And something that came up is that if we don’t have them, users can just use `Number.parseFloat` instead. That might rely on `toString`: `Number.parseFloat` implicitly calls `toString` on its argument. Users have a way to make Amounts into Numbers, one that clearly indicates that this is going to a float, and similarly for BigInt. Those two questions are the things we will be working on between now and the next plenary. + +WH: There have been questions about what this is useful for and why we should have this other than just an API. The main thing that this seems to be used for is interoperability.
If we’re using this for interoperability, frustrating users who want to interoperate this with Numbers is moving in the wrong direction. So removing the conversions from Amount to Number would defeat the whole purpose of having this. + +BAN: I think we have a reply from Shane on the queue. + +SFC: I think that the interop is more than just converting to and from Numbers. The interop is about having a bag that contains a numeric thing which, at the point at which it reaches Amount, is intended to have already been processed, right, into a numeric string annotated with the unit and precision that are useful for turning it into the human-readable representation. And having that interop be usable when, for example, passing between a library and a templating engine is, I think, a use case that is still covered even if we make the conversion back to a Number have an extra step. + +NRO: So, about `Number.parseFloat(amount)`: I was expecting the conversion, if we remove `toNumber`, to be much more complex. This actually works because `Number.parseFloat` ignores unknown characters after the numeric part of the string, which means that even if there’s the unit, `Number.parseFloat` will ignore it. This is actually not much more annoying for users to use; it’s just that they need to grab the metadata from a different place. So this actually alleviates the V8 concerns; I think it adds a little bit of friction, but not very much at all. + +CDA: There was a clarifying question from WH. + +WH: Are you sure about that? I’m thinking about cases where units might look like a continuation of a number. + +NRO: The units are in square brackets. + +WH: Always in square brackets? + +NRO: They’re wrapped in square brackets. The stringified version wraps the unit in brackets. + +WH: Okay. + +KG: You mentioned your open questions that you were planning on working on, which is great. Please also spend some time laying out the case for why this should exist.
And by this existing, I mean the first-class class, not just a protocol. I’m open to it existing. We’ve talked about some reasons to want a class today. WH had a point about representing trailing zeros that I didn’t fully understand, and maybe that’s a sufficient reason for it to exist, or maybe it’s because you want to be able to trust certain invariants for these things, although if we're consuming with the protocol and not brands, maybe that’s not the case. Maybe the plan is to have helper methods, I don’t know. There are lots of reasons to make a class. I have not understood what the reasons for making this class are thus far that aren’t satisfied by having the protocol and then having a page on MDN. Please write those down. Don’t just come to the next presentation with the answer for me. Please write it down. + +BAN: I will add it to the slide right now. + +EAO (via queue): Amount makes the rounding modes of Intl NumberFormat generally available. + +SFC: KG, do you agree that the protocol part should exist? Hypothetically, if we were to have a proposal proposing just the protocol and the Intl NumberFormat method, do you feel that that part is motivated? Are you convinced by that side of the argument? + +KG: Honestly, I haven’t bothered to understand that side of the argument. I generally trust that when people are asking for things for Intl, it's because those things need to exist. I’m happy to defer to you on the question of whether formatters which take both a value and a unit need to exist. If you say they do, I take your word for it. + +SFC: Okay. That’s a valid answer. And my next queue item is that I just want to flag, for other delegates that share KG's (I don’t know what word to use) skepticism of the primordial, I want to make sure we know who those are so we can continue to communicate and make sure that the argument that we put forth is compelling.
I think that, yeah, we can talk with—we can make sure that we put together an argument that maybe helps bridge the gap with KG, and if there’s anyone else, I want to make sure that we do that. + +KG: I think a couple of people said something in Matrix. I’m not sure if those people are in the meeting. You might want to review the logs as well. + +SFC: And to be clear, I’m not trying to single out KG, but I’m trying to make sure that all the perspectives are considered. Every delegate brings a certain angle of stakeholders, and I think that KG obviously represents a certain perspective. And I just want to make sure that all other perspectives are covered. + +CDA: Message from JMN that there’s a biweekly JavaScript numerics call that is on the TC39 calendar. + +NRO: I guess this is a question for KG. You mentioned there are future plans to add things to this Amount class, to avoid making the, in quotes, "mistake" where something was just a protocol and worked okay, but could have been a class from the beginning. If there is something we plan to present in the future, is saying that there’s a plan for a different proposal motivation enough? In general, should the proposal try to justify Amount by itself, without relying on the next proposal as the motivation for doing something? + +KG: Just saying that there’s another proposal I think would not be sufficient, but establishing that there is consensus that additional methods would be useful, without necessarily nailing down what those specific methods are, would be sufficient. + +NRO: Okay. + +CDA: That’s all for the queue. + +BAN: I want to thank everyone involved in the conversation for your time and feedback. This has been, from our perspective at least, tremendously useful. The plan is to respond to these open questions offline and then come back with this in the next plenary. + +CDA: All right.
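To make the `Number.parseFloat` escape hatch from the earlier queue discussion concrete, here is a minimal sketch. It assumes the stringified Amount is the numeric part followed by the unit in square brackets, as NRO described; the `"1.50[usd]"` string is an illustrative stand-in, since Amount is still a proposal and no engine implements its `toString` yet.

```javascript
// Hypothetical serialization of an Amount: numeric part, then the unit
// wrapped in square brackets (illustrative; the proposal's exact
// serialization may differ).
const serialized = "1.50[usd]";

// Number.parseFloat stops at the first character that cannot be part of
// a number, so the bracketed unit is simply ignored.
const value = Number.parseFloat(serialized);
console.log(value); // 1.5

// The unit metadata has to be recovered separately, e.g.:
const unit = serialized.match(/\[(.*)\]/)?.[1];
console.log(unit); // "usd"
```

Because `[` can never continue a numeric literal, the unit cannot be mistaken for part of the value, which addresses WH's clarifying question above.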
SFC: I will just add my best attempt to sense the temperature of the room, and if you disagree with this, this is maybe a good time to say so. My read is that the use case motivated by Intl and by `messageformat` seems to be something that has at least some amount of consent or understanding amongst the delegates, and that whether this type called Amount has use cases beyond Intl is a point on which multiple delegates have strong opinions on one side or the other. I think one of the focuses over the next cycle, before we get to the next plenary, is going to be to attempt to find common ground on the balance between what the intended use case is versus what developers might assume the use case to be, and to do our best to bridge that, as well as working out some of the nitpicks around significant digits. I don’t mean to call them nitpicks: the significant details around significant-digits handling, infinities, what happens when significant digits are fewer than integer digits, and so on, as well as possibly some of the concerns about the handling of polymorphic numbers, that is, handling numeric types as numerical values as opposed to retaining the input number as part of the data model. So that’s my understanding of where we currently stand, and I think that the champions have their work cut out for them. As for the time we spent during plenary this week, I really appreciate that the chairs gave us multiple extensions. I feel like every extension has been productive in helping us narrow in on what the core conflicts are, and my hope is that next time we come back, we’ll be at the state where those core contentions have been resolved. So thank you all for your time during this plenary to iterate on this proposal. And thank you to BAN and everyone else for keeping on top of everything that’s been going on.
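As a reference point for the Intl-motivated use case SFC describes, the two pieces of existing `Intl.NumberFormat` behavior that came up in discussion, formatters that take a value plus a separate unit option, and configurable rounding modes, look like this today (the locale, unit, and rounding options below are illustrative choices, not part of the Amount proposal):

```javascript
// Today the value and the unit travel separately: the unit is a
// formatter option rather than part of the value. Bundling them (with
// precision) into one object is what the Amount proposal targets.
const km = new Intl.NumberFormat("en-US", { style: "unit", unit: "kilometer" });
console.log(km.format(5)); // "5 km"

// Rounding modes already exist inside Intl.NumberFormat; per EAO's
// queue item, Amount would make this rounding behavior generally
// available outside of formatting.
const halfCeil = new Intl.NumberFormat("en-US", {
  maximumFractionDigits: 0,
  roundingMode: "halfCeil",
});
console.log(halfCeil.format(2.5)); // "3"
```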
CDA: Awesome. Thank you SFC, thank you BAN. Thanks everyone. + +### Speaker's Summary of Key Points + +* Resolved the significant digits issue via [compute significant digits from fraction digits](https://github.com/tc39/proposal-amount/pull/66) +* Open questions: + * Name: Amount, or something more qualified + * Include numeric conversion methods? + * Reasons for more than just a protocol + +### Conclusion + +* Continue to iterate offline and return in Tokyo + +CDA: That is the last topic that we have, so unless somebody has any last-minute topic they want to discuss, that would be the end of plenary. I will remind everybody to please review the notes, particularly comments attributed to you, and check them for correctness and accuracy. Presenters, please make sure there is a coherent summary and conclusion for your topics. It is also helpful, if there are missing links to your slides, or proposal and PR links and the like, to get those fixed up. And a reminder that we have the upcoming plenary in Tokyo. Please remember that if you want to attend in person, you must complete the in-person registration form, which is linked in the reflector issue. That is separate from the original survey, the interest survey; that is not enough to register your attendance. You have to complete the actual in-person registration form. NRO is asking: is there a way to check if you completed the form? Yes, by asking us. + +NRO: Okay. This comes up after every single plenary, and every time I struggle to remember. + +RPR: The Google Form option emails you afterwards, so for people that register in the future, the answer will be: check your email. And to your question specifically, Nicolò: yes, you are on there. + +CDA: Okay. If anyone else wants to double-check, you can do that now.