2024-12-01 [17:39:23.0455] Anyone know how to update this site? https://tc39.es/proposal-decorators/ data seems behind [19:52:36.0275] nullvoxpopuli: all of the `tc39.es/proposal-whatever` sites are served from github pages from the corresponding github repo, in this case https://github.com/tc39/proposal-decorators [19:52:39.0259] so you could open an issue there [19:53:33.0814] or just send a PR to the gh-pages branch which replaces it with a meta redirect to the spec PR, I guess https://github.com/tc39/ecma262/pull/2417 2024-12-03 [21:01:57.0924] Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a feeling on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired callable macros ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe(push)] baz = undefined } ``` While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:02:36.0297] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired callable macros ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe(push)] baz = undefined } ``` While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:03:55.0863] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired callable macros ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe(push)] baz = undefined } ``` The syntax itself can be changed, I am just defaulting to Rust syntax for familiarity, but something like `jsx!()` could be `@jsx()` or anything, I'm not opinionated here While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:05:02.0762] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired callable macros ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe(push)] baz = undefined } ``` The syntax itself can be changed, I am just defaulting to Rust syntax for familiarity, but something like `jsx!()` could be `~jsx()` or anything (to avoid conflicts), I'm not opinionated here While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:07:05.0075] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired callable macros ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe] baz = undefined } ``` The syntax itself can be changed, I am just defaulting to Rust syntax for familiarity, but something like `jsx!()` could be `~jsx()` or anything (to avoid conflicts), I'm not opinionated here While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:08:27.0765] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired callable macros. No macros would be shipped in ECMAScript, all macros would be defined by users as functions that take tokens and return valid ECMAScript. ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe] baz = undefined } ``` The syntax itself can be changed, I am just defaulting to Rust syntax for familiarity, but something like `jsx!()` could be `~jsx()` or anything (to avoid conflicts), I'm not opinionated here While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:12:12.0696] For the "create a getter/setter" example, that looks like it can be accomplished via a decorator https://github.com/tc39/proposal-decorators [21:16:57.0299] Hi and welcome btw 👋🏻 [21:22:43.0841] Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc ```javascript const vueComponent = { template: vue!(
Hello {data.world}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') ``` [21:24:20.0592] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {data.world}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') ``` [21:25:48.0976] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {data.world}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') ``` [21:28:21.0106] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired static macros. Macros are functions that take tokens and return static ECMAScript, similar to a template literal + eval, but with LSP support and without the need for eval. All macros would be defined by users as functions that take tokens and return valid ECMAScript. ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe] baz = undefined } ``` The syntax itself can be changed, I am just defaulting to Rust syntax for familiarity, but something like `jsx!()` could be `~jsx()` or anything (to avoid conflicts), I'm not opinionated here While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:29:10.0070] * Hi all, new guy here! I'm entertaining the idea of opening a proposal but wanted to get a sense on how the community feels about the concept before I commit a weekend to writing it out - haha. Generally, it's the concept of the incorporation of Rust-inspired static macros. Macros are functions that take tokens and return static ECMAScript, similar to a template literal + eval, but with LSP support and without the need for eval. All macros would be defined by users, none defined in the language. ```javascript // Parse tokens and generate React.createElement() calls const Foo = () => jsx!(
Hello World
) // Create a getter/setter for a property class Bar { #[observe] baz = undefined } ``` The syntax itself can be changed, I am just defaulting to Rust syntax for familiarity, but something like `jsx!()` could be `~jsx()` or anything (to avoid conflicts), I'm not opinionated here While I know ECMAScript is not a compiled language, there are interesting use cases like; - Enabling support for jsx/other templating languages without an external preprocessor/eval - Macro annotations that facilitate more ergonomic mutation observability leading to more ergonomic syntax for frameworks like Vue/Angular - A reduced need for custom compilers (ngc, .vue, .svelte) - More ergonomic usage for tools like protobuf - etc This would also lead to support in preprocessors like TypeScript which _could_ statically compile macros - offering an interesting/novel hybrid macro system. Any thoughts on the concept? [21:31:51.0416] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {data.world}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') // Expands to // const contents = { foo: 'bar' } ``` [21:34:16.0216] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {data.world}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') // Expands to // const contents = { foo: 'bar' } ``` LSPs would see the expanded form of macros and the macro implementation would define where errors occur in a transformation [21:36:23.0223] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {{data.world}}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') // Expands to // const contents = { foo: 'bar' } ``` LSPs would see the expanded form of macros and the macro implementation would define where errors occur in a transformation [21:41:15.0645] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {{data.world}}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') // Expands to // const contents = { foo: 'bar' } ``` LSPs would see the expanded form of macros and the macro implementation would define where errors occur in a transformation. In the Rust world, this has been used to great effect to create GUI frameworks with vastly different approaches without preprocessors, some resembling React, others Vue: - Web wasm frameworks [[1]](https://yew.rs/) [[2]](https://leptos.dev/) - Native desktop frameworks [[1]](https://github.com/Relm4/Relm4?tab=readme-ov-file#a-simple-counter-app) [[2]](https://iced.rs/) [21:42:19.0644] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, xml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {{data.world}}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') // Expands to // const contents = { foo: 'bar' } ``` LSPs would see the expanded form of macros and the macro implementation would define where errors occur in a transformation. In the Rust world, this has been used to great effect to create GUI frameworks with vastly different approaches without preprocessors, some resembling React, others Vue: - Web wasm frameworks [\[1\]](https://yew.rs/) [\[2\]](https://leptos.dev/) - Native desktop frameworks [\[1\]](https://github.com/Relm4/Relm4?tab=readme-ov-file#a-simple-counter-app) [\[2\]](https://iced.rs/) [21:57:55.0701] * Yes, I do agree that this idea shares utility with decorators however, due to the static nature of macros (accept tokens, return ecmascript), it expands the concept in various ways, like; - Opening a path for preprocessors (like TypeScript, swc, probably v8) to do AoT compilation leading to applications being faster at runtime - Incorporating the annotatability of variable assignments as well as class properties - Runtime (and potentially AoT) transformation of tokens which allows parsing (and LSP support) for jsx, templating languages and even data formats like yaml, xml, protobuf, etc - Many more use cases ```javascript const vueComponent = { template: vue!(
Hello {{data.world}}
), data: () => ({ world: 'World' }) } const contents = yaml!(foo: 'bar') // Expands to // const contents = { foo: 'bar' } ``` LSPs would see the expanded form of macros and the macro implementation would define where errors occur in a transformation. In the Rust world, this has been used to great effect to create GUI frameworks with vastly different approaches without preprocessors, some resembling React, others Vue: - Web wasm frameworks [\[1\]](https://yew.rs/) [\[2\]](https://leptos.dev/) - Native desktop frameworks [\[1\]](https://github.com/Relm4/Relm4?tab=readme-ov-file#a-simple-counter-app) [\[2\]](https://iced.rs/) [21:58:51.0375] I am curious how things like auto-complete and refactoring work within Rust macros. These both sound easier for tooling to implement when the language extension is concrete, instead of being defined as a macro. [22:21:11.0512] For autocomplete, the LSP expands the macro and uses the output ```javascript const foo = add!(1, 1) ``` Would expand into ```javascript const foo = 2 ``` So consumers see it as a `number` rather than a macro. It simply replaces itself with the transformed result. As for refactoring; when someone implements a macro, they parse "tokens" which they move around into the equivalent valid language syntax. The editor can unwind that to determine what to update - though there are some macros that are too complex and refactoring tools bail out. Implementers can also throw during parsing, which bubbles up to the editor, showing errors in the correct place in the macro [22:21:30.0686] * For autocomplete, the LSP expands the macro and uses the output ```javascript const foo = add!(1, 1) ``` Would expand into ```javascript const foo = 2 ``` So consumers see it as a `number` rather than a macro. It simply replaces itself with the transformed result. As for refactoring; when someone implements a macro, they parse "tokens" which they move around into the equivalent valid language syntax. The editor can unwind that to determine what to update - though there are some macros that are too complex and refactoring tools bail out. Implementers can also throw during parsing, which bubbles up to the editor, showing errors in the correct place in the macro [22:31:50.0199] For auto-complete, when the macro is expanded, how is the IDE cursor position preserved? [22:32:36.0884] the position could expand to a much larger block of code [22:34:29.0276] the expanded code would also lack type annotations, meaning TypeScript would only be able to infer what might be a correct auto complete [22:38:06.0853] > These both sound easier for tooling to implement when the language extension is concrete Macros are more of a preprocessing step rather than dynamic functionality (like decorators) - but there's a chicken and egg problem. The best supported illustration of macros in tooling is the built-in transformation of jsx in TypeScript. There are many GitHub issues requesting to expand this to offer custom programmatic macros however it's has been deemed out of scope. Attempts by other toolmakers to implement the functionality that macros offer have resulted in fragmented, brittle implementations (.vue files and .svelte files - custom tooling, editor plugins and poor support for test runners, linters, formatters, etc). Decorators also feel like an attempt at addressing the same problem albeit at runtime. 
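(For the `#[observe]` half of the example, the decorator route mentioned earlier can already be sketched against the stage 3 decorators proposal; the `observe` decorator below is illustrative only and runs at class-definition time rather than being statically expanded.)

```javascript
// Illustrative accessor decorator (tc39/proposal-decorators): wraps the
// auto-generated setter so writes to `baz` can be observed at runtime.
function observe({ get, set }, { name }) {
  return {
    get,
    set(value) {
      console.log(`setting ${String(name)} to`, value);
      return set.call(this, value);
    },
  };
}

class Bar {
  @observe accessor baz = undefined;
}

new Bar().baz = 42; // logs: setting baz to 42
```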
After spending some time in the Rust world, I personally feel that there is evidence that ES world shows a need for this type of functionality [22:40:37.0041] > the expanded code would also lack type annotations, meaning TypeScript would only be able to infer what might be a correct auto complete It's likely that TypeScript would compile macros into TypeScript at compile time (and in the editor), where the emitted runtime code would be the static JavaScript equivalent [22:41:26.0548] * > the expanded code would also lack type annotations, meaning TypeScript would only be able to infer what might be a correct auto complete It's likely that TypeScript would compile macros into TypeScript at compile time (and in the editor), where the emitted runtime code would be the static JavaScript equivalent That said, if they simply strip the types and leave macros to be evaluated at runtime, then perhaps type information would be an issue [22:42:07.0899] * > the expanded code would also lack type annotations, meaning TypeScript would only be able to infer what might be a correct auto complete It's likely that TypeScript would compile macros into TypeScript at compile time (and in the editor, like they do with jsx), where the emitted runtime code would be the static JavaScript equivalent That said, if they simply strip the types and leave macros to be evaluated at runtime, then perhaps type information would be an issue [22:42:18.0701] I'm also wondering for tools like swc, it wouldn't be able to execute the macros natively in Rust. It would need to delegate to JS adding overhead that negates the tools desire for speed [22:43:03.0864] Unlike with Rust macros where the compiler naturally understands how to execute Rust [22:45:38.0773] > I'm also wondering for tools like swc, it wouldn't be able to execute the macros natively in Rust. It would need to delegate to JS adding overhead that negates the tools desire for speed Haha, this is an area I've been working in quite a bit. swc can call into Nodejs to delegate evaluation - it's not as fast as rust native code and there is overhead in crossing from Rust to Nodejs, but it wouldn't be a substantial performance hit. There are also options like [QuickJS](https://bellard.org/quickjs/) and [llrt](https://github.com/awslabs/llrt) for macros that don't use nodejs functionality [22:46:41.0025] * > I'm also wondering for tools like swc, it wouldn't be able to execute the macros natively in Rust. It would need to delegate to JS adding overhead that negates the tools desire for speed Haha, this is an area I've been working in quite a bit. swc can call into Nodejs to delegate evaluation - it's not as fast as rust native code and there is overhead in crossing from Rust to Nodejs, but it wouldn't be a substantial performance hit. There are also options like embedding Deno, [QuickJS](https://bellard.org/quickjs/) or [llrt](https://github.com/awslabs/llrt) for macros that don't use niche nodejs functionality [22:50:33.0217] * > I'm also wondering for tools like swc, it wouldn't be able to execute the macros natively in Rust. It would need to delegate to JS adding overhead that negates the tools desire for speed Haha, this is an area I've been working in quite a bit. swc can call into Nodejs to delegate evaluation - it's not as fast as rust native code and there is overhead in crossing from Rust to Nodejs, but it wouldn't be a substantial performance hit (hundreds of millions of nodejs calls per second). 
There are also options like embedding Deno, [QuickJS](https://bellard.org/quickjs/) or [llrt](https://github.com/awslabs/llrt) for macros that don't use niche nodejs functionality [22:51:50.0383] Also, I really appreciate you entertaining this idea and asking such great questions 🙂 [22:53:07.0967] > For auto-complete, when the macro is expanded, how is the IDE cursor position preserved? If I get to the point where a proposal is actually on the cards, I will certainly dig deeper into how Rust does this to provide a clearer answer [22:55:23.0177] * > These both sound easier for tooling to implement when the language extension is concrete Macros are more of a preprocessing step rather than dynamic functionality (like decorators) - but there's a chicken and egg problem. The best supported illustration of macros in tooling is the built-in transformation of jsx in TypeScript. There are many GitHub issues requesting to expand this to offer custom programmatic macros however it's has been deemed out of scope. Attempts by other toolmakers to implement the functionality that macros offer have resulted in fragmented, brittle implementations (.vue files and .svelte files - custom tooling, editor plugins and poor support for test runners, linters, formatters, etc). Decorators also feel like an attempt at addressing the same problem albeit at runtime. After spending some time in the Rust world, I personally feel that there is evidence that the ES world shows a need for this type of functionality [23:06:37.0379] * > I'm also wondering for tools like swc, it wouldn't be able to execute the macros natively in Rust. It would need to delegate to JS adding overhead that negates the tools desire for speed Haha, this is an area I've been working in quite a bit. swc can call into Nodejs to delegate evaluation - it's not as fast as rust native code and there is overhead in crossing from Rust to Nodejs, but it wouldn't be a substantial performance hit (hundreds of millions of nodejs calls per second). There are also options like embedding Deno, [QuickJS](https://bellard.org/quickjs/) or [llrt](https://github.com/awslabs/llrt) for macros that don't use niche nodejs functionality Alternatively, if macros are defined using a templating syntax (like callable macros in Rust) then an interpreter could be written (or taken directly from an existing JS engine) and run entirely in Rust [example](https://doc.rust-lang.org/book/ch19-06-macros.html#declarative-macros-with-macro_rules-for-general-metaprogramming) [23:08:52.0547] * > I'm also wondering for tools like swc, it wouldn't be able to execute the macros natively in Rust. It would need to delegate to JS adding overhead that negates the tools desire for speed Calling into Nodejs is not as fast as running Rust native code directly and there is overhead in crossing from Rust to Nodejs, but it wouldn't be a substantial performance hit (can do hundreds of millions of nodejs calls per second). 
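(To make the `yaml!(foo: 'bar')` expansion earlier in the thread concrete: the closest thing expressible in today's JavaScript is a tagged template whose "expansion" happens at runtime, whereas the proposed macro would be replaced by the resulting object literal before evaluation. A rough sketch; the `yaml` tag and its toy parser are purely illustrative.)

```javascript
// Runtime-only approximation of a callable macro: a tagged template that
// parses its contents when the expression is evaluated, not at parse time.
const yaml = (strings, ...values) => {
  const source = String.raw(strings, ...values);
  // Toy parser: one `key: value` pair per line, optional single quotes.
  return Object.fromEntries(
    source
      .split("\n")
      .map((line) => line.trim())
      .filter(Boolean)
      .map((line) => {
        const [key, ...rest] = line.split(":");
        return [key.trim(), rest.join(":").trim().replace(/^'(.*)'$/, "$1")];
      })
  );
};

const contents = yaml`foo: 'bar'`;
// contents is { foo: "bar" }; a real macro would have emitted that object
// literal statically, leaving no parsing to do at runtime.
```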
There are also options like embedding Deno, [QuickJS](https://bellard.org/quickjs/) or [llrt](https://github.com/awslabs/llrt) for macros that don't use niche nodejs functionality If callable macros are defined using a templating syntax (like in Rust) then an interpreter could be written (or borrowed directly from an existing JS engine) and run entirely in Rust [example](https://doc.rust-lang.org/book/ch19-06-macros.html#declarative-macros-with-macro_rules-for-general-metaprogramming) [23:32:06.0686] rust macros have some really unfortunate limitations tho - in particular, it seems they can't handle anywhere close to the level of expressiveness and dynamism i'd expect coming from JS [08:58:28.0857] there are two kinds of rust macros; for the cases where the normal macros aren't expressive enough, you can write a procedural macro to literally bypass the parser and operate on a stream of tokens from the lexer [08:59:08.0856] (and the JS equivalent of _that_ starts to sound like https://github.com/tc39/proposal-binary-ast) [09:02:29.0670] and even with that i still haven't been able to get what i want to do, done :-/ i'm sure it's a mix of lacking capability in both self and language [09:03:22.0832] what haven't you been able to do with proc macros? [09:05:46.0664] David Alsh: macros in the language mean moving complexity and build time from the developer's computer to the user's computer, which I think is very negative [09:05:59.0577] as a developer it's obviously appealing, but in terms of its effects on the world I think it would be very bad [09:06:10.0720] Proc macros can generate arbitrary Rust code by executing arbitrary Rust code taking arbitrary lexer tokens as inputs. They are turing complete - there is nothing they can not do [09:07:58.0443] they can't properly attribute error locations :P [09:08:08.0623] They can in nightly :D [09:08:20.0627] > <@lucacasonato:matrix.org> Proc macros can generate arbitrary Rust code by executing arbitrary Rust code taking arbitrary lexer tokens as inputs. They are turing complete - there is nothing they can not do there's a ton of things that Turing machines cannot do [09:09:12.0180] Sure. Then more accurately: they have all the same capabilities as any other Rust program (because they are a special kind of Rust program) [09:09:56.0252] then that probably means that rust's type system can't do what i want 🙎‍♂️ [09:10:07.0062] * then that probably means that rust's type system can't do what i want :-p [09:48:27.0292] I started looking at code generation (like C#) before to see how it could function in JS and it has similar issues as it moves what might be build-time to the user's computer which as mentioned is contentious. [13:01:03.0245] Hey all, just doing some sketching out on a proposal for a `Promise.lazy(fn)` API (https://github.com/jasnell/proposal-promise-lazy) ... the idea is to simplify a pattern that is currently possible to do with custom thenables with a fair amount of boilerplate. This is very early and not yet ready for in committee discussion but I'd like to invite folks to take a look at provide early feedback. I'll likely raise it for in-committee discussion for the next scheduled plenary [13:16:03.0140] > <@jasnell:matrix.org> Hey all, just doing some sketching out on a proposal for a `Promise.lazy(fn)` API (https://github.com/jasnell/proposal-promise-lazy) ... the idea is to simplify a pattern that is currently possible to do with custom thenables with a fair amount of boilerplate. 
This is very early and not yet ready for in committee discussion but I'd like to invite folks to take a look at provide early feedback. I'll likely raise it for in-committee discussion for the next scheduled plenary neat [13:18:38.0300] one thing to consider: `Promise.resolve` (which is also used in the `await` syntax) has a special case for "real" promises (which works by checking the `.constructor` property), and you'd need to consider whether this would count [13:18:51.0942] I am pretty sure this is not going to be acceptable to some delegates, including us. Promises are one-way, and I don't want them to grow a back channel [13:19:23.0818] > <@mhofman:matrix.org> I am pretty sure this is not going to be acceptable to some delegates, including us. Promises are one-way, and I don't want them to grow a back channel it's not adding them to actual promises [13:19:29.0903] it's making a new kind of thenable, which you could already do in userland [13:19:54.0096] We actually go to great lengths to protect against promises with a custom then or then getter in our environment. [13:20:52.0003] Right, nothing prevents anyone from making a new thenable, but that's not a promise [13:21:16.0270] well, ok, call it LazyPromise and make it a subclass instead of `Promise.lazy` then [13:21:22.0004] that seems like a stage 2 concern [13:21:38.0226] `Promise.resolve()` which is extensively used to check if something is a promise would trigger the laziness [13:22:37.0356] ¯\_(ツ)_/¯ seems fine [13:22:54.0481] I very much contest "extensively" though [13:23:02.0573] I would buy "extensively in our extremely unusual codebase" [13:23:27.0374] Right, I might have a myopic vision here. [13:23:46.0872] `await` does feel like the wrong place to start work [13:24:05.0164] fwiw that's how it works in Rust and it's fine [13:24:59.0783] Well, to be clear, per the proposal, the work is not started on the `await` (or the .then call). The actual work is deferred as a microtask. [13:25:26.0640] oh I jumped to the wrong assumption [13:26:02.0369] Mathieu Hofman: ... I get the objection... if these were brand-checkable would that make a difference to you? e.g. something like `const p = Promise.lazy(...); if (Promise.isLazy(p)) { ... }` [13:26:10.0723] FYI that `class LazyPromise` example doesn't work correctly when `then` is called multiple times (which reinforces that it's hard to actually get this right) [13:26:51.0745] > <@mhofman:matrix.org> FYI that `class LazyPromise` example doesn't work correctly when `then` is called multiple times (which reinforces that it's hard to actually get this right) yep, it was a quick example. didn't go through all the necessary boilerplate to make it complete [13:27:15.0970] deferred to a microtask after the await or then call, right ? [13:27:58.0614] > <@mhofman:matrix.org> deferred to a microtask after the await or then call, right ? Yes. When the reaction is added, the callback is deferred to the microtask at that point [13:29:54.0232] do you have some example use cases for this? [13:31:53.0089] > <@devsnek:matrix.org> do you have some example use cases for this? I need to work up a few more of the examples but the use case that prompted this for me was a work queue built around custom thenables that ended up having a fair amount of boilerplate to get right (for instance, Mathieu Hofman's comment about the LazyPromise example needing a lot more to be safe) [13:32:22.0374] > <@devsnek:matrix.org> do you have some example use cases for this?
(there is one example at the bottom of the readme, in case you didn't see it down there) [13:33:19.0422] i feel like this example is a meta example [13:33:38.0588] now i'm curious what you would put into this queue in practice [13:33:39.0875] would a generator help there? [13:33:51.0799] yeah that example is lacking. One of my key tasks before bringing this officially to the committee is to expand the use cases and examples [13:34:06.0146] > <@jasnell:matrix.org> I need to work up a few more of the examples but the use case that prompted this for me was a work queue built around custom thenables that ended up having a fair amount of boilerplate to get right (for instance, Mathieu Hofman's comment about the LazyPromise example needing a lot more to be safe) how about we just provide a built-in work queue instead? [13:34:34.0472] (I am already considering doing this btw) [13:35:01.0795] > <@aclaymore:matrix.org> would a generator help there? It was about 8 months or so ago that I first started thinking about this and at the time we did end up moving the code to a generator to improve things but even then it was still a fair amount of boilerplate. It definitely helped that case tho [13:35:23.0537] i look forward to seeing some. i don't think i have ever (in js or rust) needed to take advantage of executor laziness in a way that semantically matters. [13:35:40.0239] > <@devsnek:matrix.org> i look forward to seeing some. i don't think i have ever (in js or rust) needed to take advantage of executor laziness in a way that semantically matters. Fair [13:35:40.0875] only for performance optimization [13:36:22.0596] I'll have that (better documenting the use cases) at the top of the todo list then before moving this forward [13:37:57.0225] they feel similar to a memo'ed lazy 0-arg async function. Where the trigger is an explicit call. The lazy promise looks useful when the code needs to be general. Would be good to see an example where the code wouldn't have worked with a regular lazy function [13:38:25.0695] * they feel similar to a memo'ed lazy 0-arg async function. Where the trigger is an explicit call. The lazy promise looks useful when the code needs to be generic. Would be good to see an example where the code wouldn't have worked with a regular lazy function [13:38:49.0429] this sort of reminds me about the disagreement over iterator reuse, though maybe i'm over-generalizing. [13:40:58.0339] as long as we don't add a `.then` to async functions so that we have both Then-ables and Then-ors [13:46:17.0676] > <@aclaymore:matrix.org> as long as we don't add a `.then` to async functions so that we have both Then-ables and Then-ors oh no no no...
definitely not [13:47:06.0699] thenorators [13:47:10.0544] the motivation for this is not to add anything that cannot currently already be done with custom thenables, it's just to make it easier to implement that pattern more correctly and with less boilerplate [13:47:12.0852] * thenerators [13:48:32.0961] i think you can trick v8 into emitting negative line numbers [13:48:36.0359] I'm trying to work out if it composes with https://github.com/tc39/proposal-concurrency-control, but it's been a long day [13:49:19.0213] ljharb: https://gc.gy/5787c068-7972-4c5b-bc38-dc39b7704308.png [14:01:46.0432] > <@aclaymore:matrix.org> I'm trying to work out if it composes with https://github.com/tc39/proposal-concurrency-control, but it's been a long day I have a secret prototype task queue that I will share some day [14:02:42.0151] > <@aclaymore:matrix.org> I'm trying to work out if it composes with https://github.com/tc39/proposal-concurrency-control, but it's been a long day I think so? Something like ``` let governor = new CountingGovernor(5); let tasks = data.map(x => Promise.resolve(x).lazyThen(async () => { using _ = await governor.acquire(); return await doWork(x); })); ``` this gives you: tasks are not started until first polled, and no matter how many are being polled no more than 5 can be doing work at a time [14:02:59.0535] but a real task queue would still be better I expect [14:04:03.0738] > <@aclaymore:matrix.org> I'm trying to work out if it composes with https://github.com/tc39/proposal-concurrency-control, but it's been a long day * I think so? Something like ``` let governor = new CountingGovernor(5); let tasks = data.map( x => Promise.resolve(x) .lazyThen(async () => { using _ = await governor.acquire(); return await doWork(x); }) ); ``` this gives you: tasks are not started until first polled, and no matter how many are being polled no more than 5 can be doing work at a time 2024-12-04 [02:10:00.0287] I ran into a question I thought people here might know the answer to immediately: IEEE 754 can exactly represent integers in the inclusive range [-2^53, 2^53] yet JavaScript's safe integer min and max define the range as exclusive at both ends; why is that? [02:19:47.0662] Because 2^53 and 2^53+1 are the same number, so when you have a 2^53 you don't know if you actually meant 2^53. It's not enough to be able to represent a number exactly, you want that representation to also not represent other numbers [06:59:04.0835] Ah, makes sense. Thank you. 2024-12-07 [12:46:09.0240] Does anyone here know how to propose changes to the table at ? It's missing Ladybird's LibWasm, and that's kind of sad because we're passing 100% of the tests from WebAssembly/testsuite [12:48:59.0890] NVM found it at WebAssembly/website on GitHub 😅 2024-12-09 [18:07:17.0022] as I'm going through the State of JS data, I see people mention Type Annotations, Type as Comments, and then also just "native types" [18:08:06.0448] I want to add links to help people understand the nuances between all these, but I wanted to double-check here first [18:08:14.0621] - Type Annotations: https://github.com/tc39/proposal-type-annotations [18:09:36.0974] now what's confusing to me is that "Type as Comments" seems to refer to the same proposal; even though I would've assumed based on the name something more like JSDoc with actual comments [18:10:10.0597] so are there competing proposals? or just that one? [18:23:06.0191] Types as Comments was renamed to Type Annotations. They mean the same thing.
Both mean static types that do not affect the runtime execution. "Runtime types" or "native types" are not found in any proposal. Anyone requesting that probably wants runtime type checking which is not something being considered. [18:25:52.0393] Typescript already supports a form of type-checking type annotations authored in JSDoc format inside JS comments. Some people may also wish for an evolution of that to use Terser syntax within JS comments. That is purely a TS consideration/feature. JS does not need to change - it already supports comments. [18:38:02.0483] thanks, that makes sense! [20:28:46.0409] Sacha Greif: https://github.com/Devographics/surveys/issues/62#issuecomment-1297815074 You had the same issue before. [20:29:09.0309] Also there's an ECMAScript FAQ somewhere that covers this. Forgot where it is. [22:13:05.0662] oh you're right! although in the message above I was referring to the terms other people used in freeform answers, not the terms I use in the survey itself [22:14:15.0301] this is what the wording in the survey itself looks like [22:14:47.0988] now I see that the following question is confusing though [22:15:18.0775] I should not have mentioned "comments" [22:18:50.0537] I made a note to review this for next time: https://github.com/Devographics/surveys/issues/255#issuecomment-2527034916 [22:23:26.0210] anyway sorry about the poor wording, maybe I can add a note in the results about not misinterpreting the data [22:46:08.0933] Unless you look at the Extractors proposal and squint a bit. https://github.com/tc39/proposal-extractors/issues/20 [03:52:17.0055] Given the champion's pushback on the idealness of using that feature for that use-case in that thread, I think for Sasha's purpose (mass comms to the wider ecosystem) it should not be highlighted. (However I appreciate that whenever we provide hooks to run arbitrary code, all sorts of use-cases are possible) 2024-12-11 [21:16:25.0629] Sacha Greif: i clicked on the survey link, and it says "This survey closed on December 13, 2024.". but that's 3 days from now. is the survey still open? [10:29:25.0837] have to account for timezones. It's already Dec 13th in NeoTokyo 2024-12-12 [02:34:24.0647] yeah the time-space continuum has been a bit hectic ever since that giant baby exploded… [02:35:21.0562] but also I decided to close the survey early to publish the results as soon as possible; I didn't want to get too close to the holidays [02:35:31.0534] in fact a preview is available: https://state-of-js-2024.onrender.com/en-US/ [09:03:51.0435] ah ok, that's too bad [09:05:41.0334] I'm very surprised that 10% of people in that survey are already using Temporal... A proposal not shipped anywhere yet and for which there are no up-to-date polyfills 😅 [09:09:02.0745] * I'm very surprised that 10% of people in that survey are already using Temporal... A proposal not shipped anywhere yet 😅 [09:10:14.0563] lol yeah that seems very very suspicious [09:10:39.0601] like if it's actually 10% of the survey respondents then i think that's strong evidence that the respondents aren't representative of the ecosystem :-/ [10:14:50.0480] I use it in chats all the time to show how much better things will be when we have it [10:26:00.0836] isn't there a polyfill [10:32:06.0633] and it is being used https://www.npmjs.com/package/temporal-polyfill?activeTab=dependents [10:32:32.0357] i'd suspect that's what the respondents meant then? [10:43:41.0908] I share ljharb's suspicion of the results [10:43:49.0450] they are still interesting! 
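(For readers wondering what the polyfill-based Temporal usage above looks like in practice, a small illustration via the `temporal-polyfill` package mentioned in the thread; the dates are made up and the import is the polyfill's, not a built-in.)

```javascript
// Temporal via the userland polyfill; once Temporal ships natively, only the
// import line would go away.
import { Temporal } from "temporal-polyfill";

const release = Temporal.PlainDate.from("2024-12-12");
const reviewDeadline = release.add({ weeks: 2 });
console.log(reviewDeadline.toString()); // "2024-12-26"

const now = Temporal.Now.zonedDateTimeISO("Europe/Paris");
console.log(now.toString()); // e.g. "2024-12-12T18:04:05.123+01:00[Europe/Paris]"
```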
[10:45:31.0353] 0% uses TLA 💀 [10:45:39.0582] why did we build it dawg [10:54:38.0514] > <@shuyuguo:matrix.org> 0% uses TLA 💀 I know that at least 3 of those 13 people are in this room — react to this message to see if all of the 13 that voted for it are here! [13:30:48.0416] I am not ashamed [14:09:50.0982] how do you feel being part of the 100th percentile [14:36:00.0188] statistically insignificant 2024-12-13 [16:17:13.0019] 88K weekly downloads is usage, but it's nowhere near even 0.1% of the JS ecosystem [16:17:50.0821] i mean, i use TLA in application entrypoints [16:18:05.0313] but only there, and only rarely [17:19:28.0923] > <@ljharb:matrix.org> i mean, i use TLA in application entrypoints 👆 this is the use case [18:46:19.0417] i still think we should somehow restrict TLA to entrypoints [01:26:50.0387] > <@ljharb:matrix.org> i still think we should somehow restrict TLA to entrypoints We discussed that alternative and decided against it. For Bloomberg (and lots of AMD usage), TLA in non-entrypoints is really useful in certain cases, to avoid races against modules not yet being initialized. [01:27:39.0867] (But it’s not like we invented TLA—it was already there in the ecosystem for this reason) [12:31:54.0018] > <@nicolo-ribaudo:matrix.org> I'm very surprised that 10% of people in that survey are already using Temporal... A proposal not shipped anywhere yet 😅 I'm using it in probably 10 projects. I know we're not supposed to use it in production, but it's so much nicer. I even have a Temporal rrule library I made that's 80% working. (Turns out I didn't need it, so I haven't worked on it more). I should probably open source that. 2024-12-17 [17:09:59.0872] officially live. thanks for all the feedback and help! https://2024.stateofjs.com/en-US 2024-12-19 [10:04:09.0476] https://es.discourse.group/ seems down. Or just me? [14:57:01.0752] it was but seems to be back 2024-12-20 [08:16:35.0500] can we get a TypedArray.prototype.equals [09:12:44.0148] do you ever need that outside of tests? [09:12:48.0697] I have never needed that outside of tests [09:44:42.0886] what's it matter? people writing tests are valid users of the language [10:22:09.0792] I am generally opposed to adding features to browsers which will not be used in browsers [10:22:32.0542] I am happy to have it in WinterCG or whatever [10:29:17.0800] does WinterCG add prototype methods to existing built-ins? [10:36:20.0151] no reason they couldn't but also they could just add `assert.equals` and define its behavior for TAs [10:36:33.0114] or whatever [13:47:37.0867] we could certainly add a feature as normative optional, and HTML could make it prohibited in browsers [14:47:30.0792] > <@bakkot:matrix.org> I have never needed that outside of tests Yes, I wrote one a few months ago that’s used in prod. We’re doing basic crypto signature validations [15:41:09.0899] Or there’s another usecase for comparing values tired in History State, to detect the changing UI values. [15:41:29.0887] So used it twice in prod in the last 3 months. [15:41:43.0631] * Or there’s another usecase for comparing values stored in History State, to detect the changing UI values. [15:51:56.0438] (Android Element client is dropping content. Jordan's message is missing. I can observe it only if I click into it to reply.) [15:52:37.0946] (And I swear I have not muted anyone) [15:53:59.0267] Yeah I can confirm that 2024-12-21 [16:32:56.0786] weird. 
text is > we could certainly add a feature as normative optional, and HTML could make it prohibited in browsers [16:49:44.0927] Yep. When I long press to reply, I see that text. [16:51:34.0679] I am super curious to discover what programmatic condition causes this erasure. [16:53:09.0159] Please could someone else with an iPhone reply to the original message with the same text as Jordan. [09:12:36.0957] it would be very handy in this webtransport implementation i wrote. but fyi on "will not be used in browsers", browsers already have indexeddb.cmp which can compare typedarrays so it's not new functionality for them. [09:13:26.0399] my problem is actually that i'm not in a browser, or i'd just use that [09:15:00.0820] it's true that browsers already contain a lot of garbage, indexeddb included, but I don't want that to be a reason to add more [09:15:58.0605] ("I have an actual use case in the browser" is a perfectly good reason, to be clear; I'm not saying I'm opposed to this, just that I want it to have at least some compelling motivating use cases in browsers specifically) [09:19:09.0977] huh does using not work with destructuring :( [09:20:05.0581] i don't know that the use cases are browser or non-browser specific. do you mean a use case specifically tailored to a browser or just something that isn't explicitly not browser? [09:20:27.0219] not explicitly not browser [09:20:39.0697] specifically I think of tests as being specifically not-browser [09:20:54.0163] * specifically I think of tests as being specifically not-browser (for example) [09:20:55.0083] or at least not-production-browser [09:21:05.0154] * specifically I think of tests as being not-browser (for example) [09:21:50.0480] there are a lot of results for indexedDB.cmp on github [09:22:15.0168] though a good portion of them do actually involve real indexeddb usage 2024-12-23 [08:50:01.0256] ah webtransport [08:50:07.0484] someday you will be standardized [08:50:18.0089] for now i use webrtc datachannels which are a paiiin to set up 2024-12-30 [07:19:27.0539] I've been interested in doing more struct-of-arrays type of JS coding recently, and there, a TypedArray.equals check might be useful at times... Though I don't have a direct use case at hand. [09:17:15.0787] The question of standardizing deep equality begins with the question, “which deep equality?”, since it is not a problem that generalizes well. I think there’s an opportunity to introduce a protocol that looks like: `[Symbol.equals](other, childEquals=Object.is, seen=new Map)` but if I were holding my breath, I would have died ten years ago. [09:20:34.0786] This is the direction I would go because it doesn’t require the standardization of one deep equality with a name like `Object.equals` or `Reflect.equals`, but does allow for any kind of transitive equality, including shape validation, including graphs with cycles. [09:22:43.0657] And progress on such an effort could begin with a ponyfill. I am fond of how Python’s operators like `len` close over all the behavior of intrinsics and defer to a “protocol” only when the type passes no known “brand check”. That would allow the language to evolve under the ponyfill. And it’s not a big ponyfill. [10:55:03.0010] even equality of TAs is tricky to define (at least for the float ones) because of NaN [10:56:27.0970] equality of ABs is at least straightforward, and there's a reasonable performance case for doing it natively (i.e.
it can be implemented as memcmp) [10:58:44.0451] I have been kind of kicking around the idea of adding more built-in comparators, so could do `arrayOfNumerics.sort(Number.compare)` or `.sort(Compare.numeric)` or whatever, and an AB comparator could live there I guess [10:59:29.0586] (https://es.discourse.group/t/proposal-idea-descending-sort/889/7) [11:02:44.0267] Clearly, `.sort(Number.negate.compose(Number.compare))` #tdz [12:07:35.0316] > <@bakkot:matrix.org> even equality of TAs is tricky to define (at least for the float ones) because of NaN even w/o NaN, float equality is basically useless in practice except when some tolerance is introduced, and that tolerance tends to vary a lot from use case to use case
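(To make the equality question concrete: a userland version of the helper being discussed might look like the sketch below. The function name and the pluggable element comparison are illustrative, and the NaN/-0 behaviour is exactly the choice debated above.)

```javascript
// Userland sketch of a TypedArray equality check. The elementEquals default
// matters: Object.is treats NaN as equal to NaN but distinguishes +0 from -0,
// while === does the opposite; neither is obviously "the" right equality.
function typedArrayEquals(a, b, elementEquals = Object.is) {
  if (Object.getPrototypeOf(a) !== Object.getPrototypeOf(b)) return false; // same TA type
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (!elementEquals(a[i], b[i])) return false;
  }
  return true;
}

typedArrayEquals(Uint8Array.of(1, 2), Uint8Array.of(1, 2)); // true
typedArrayEquals(Float64Array.of(NaN), Float64Array.of(NaN)); // true with Object.is
typedArrayEquals(Float64Array.of(NaN), Float64Array.of(NaN), (x, y) => x === y); // false
```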