Learn Go concurrency from the ground up with 50 auto-tested exercises and tons of interactive examples.
It's a full course + book in one.
antonz.org/go-concurrency
But was it wise to use X-rays for entertainment, shopping, and beauty treatments? Not really.
AI skepticism (not luddism) is the healthiest attitude for now.
I was a bit confused because the author talks about finding transitive dependencies in the article (at the graph level, not in the source data), but then moves on to the final statistics without including them.
Well, it must be bcrypt then! Everyone loves bcrypt, right? :)
I'm definitely not a fan of the "package per vector size" approach 😅
Also, I don't think x/crypto and x/net should be here at all. Both are imported by stdlib itself, so they're basically included in every project.
But to me, this seems clearer and easier to maintain.
Then the user would create, for example, an Add function with two different implementations governed by build tags.
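Roughly like this (just a sketch of the build-tag split; the vecmath package name and the scalar loop bodies are placeholders, not the actual simd API):

// add_amd64.go
//go:build amd64

package vecmath

// Add sums a and b element-wise into dst.
// On amd64 this is where the architecture-specific SIMD code would go;
// the scalar loop is only a stand-in.
func Add(dst, a, b []float32) {
	for i := range dst {
		dst[i] = a[i] + b[i]
	}
}

// add_generic.go
//go:build !amd64

package vecmath

// Portable fallback for every other architecture.
func Add(dst, a, b []float32) {
	for i := range dst {
		dst[i] = a[i] + b[i]
	}
}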
I interpret it as "some archsimd types will be available on amd64 and some on arm64."
In my opinion, this is less clear than having separate packages with partly overlapping type sets.
But there's one thing I don't quite understand.
Why is the package called simd/archsimd? Since it's amd64-specific, shouldn't it be simd/amd64 or maybe simd/avx?
Since it's hard to create a portable high-level API, the Go team decided to start with a low-level, architecture-specific one and support only amd64 for now.
We can use such a type as a member in a generic container (like Tree[T Ordered[T]] — see the screenshot).
This makes Go's generics a bit more expressive.
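Something along these lines (a rough sketch of the idea, not the exact code from the screenshot):

// Self-referential constraint: a type is Ordered if it can
// compare itself to another value of the same type.
type Ordered[T Ordered[T]] interface {
	Less(T) bool
}

// A binary search tree whose elements must satisfy Ordered.
type Tree[T Ordered[T]] struct {
	value       T
	left, right *Tree[T]
}

// Insert adds v to the tree and returns the (possibly new) root.
func (t *Tree[T]) Insert(v T) *Tree[T] {
	if t == nil {
		return &Tree[T]{value: v}
	}
	if v.Less(t.value) {
		t.left = t.left.Insert(v)
	} else {
		t.right = t.right.Insert(v)
	}
	return t
}

// Any type with a Less method fits the constraint, e.g.:
type Int int

func (a Int) Less(b Int) bool { return a < b }

// var root *Tree[Int]
// root = root.Insert(2).Insert(1).Insert(3)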
Before Go 1.26, a definition like
type Ordered[T Ordered[T]] interface {
	Less(T) bool
}
resulted in a compile error: "invalid recursive type: Ordered refers to itself".
With Go 1.26, it compiles just fine.
↓
Previously, type constraints couldn't directly or indirectly refer to type parameters.
↓
It's like you've thought about all possible questions the reader might have and answered them in advance.
A pleasure to read!
But after reading your explanation, I don't really see how this property could be useful :)
Yes, I understand that both options allocate the same backing array.
I thought that by "perfectly sized" you meant creating a slice where len = cap = finalSize.
Why did you choose to use append-make instead of just make for the final slice?
Using append-make sets the slice capacity to the next size class, which seems to go against the idea of having a "perfectly-sized slice", doesn't it?
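To illustrate what I mean (a quick sketch; I'm assuming "append-make" ends up letting append allocate the backing array, which may not match the article's exact code):

package main

import "fmt"

func main() {
	src := make([]int, 100)

	// Plain make: len and cap are exactly the final size.
	// (The allocator still rounds the underlying block up to a size
	// class, it just isn't visible through cap.)
	a := make([]int, len(src))
	copy(a, src)

	// Letting append allocate: the capacity is rounded up to the
	// next size class, so cap usually ends up larger than len.
	b := append([]int(nil), src...)

	fmt.Println(len(a), cap(a)) // 100 100
	fmt.Println(len(b), cap(b)) // 100 and a cap >= 100 (exact value depends on the runtime)
}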