Felix
@fxwin.bsky.social
mostly correct, glad to be proven wrong
At this point one should just completely avoid block lists for this exact reason tbh
December 19, 2024 at 8:16 PM
On the other hand, a problem like that is very hard to even approach, whereas a simple LLM-based solution can literally be implemented with a single prompt and (with some tweaking) produce somewhat solid results.
December 17, 2024 at 9:40 AM
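(For illustration, a minimal sketch of what such a single-prompt setup could look like. This assumes the OpenAI Python client; the model name, rubric, and labels are made up for the example, not anything from the thread.)

```python
# Minimal sketch of a single-prompt moderation check.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; model name and rubric are made up.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You are a content moderator for a social media platform. "
    "Given a post, reply with exactly one word: ALLOW, FLAG, or REMOVE, "
    "depending on whether the post contains harassment, hate speech, or spam."
)

def moderate(post: str) -> str:
    """Classify a single post using one prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("example post text"))
```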
platform's terms of service or community guidelines: there isn't really a good way to control how strongly (or weakly) different offenses should be weighted besides just telling the LLM "please do this, please do that, ...", which is rather unsatisfying.
December 17, 2024 at 9:38 AM
This is very much in line with how I feel working with them too. In "classical", i.e. task-specific, NLP it is fairly easy to influence behavior in somewhat nuanced ways by adjusting e.g. class weights, but say we use an LLM to evaluate some text against a social media
December 17, 2024 at 9:38 AM
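(A concrete example of the kind of knob "classical" NLP gives you: a sketch using scikit-learn's class_weight parameter to penalize missed violations more heavily than false alarms. The toy posts, labels, and weights are made up.)

```python
# Sketch: steering a task-specific classifier via class weights.
# Uses scikit-learn; the toy posts, labels, and weights are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "you make a good point",
    "buy cheap pills now!!!",
    "I disagree with this take",
    "click here for free money",
]
labels = [0, 1, 0, 1]  # 0 = fine, 1 = violation

# Weight violations 5x harder than benign posts, i.e. the model pays a
# larger penalty for missing a violation than for a false alarm.
model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight={0: 1.0, 1: 5.0}),
)
model.fit(posts, labels)

print(model.predict(["limited offer, click now"]))
```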
'translation is left as an exercise to the reader'
December 12, 2024 at 8:08 AM
Not sure what the point of this response is since it doesn't apply to me, but you're right. I should just call everyone dim-witted and close-minded for not agreeing with me so nobody takes me for a (god forbid) "AI fanboy".
December 5, 2024 at 11:06 AM
I don't see why one couldn't extract enough abstract structure from text to arrive there (at the very least this is a complicated question, and the engagement from both the OP and the replies I've gotten is quite unsatisfying to me).
December 5, 2024 at 11:00 AM
That's fair, but the original claim I was interested in was that "turbocharged predictive text becoming sentient is a notion only possible in science fiction", and while I agree that current models wouldn't qualify as sentient,
December 5, 2024 at 11:00 AM
Why? What do you think is a necessary component that a sentient AI must have that LLMs (of this generation or the next) lack?
December 5, 2024 at 8:13 AM
I don't disagree, but note that the things you're mentioning (limited size, no persistent memory) are at most technical trivialities, not fundamental, insurmountable barriers to consciousness, which is what I was looking for (and haven't seen concrete evidence of, beyond snark, in my replies)
December 5, 2024 at 8:11 AM
which is why I was asking for reasons to believe one side over the other. So far I've received only snark and "Heh, do you really think parrots can be HUMAN?"-tier responses, so maybe that was a mistake.
December 5, 2024 at 8:08 AM
I don't know whether this is possible via extrapolation from language alone (though I can see how one could build rather complex internal models from language alone), but if it is possible, then yes. I don't really have strong feelings either way at the moment,
December 5, 2024 at 8:08 AM
There's a lot loaded into that question, but I don't see a reason why a sufficiently complex "thing" (e.g., but not necessarily, one simulating a human mind) can't attain its own "subjective experience".
December 5, 2024 at 8:06 AM
Do you think any technological/artificial structure can be sentient?
December 4, 2024 at 9:09 PM
That seems like a rather complex question about what it means to be sentient, and not something that is currently at all obvious
December 4, 2024 at 8:44 PM
Nothing, which is why I don't show a lot of conviction for either side
December 4, 2024 at 8:43 PM
Backed up by the lack of evidence and the emotionally charged language in the OP, but you may call it that
December 4, 2024 at 8:42 PM
The confident assertion without evidence, which also seemed more spite-driven than data-driven, yes
December 4, 2024 at 8:39 PM
Nothing, which is why I don't show a lot of conviction for either side
December 4, 2024 at 8:32 PM
That sounds like a lot of conviction. Anything to back up the claim that it's "only possible in science fiction", or is it just a gut feeling?
December 4, 2024 at 6:17 PM
While I'm not convinced that datasets like this inherently violate privacy laws or the Bluesky/Hugging Face ToS, it seems fairly indisputable to me that the developer guidelines should apply to them, and an anonymous user hosting this data on Hugging Face hardly seems adequate in that respect.
November 27, 2024 at 9:31 PM
The way I would interpret the "users own their posts" part of the ToS is that it is not Bsky's responsibility to ensure that data sourced from their API is used in accordance with privacy laws. That does not mean anyone can use said data however they like.
November 27, 2024 at 9:29 PM
That's more of a pragmatic argument than a legal one, and we are still at a point where precedent has yet to be set wrt third-party websites hosting user content without consent (unlike the Disney situation).
November 27, 2024 at 9:29 PM
How is providing the dataset on a third-party website not an app or service in this sense?
Also, you do own your posts, per Bsky's ToS.
November 27, 2024 at 9:18 PM
Nobody is upset about "people seeing public posts on a public website"; they are upset about the systematic gathering of public content from one website for a third-party website they didn't sign up for, and for purposes they didn't consent to (via ToS or other means)
November 27, 2024 at 8:13 PM