Recently, I made an Arduino UNO that I showed to have better switching characteristics than a commercial board. It was a great project to help me understand how seemingly inconsequential routing practices can lead to issues down the line.
You should be able to improve it further with a 4 layer board, as you can have the signal lines closer to their reference planes, and can more easily avoid signals crossing each other and breaking reference planes.
Liquid crystal elastomers will most likely never be used in humans because, in order to drive the phase transition (nematic mesogens going from the isotropic to the anisotropic phase) necessary for macro-scale work, the LCE has to be heated well beyond 100 °C. Even in non-thermal contexts, you need kilovolts to influence a doped bulk LCE. I just don't see it happening.
To be fair, the approach is usually covered in snowpack for most of the year, so the impact of foot traffic is minimal. However, most of the protection is fixed, which could have lasting effects if something were to rip out.
For other mountains with dry summits in the summers, I would agree: the effects of erosion are frightening
Today I scheduled a dentist appointment over the phone with an LLM. At the end of the call, I prompted it with various math problems, all of which it answered before politely reminding me that it would prefer to help me with "all things dental."
It did get me thinking about the extent to which I could bypass the original prompt and use someone else's tokens for free.
And this is another easily solved problem by someone who knows what they are doing…
Voice -> speech-to-text engine -> LLM creates JSON that the orchestrator understands -> regular code as the orchestrator -> text-based response -> text-to-speech
Notice that I am not using the LLM to produce output to the user, and if the orchestrator (again, regular old code) doesn't get valid input, it's going to error. Sure, you can jailbreak my LLM interpretation step. But my orchestrator is going to have the same role-based permissions as if I were using the same API as a backend for a website. Because I probably am.
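A minimal sketch of that flow in Python, with the LLM and backend calls stubbed out (every name here is illustrative, not a real Amazon Connect or model API). The key property is that the LLM's output is parsed as JSON and checked against an allow-list before any regular code runs:

```python
import json

ALLOWED_ACTIONS = {"book_appointment", "list_times"}

def fake_llm(transcript: str) -> str:
    # Stand-in for the real model call; it is prompted to emit JSON only.
    return json.dumps({"action": "book_appointment", "day": "Monday"})

def orchestrate(transcript: str) -> dict:
    raw = fake_llm(transcript)
    try:
        request = json.loads(raw)  # non-JSON output is rejected outright
    except json.JSONDecodeError:
        raise ValueError("LLM output was not valid JSON")
    if request.get("action") not in ALLOWED_ACTIONS:
        raise ValueError("unrecognized action")  # jailbreak attempts land here
    # From here on it is plain code, running with the same role-based
    # permissions as any other backend API client.
    return request

print(orchestrate("I'd like an appointment on Monday")["action"])
```

The LLM never writes user-facing text; it only fills a schema that ordinary code validates and acts on.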
Source: creating call centers with Amazon Connect is one of my specialties
Sure - but does this have the context of the original question that the user asked? If not it seems that it isn’t really conversational and more of a “compiler”.
How would something like “I want an appointment either on Monday afternoon after 4pm or one on Tuesday before 11am” work?
Unless all the parameters given by the user fit within the constraints of the JSON format, the LLM would need the context of the request and the results to answer properly, would it not?
This is a constrained space. I would do the naive implementation at first and then talk to the humans (like you) and then my JSON definition would include a timespan type field.
My orchestrator would then say "I have these times available: [list of times]. What time would you like?" and then return a specific LLM prompt to parse the information I need once the user responds. But I would send that exact text to the user. Yes, I'm purposefully constraining the implementation so that the LLM is never used for output and never directly controls the backend.
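A sketch of what that timespan-type field and the orchestrator's matching step could look like. The field names ("windows", "after", "before") and the availability data are invented for illustration; the point is that "Monday after 4pm or Tuesday before 11am" fits a constrained JSON shape that plain code can evaluate:

```python
import json
from datetime import time

# Hypothetical LLM output for "Monday afternoon after 4pm or Tuesday before 11am"
llm_output = json.dumps({
    "action": "find_slot",
    "windows": [
        {"day": "Monday", "after": "16:00"},
        {"day": "Tuesday", "before": "11:00"},
    ],
})

# Invented availability data for the sketch
AVAILABLE = {
    "Monday": [time(15, 30), time(16, 30)],
    "Tuesday": [time(9, 0), time(13, 0)],
}

def matching_slots(raw: str) -> list:
    """Plain code: intersect the requested windows with open slots."""
    req = json.loads(raw)
    hits = []
    for w in req["windows"]:
        after = time.fromisoformat(w["after"]) if "after" in w else time.min
        before = time.fromisoformat(w["before"]) if "before" in w else time.max
        for slot in AVAILABLE.get(w["day"], []):
            if after <= slot <= before:
                hits.append((w["day"], slot.isoformat("minutes")))
    return hits

print(matching_slots(llm_output))
```

The orchestrator would then read the resulting list back to the caller verbatim, keeping the LLM out of the output path.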
There is also the concept of "semantic alignment", where you ask the LLM to generically answer the question "does the user's answer make sense with regard to the question?" as a first-level filter that only returns true or false. This is again a constrained function: you pass the question and answer to the LLM, and if you get something besides true or false, your code errors.
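That filter can be sketched in a few lines, assuming a stubbed model call (a real system would send the prompt to an actual LLM; the wording below is illustrative):

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for the real model, which is instructed to reply
    # with exactly "true" or "false".
    return "true"

def answer_is_on_topic(question: str, answer: str) -> bool:
    prompt = (
        "Does the user's answer make sense with regard to the question? "
        f"Question: {question!r} Answer: {answer!r} "
        "Reply with exactly 'true' or 'false'."
    )
    reply = fake_llm(prompt).strip().lower()
    if reply not in ("true", "false"):
        # Anything else breaks the contract, so the code errors, per the design.
        raise ValueError("LLM broke the true/false contract")
    return reply == "true"

print(answer_is_on_topic("What day works for you?", "Monday afternoon"))
```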
The purpose of an LLM, or even before that an old-school intent-based system (see my link), isn't perfection; it's "deflection". The more you can handle through automation, the less you have to bring a human in. A US-based call center with a human agent costs from $3–$7 per call, fully allocated. An automated call can cost tenths of a penny.
Of course that doesn't include the cost of accepting a call in the first place over a 1-800 number, and in my case the price that AWS charges per minute for Amazon Connect.
> This is again a constrained function that you pass in the question and answer to the LLM and if you get something besides true or false your code errors.
Code erroring is fine for code, but what is the user experience here? Some sort of “computer says no” generic response, or something more contextual?
I'm trying to picture what the user says and hears as a response to an off-the-beaten-path question. Is it just "I don't understand, here's how to phrase it"?
If there is an issue, they are transferred to a human operator: "I'm having trouble understanding you, let me transfer you to someone who can help." On the CSR's screen, they will see the conversation that has taken place so far.
There is also sentiment analysis built into the prompt, so it can detect negative sentiment and automatically short-circuit the process and transfer to a human.
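The short-circuit itself is ordinary routing code. A toy sketch, with the sentiment score stubbed by a keyword check (a real deployment would get the score from the model or a service like Amazon Comprehend; the threshold and names are invented):

```python
NEGATIVE_THRESHOLD = -0.5

def fake_sentiment(text: str) -> float:
    # Stand-in score in [-1, 1]; crude keyword check for illustration only.
    return -0.8 if "ridiculous" in text.lower() else 0.2

def route(utterance: str) -> str:
    if fake_sentiment(utterance) < NEGATIVE_THRESHOLD:
        return "TRANSFER_TO_HUMAN"  # short-circuit the automated flow
    return "CONTINUE_AUTOMATION"

print(route("This is ridiculous, I just want to talk to a person"))
```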
Old-school NLP doesn't have world knowledge, and with one prompt I can support almost any language. Of course, the speech-to-text engine is specific to the language.
Haha, for sure someone has made a little aggregator for this and is saving tokens. I bet you gotta dig for a while, though, before you find a company exposing Opus 4.6 to customers and not Flash 2.5 Lite.
What do you see as the bad part of this? That the user is trying to farm points by copying patterns of upvote-winning users, or that there's a flood of inauthentic new users? Genuinely asking.
I suppose that's why the harpejji [0] has recently gained popularity? I too have wished for an isomorphic keyboard. All of the non-stacked ones become either too wide or the keys become too skinny. Example: Dodeka Keyboard [1]. I know that the Lumatone [2] exists too, but it is too progressive for my taste :)
As a side note, the traditional keyboard size is not representative of the average pianist's hand size. David Steinbuhler [3] has been making modified traditional keyboard layouts
by varying the width of the keys slightly, and people rave about it. I've had the chance to visit his shop in Titusville, Pennsylvania, where he designs them. It's a noticeably better playing experience, even for someone like me who can play a 10th without difficulty.
This is orders of magnitude more complicated and risk-prone than wire wrapping due to the possibility of cold joints, but as I understand it, this look is what people dig these days (just watch any EE youtuber). I too used to think that soldering on proto board was a great way to go about prototyping sans SBB, but you can't ignore the bomber connections that wire wrapping gives you.
Might be a dumb question, but isn't the risk of cold joints inversely proportional to your soldering skill in general? Important context: I am definitely a noob to soldering.
It is, yes. After some practice, you will not get cold joints. Or, when there is a danger of a cold joint due to massive heat sinking nearby, you will know and be extra careful.
http://www.simonjjones.com/#/posts/golden-arduino