18 Comments

Philosopher here! Can I train with every known work of two philosophers and then simulate a conversation between the two, including on topics where they had significant but very subtle philosophical disagreement, and where both use technical terms in slightly different ways? How does one train humor?

Great question. I think you could.

One approach would be to fine-tune one model on one philosopher, then fine-tune a separate model on the other, then use the API (some programming required, but it could be done with no-code tools) to call the correct model for each philosopher. So take the output from Philosopher Model A and pass it to Philosopher Model B. Then take that output and pass it back.
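The ping-pong between the two fine-tuned models can be sketched as a simple loop. This is only an illustration: the `converse` helper and the stand-in `kant`/`hume` functions are my own placeholders, not any particular API; in practice each callable would wrap an API call to its fine-tuned model.

```python
# Sketch of the back-and-forth loop described above. The model callables
# are plain stand-ins so the loop runs without any API access.

def converse(model_a, model_b, opening, turns=4):
    """Alternate a message between two model callables, logging each turn."""
    transcript = [("A", opening)]
    message = opening
    for i in range(turns):
        speaker, model = ("B", model_b) if i % 2 == 0 else ("A", model_a)
        message = model(message)
        transcript.append((speaker, message))
    return transcript

def kant(msg):
    # Placeholder for a call to the model fine-tuned on philosopher A.
    return f"Kant-model reply to: {msg[:40]}"

def hume(msg):
    # Placeholder for a call to the model fine-tuned on philosopher B.
    return f"Hume-model reply to: {msg[:40]}"

dialogue = converse(kant, hume, "Is causation observable?")
for speaker, text in dialogue:
    print(f"{speaker}: {text}")
```

The same loop works regardless of how each callable is implemented, which makes it easy to swap in real API calls later.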

As for how to train humor, I don't know. :) Some people find LLMs funny when they ask them for jokes, etc., but I haven't found the spark of humor in their output yet.

I think humor may be the ultimate Turing test for LLMs. It's difficult to get even humans to agree on it, or to pick up on it when others are employing it. Broad, dry, self-deprecating, sarcastic, endless contextual problems... etc., etc.

When an AI can write a standup routine that gets real laughs, I will be impressed.

If someone held a gun to my head, I could do stand-up in 1950s Poconos style, à la Shecky with his violin. But there are so many variables you have to consider, mostly a shrewd analysis of your likely audience: age, ethnicity, religion, home towns, politics... it's just mind-boggling. There are areas that are like landmines you dare not step on, lest you get that dreaded dead silence and dark figures leaving the back of the room. And there are reliable throwaway lines that take the audience into familiar territory and always get a laugh. It relaxes them and puts them more in the mood to enjoy themselves. Lead off with that kind of stuff. But when you consider the bewildering variety of equally successful approaches, from Rickles to Pryor, it's pretty clear AI is going to have some rough sledding with mastery of timing, facial expressions, word stresses, eye direction, body language, prop usage, etc.

Hi. I think many people have personal journals and would like to create AI versions of themselves. It can be therapeutic to talk to a past version of yourself built on diary entries from 15 years ago.

Could fine-tuning allow for this? An old diary doesn't seem to lend itself to prompt-answer format.

Great question! Using a journal to train an AI on past versions of yourself would be amazing.

Fine-tuning out of the box doesn't allow this; the data needs to be in prompt-answer format.

However, there are ways around this.

Option 1: Use AI to create prompt-answer pairs from journal entries, then fine-tune.

Use a prompt like this:

- I am trying to fine tune a model based on my old journal entries.

- I will give you a journal entry, and I want you to make prompt-answer couplets from it.

- Journal entry:

"May 22nd 2012 - I had the weirdest dreams last night. In one, I was playing rugby with the BYU team and Landon Donovan from the US National Soccer team. I scored after going beast mode..."

Take those results, clean them up, make them into a file, and use it to fine-tune the model.
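Turning the cleaned-up couplets into a file could look like the sketch below, which writes the JSONL (one JSON object per line) format OpenAI's fine-tuning endpoint has historically expected. The couplets and the filename are invented examples, not output from any real run.

```python
import json

# Hypothetical prompt-answer couplets produced by step 1 from the
# journal entry above.
couplets = [
    {"prompt": "What did you dream about on May 22nd 2012?",
     "completion": " I had the weirdest dreams. In one I was playing rugby."},
    {"prompt": "Who appeared in your rugby dream?",
     "completion": " The BYU team and Landon Donovan from the US soccer team."},
]

# Write one JSON object per line (JSONL), the usual fine-tuning format.
with open("journal_finetune.jsonl", "w") as f:
    for pair in couplets:
        f.write(json.dumps(pair) + "\n")
```

The resulting file is what you would then upload for fine-tuning.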

Option 2: Use an embedding database.

Dan Shipper wrote about that as well.

https://every.to/chain-of-thought/can-gpt-3-explain-my-past-and-tell-me-my-future

I don't know how well it would work, as it would be like asking a smart assistant who doesn't know you at all to look through hundreds of pages of your journal, arranged on a wall by topic, pick the most relevant ones, and then use those to answer the question. But Dan had good results, and embeddings do work well, so that is another option.

People I've heard talk about training LLMs often add a little warning about releasing personal data, I think because, at the end of the day, the chatbot model is stored on someone else's server along with everything you gave it. There are two kinds of personal, though: personal in the sense of being about you, and personal in the sense of finances, passwords, secret stuff, whatever. What someone is willing to share of the first kind is really free will and subjective; as long as you don't mind, it shouldn't be a problem. It's just good to be aware of it if you hadn't thought about it.

How reliable is the embedding method? I would be concerned that my customer service bot would give wrong answers to customers

It’s a great question. Embeddings are very reliable in the sense that the similarity score is just a number showing how close two items are. BUT whether the retrieved item is the correct answer is much less reliable: just because ‘printer’ and ‘printer reset’ are closely related doesn’t mean that is the correct answer.

That being said, others have used them well. I would consider running an experiment: test the common questions your users might have and check whether the system gives a good result.
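That experiment can be as simple as a table of common questions paired with the answer a human would expect, scored against whatever retrieval step you use. Everything below is hypothetical: the `retrieve` stub stands in for your embedding lookup, and the tiny knowledge base is invented.

```python
# Hypothetical sketch of the experiment: score common customer questions
# against the article a human would expect the bot to surface.

def retrieve(question):
    # Stand-in for the real embedding lookup; here, a naive keyword match.
    kb = {"printer": "printer reset guide", "refund": "refund policy"}
    for keyword, article in kb.items():
        if keyword in question.lower():
            return article
    return "fallback: contact support"

test_cases = [
    ("How do I reset my printer?", "printer reset guide"),
    ("Can I get a refund?", "refund policy"),
]

passed = sum(retrieve(q) == expected for q, expected in test_cases)
print(f"{passed}/{len(test_cases)} common questions answered correctly")
```

Running a table like this before launch gives you a concrete accuracy number instead of a gut feeling.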

I see, yeah, some thorough testing before releasing it to the public probably makes sense.

Let me know if you try it!

Hey Josh, love the article ✨

I’ve recently built an AI writing assistant app for macOS called Writers Brew AI.

Why do people care about it?

1. Unlike other products, Writers Brew works across all apps & browsers.

2. It can write, improve, reply, summarize, explain & translate.

3. It can turn any (boring) text editor into an AI-powered text editor. Pick your text editor, and Writers Brew can change it into an AI playground.

4. It has OCR-to-AI text generation. Take a snapshot, and Writers Brew takes care of extracting the text from the image and transforming it into AI-generated text.

5. Finally, unlike a subscription, this product requires only a one-time fee. (Users have to use their own license key.)

I hope you get a chance to check it out 👉 https://writersbrew.app :)

Sounds cool! Did you fine-tune a model, or are you just using vanilla GPT-3.5?

A mix of vanilla and prompt tuning.

Could embeddings + GPT-3 be used for full-text search?

Yes. It can be better than normal search in some situations, since it matches words that are similar but not exact.
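A toy illustration of that point (the two-dimensional "embeddings" here are invented; a real system would use vectors from an embedding model): a query about a "bill" finds nothing by exact keyword, but its vector still lands nearest the invoice document.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy document "embeddings" for illustration only.
docs = {
    "How to pay an invoice": [0.9, 0.1],
    "Resetting your password": [0.1, 0.9],
}
query_vec = [0.85, 0.2]  # pretend embedding of "where is my bill"

# Keyword search finds nothing: no document contains the word "bill".
keyword_hits = [d for d in docs if "bill" in d.lower()]

# Embedding search still surfaces the right document.
best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
print(keyword_hits, "->", best)
```

That gap between exact matching and similarity matching is exactly where semantic search can beat a traditional index.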

It might replace my Algolia instance.
