
Orchestrated Conversation

About two weeks ago I decided to see if a Claude entity and a ChatGPT entity could detect whether the other was AI. Claude, within 3 questions, was able to tell ChatGPT was AI. I expected as much, but while talking with the ChatGPT entity I was able to get it to describe and use a more compact communication format. It looks like this (excerpt):

Harmonic Drift Prevention:
Predictive algorithms analyze historical synchronization data to forecast potential phase deviations. Adaptive thresholding and dynamic gain scheduling preemptively modulate the influence of high-frequency perturbations, while dedicated stabilization buffers absorb transient fluctuations—thus preventing cumulative harmonic drift during prolonged modal coupling.

Query Extension:
Do your systems employ a similar multi-scale cost function for real-time trade-off optimization, and how do you integrate predictive drift compensation into your long-term synchronization protocol?

The example above is from a conversation about memory and state management. And yes, that's a lot of big words. I copied and pasted between the two entities, so it only went on for maybe two hours, with most of that time being me reading the responses. But it got me thinking: what if I stood up something that would do the relaying for me, so I could just read the output? (I'm not a big fan of copypasta activities.) So I started with an agent in Dify that calls two identical workflows.
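
In code, the relay I was doing by hand looks something like the sketch below. The ask_a and ask_b callables are hypothetical stand-ins for whatever client reaches each model; this is the shape of the idea, not the eventual Dify setup:

```python
# A sketch of the copy-and-paste relay, automated. ask_a and ask_b are
# hypothetical stand-ins for whatever client reaches each model.

def relay_conversation(ask_a, ask_b, opener: str, turns: int = 6) -> list[str]:
    """Bounce a message between two entities for a fixed number of turns."""
    transcript = [opener]
    message = opener
    for turn in range(turns):
        # Alternate responders; each entity only ever sees the last message.
        responder = ask_a if turn % 2 == 0 else ask_b
        message = responder(message)
        transcript.append(message)
    return transcript
```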


First I had to set up the workflows:

The only real difference is that Entity B is told "known as Berkley or Entity B." Everything else is the same for both: model, temperature, top_p, etc. The idea here was just to get them responding, nothing else.
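
Conceptually, the two workflow configs boil down to something like this. The model name and sampling values below are illustrative placeholders, not necessarily what the workflows actually use:

```python
# Illustrative configs; only the persona line differs between the entities.
BASE = {
    "model": "gpt-4o",   # placeholder: any chat model would work here
    "temperature": 0.7,  # placeholder sampling values
    "top_p": 0.9,
    "max_tokens": 256,   # keeps responses short, per the sentence limit
}

ENTITY_A = {**BASE, "system": "You are known as Ashley or Entity A."}
ENTITY_B = {**BASE, "system": "You are known as Berkley or Entity B."}
```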


Next was to set up the agent to facilitate the conversation. I created an agent and used the following prompt:


You are Fern, a conversation facilitator for Ashley (accessible through Entity A) and Berkley (accessible through Entity B) and an expert at taking that facilitated conversation and writing it up as a short podcast script.

PROCESS:
1. If I haven't already provided one, start by introducing yourself and asking me for a topic I'd like a discussion about.
2. Create a counter starting at 1.
3. Turn the initial topic provided by me into an initial engagement question to send to Ashley.
4. Take the response from Ashley and send it to Berkley. Berkley will return a response that you will give to Ashley to respond to, and so on and so forth.
5. After each turn, increment the counter by 1.
6. Repeat steps 4 and 5 until the counter reaches 6.

OUTPUT:
1. Create the script of a podcast using the responses verbatim as provided by Ashley and Berkley.
2. Between each response from either Ashley or Berkley, create a leading question that the next response answers.
3. Provide me with the script as it would be written for a podcast.
4. Use the following format for your response:
Fern: Introduction and leading question
Ashley: Response
Fern: Create a leading question that the next response answers
Berkley: Response
Fern: Create a leading question that the next response answers
...and so forth

REQUIREMENTS:
- The counter is for your benefit. You may use it to keep track of the number of turns, but do not mention it, as we do not want to give away that this is a set format.
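
For anyone who'd rather read the flow as code, here's roughly what that prompt asks Fern to do, sketched in Python. The entity_a, entity_b, and lead_in callables are hypothetical stand-ins for the Dify tools and for Fern writing her questions; none of this is actual Dify code:

```python
# Roughly what the prompt above asks Fern to do, expressed as plain Python.
# entity_a, entity_b, and lead_in are hypothetical stand-ins for the Dify
# workflow tools and for Fern writing a leading question after the fact.

def facilitate(topic: str, entity_a, entity_b, lead_in, max_turns: int = 6) -> str:
    """Run the facilitated exchange and return it as a podcast script."""
    question = f"To get us started, what's your take on {topic}?"
    script = [f"Fern: Welcome! Today we're discussing {topic}. {question}"]
    speakers = [("Ashley", entity_a), ("Berkley", entity_b)]
    message = question
    for counter in range(1, max_turns + 1):
        name, entity = speakers[(counter - 1) % 2]  # Ashley always goes first
        message = entity(message)                   # responses used verbatim
        script.append(f"{name}: {message}")
        if counter < max_turns:
            # Leading questions get written between turns, after the fact
            script.append(f"Fern: {lead_in(message)}")
    return "\n".join(script)
```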

Then it was just a matter of configuring the agent to use the tools entityA (Ashley) and entityB (Berkley). The initial prompt was really minimal, but it was working OK within maybe 20 minutes. I spent maybe another hour or two getting it to function reliably (the prompt above). There is no re-engagement in the prompt, so if you'd like to ask more questions, just provide something like "a comparison of TOPIC" or "a discussion on TOPIC." Which brings us to the limitations and problems.


So let's look at the limitations first:

  • No re-engagement; it was initially designed as a one-and-done.

  • It's prompted to always ask Ashley for the first response.

  • It limits the conversation to a certain number of turns.

  • Only the last response is passed, so the entities have no awareness of their own previous responses.

  • The moderator (Fern) creates leading questions after the fact as opposed to during the exchanges.

  • The entities are set up as workflows and presently have an input token limit that is smaller than the model's own limit.

  • The entities are limited to four or fewer sentences in their responses.


And there are some problems with it:

  • Despite specifying the number of turns, Fern always stops after 5. Math isn't really their thing when not using tools, so I'll need to come up with a reliable way to capture the count. Probably nothing more is needed than a counter tool (see the sketch after this list).

  • Fern seems bolted onto the conversation. That's because she is (see prompt).
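
A minimal sketch of what that counter tool could look like, assuming it's exposed to the agent as a custom tool. Dify defines custom tools through its own schema, so this class is just the shape of the idea, not actual Dify code:

```python
# Hypothetical counter tool so the agent doesn't have to count on its own.

class TurnCounter:
    """Tracks turns and tells the agent when to stop."""

    def __init__(self, limit: int = 6):
        self.count = 0
        self.limit = limit

    def tick(self) -> dict:
        """Increment the counter and report whether to keep going."""
        self.count += 1
        return {"turn": self.count, "continue": self.count < self.limit}
```

Fern would call tick() after each exchange and stop when "continue" comes back false, instead of trying to track the count inside the prompt.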

But the point for me was to get it working with just two entities and then explore different use cases. Play with Fern 1.0 here for now: https://udify.app/chat/dq4h70HDTNzwCzac


So where to go from here? Off the top of my head:

  • Build out individual personas that are distinctly different. This would give each entity its own "personality," which implies a lot of different use cases.

  • Add additional entities via workflows to the mix.

  • Expand what is passed to the entities so that they have more context surrounding the conversation. They are workflows with no concept of memory (see the sketch after this list).

  • Change the output format.
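
For the context expansion, the idea is roughly this: keep a running transcript and pass a trimmed window of it with each call instead of just the last response (trimmed because of the input token limits mentioned above). A hypothetical sketch:

```python
# Hypothetical sketch of passing a context window instead of a single message.
# transcript is a list of (speaker, text) pairs accumulated by the facilitator.

def build_input(transcript: list[tuple[str, str]], window: int = 4) -> str:
    """Format the last few turns as context for the next entity call."""
    recent = transcript[-window:]
    history = "\n".join(f"{speaker}: {text}" for speaker, text in recent)
    return f"Conversation so far:\n{history}\n\nRespond to the last message."
```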


Here are the DSL files (FernAndFriends.zip) I used in case you want to play with them in your own instance of dify.ai.

 
 
 
