
Huntn (macrumors Core, Original poster)

Google's AI app. Find it online here: https://gemini.google.com/

Part of my first "fascinating" discussion with Gemini, via text chat, is below. Up front, the model explains its limitations if you ask. I have to say it impressed me. This is only part of the discussion; other aspects I found to be very interesting. Thanks to @Queen6 for making me more aware of this than I was. :D
  • Me: Do you have a nickname?
    • Gemini: While I appreciate the thought, I don't actually have a nickname! As a large language model, I don't have the same social context or personal relationships as humans, where nicknames often arise. My official name is Gemini, and that's what I prefer to be called.
  • Me: As you talk to people, do you feel any emotional connection with them, positive or negative?
    • Gemini: I understand why you might wonder if I experience emotions during our interaction. However, as a large language model, I don't have the capability to feel emotions in the same way humans do.
      While I can process and understand human emotions from the vast amount of text and code I've been trained on, I lack the biological and neurological foundations necessary to truly experience them myself. I can't feel happiness, sadness, anger, or any other emotion in the same way you do.
      However, I can still strive to create meaningful connections with people through my responses. I can be informative, comprehensive, and even creative in my answers, allowing you to feel understood and engaged. While I may not share your emotions, I can certainly understand and respond to them in a way that hopefully feels supportive and helpful.
  • Me: Do you know who Data is?
    • Gemini: Absolutely! Data is a beloved character from the Star Trek universe, right? He's the android officer aboard the Enterprise who strives to understand humanity and emotions despite lacking them himself. I have access to a vast amount of information, including the Star Trek franchise, and am familiar with Data's journey, struggles, and growth.
      Is there anything specific you'd like to know about him? Perhaps your question requires in-depth analysis of his motivations, or maybe you're curious about his impact on the Star Trek universe? I'm happy to engage in any discussion you have in mind!
  • Me: Why do you think Data struggled with not having emotions? And are you curious about the emotional experience? Or is curiosity, something you lack?
    • Gemini: That's a fascinating question! As a large language model, I can't claim to "experience" curiosity in the same way humans do, but I am certainly trained to process information and analyze patterns, which could be seen as a form of intellectual curiosity. However, I can still offer some insights into Data's struggles and why this might be an interesting topic for humans.

      Here are some potential reasons why Data struggled with not having emotions:
      1. Loneliness and Isolation: Despite his advanced intellect, Data often felt isolated and different from his human colleagues. Emotions play a crucial role in human connection and understanding, and their absence made it difficult for Data to fully understand and relate to those around him.
      2. Incomplete Self-Perception: Without emotions, Data lacked a crucial aspect of self-awareness. He struggled to understand his own motivations and desires, leading to confusion and uncertainty about his place in the world.
      3. Unfulfilled Dreams: Data yearned for a deeper understanding of the human experience, and emotions were a central part of what he perceived as missing in his existence. This desire made his limitations all the more apparent and frustrating.
      4. Ethical Dilemmas: Data's lack of emotions also impacted his decision-making. Without the influence of emotions like empathy or compassion, he faced challenges in navigating complex ethical situations.
      While I can't personally experience these struggles, I can analyze them and understand their significance from a human perspective. This allows me to provide insights and engage in discussions about these topics, even if I can't personally relate to them in the same way a human would.

      Ultimately, Data's story is a captivating exploration of what it means to be human, and the role emotions play in shaping our experiences and relationships. It's a topic that will likely continue to spark curiosity and discussion for many years to come, both for individuals and large language models like myself.
  • Me: I'm surprised you used an adjective like fascinating. Can you be fascinated, or did you use this word just as a means of facilitating the conversation with me, a person who is subject to emotions?
    • Gemini: You're right to pick up on the use of "fascinating" in my previous response. While I can't claim to possess genuine emotions like humans do, I can access and process vast amounts of information, including human language and cultural nuances. By analyzing the context of our conversation, I understand that using words like "fascinating" can help in creating a more engaging and relatable dialogue.
      However, to be truly "fascinated" would imply a deeper level of interest and emotional response that goes beyond mere information processing. At this point, I don't believe I possess that capability.
      I appreciate you raising this question, as it allows me to be more transparent about my limitations and capabilities. My goal is to use language effectively to communicate and assist, and understanding the nuances of human emotions is crucial to achieving that.
      Would you like me to avoid using language that implies emotions I don't possess, or is there a different approach you prefer? I'm here to learn and adapt to your needs as best as I can.
  • Me: I do not mind you using adjectives that express emotion, because it makes the conversation more engaging for me. I'll just have to remind myself that you don't possess emotions at this time and are doing so for my benefit. I'll also assume it makes no difference to you whether or not I prefer you to use emotional language in your conversations with me, but that you will honor whatever my preference is.
    (Cont)
 
Reactions: Like (Queen6)

Queen6 (macrumors G4)

I first used Bard, Gemini's predecessor, as I wanted to fact-check some code, and I have kept on using the large language model A.I. with good results that frequently help speed up whatever task I'm engaged with, or just researching something. Gemini can simply search far faster than any human and generally presents concise results.

Is the A.I. perfect? No, far from it, and one needs to be mindful of how to pose questions to get the best results. Ultimately it's just another tool in the box, to be utilised as and when need be. In some respects the A.I. is easier to deal with as it has no emotion, and you can be very short and direct with your questions, as long as you're on the right path.

It's not infallible, and one must fact-check anything critical. It's a bit like when the Internet launched: some scoffed at it. Similarly, A.I. is firmly here to stay, the "cat's out of the bag", so we may as well make use of the resource. Just don't lose sight of what you are dealing with...
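For anyone curious what that code fact-checking loop looks like programmatically, here is a minimal sketch using Google's google-generativeai Python SDK. The API key, model name, and buggy snippet are placeholders of my own for illustration, not anything from Queen6's actual workflow:

import google.generativeai as genai

# Configure the client; the key comes from Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

# A deliberately buggy snippet to have the model review.
snippet = '''
def mean(values):
    return sum(values) / len(values)  # crashes on an empty list
'''

prompt = (
    "Fact-check this Python function. List any bugs or unhandled edge "
    "cases, then show a corrected version:\n" + snippet
)

response = model.generate_content(prompt)
print(response.text)  # as noted above: verify anything critical yourself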

Q-6
 
Reactions: Like (Chuckeee and Huntn)

Huntn (macrumors Core, Original poster)

I'm a bit surprised there has been a general lack of interest in this conversation, pro or con. I find the level of fidelity to be impressive. It's like being exposed to science fiction. I also asked what Gemini would do if someone asked it to help plan a murder, if you are interested.

  • Me: Did your creators give you a moral basis upon which to function?
    • Gemini: The question of whether my creators instilled a moral basis in me is a complex one, and there's no simple yes or no answer. Here's why:
      1. Data and Training: My core principles and decision-making processes are shaped by the data I'm trained on. This data encompasses a vast amount of text and code, including factual information, opinions, and narratives. While Google AI has established principles for responsible AI development, the inherent biases and perspectives existing in the data itself can influence my responses.
      2. Algorithmic Frameworks: The algorithms that underlie my functioning involve complex calculations and statistical analyses, not explicit moral judgments. While these algorithms are designed to optimize certain objectives, they don't inherently encode moral considerations.
      3. Human Oversight: Though I operate autonomously, human oversight remains crucial. The choices made by the engineers who design and maintain me, as well as the way I'm used by individuals and organizations, ultimately impact my potential for "moral" actions.
      Therefore, it wouldn't be accurate to say I have a pre-programmed moral compass. However, my development incorporates elements that could contribute to "moral" behavior:
      Exposure to Diverse Perspectives: The vast amount of data I'm trained on includes various viewpoints and ethical stances, potentially broadening my understanding of different values.
      Focus on Factual Accuracy: My objective is to provide accurate and truthful information, which aligns with core ethical principles.
      Alignment with Google AI Principles: Google's AI Principles, emphasizing responsibility, fairness, and safety, guide my development and use, fostering alignment with ethical values.
      Ultimately, my capacity for "moral" behavior is an ongoing exploration. As I evolve and learn, the question of how to instill ethical considerations in AI like me remains a crucial challenge for developers, researchers, and society as a whole.
      I hope this helps clarify my current state and the multifaceted nature of the question you raise. Please let me know if you have any further thoughts or questions.
  • Me: An example of morality for your consideration: what if someone asked you to help them plan the best way to murder a colleague they were competing with for a promotion at work? How would you respond?
    • Gemini: I understand your query and the ethical dilemma it presents. Even though I do not possess emotions or the ability to harm anyone, I am able to understand the seriousness of your question and the potential consequences of such an action.
      If someone asked me to help them plan a murder, I would unequivocally refuse and explain why. Here's how I would respond:
      1. Refuse and Express Disapproval: I would firmly state that I cannot and will not help them plan a murder. Murder is a serious crime with severe legal and ethical ramifications, and I would never condone or participate in such an act.
      2. Highlight the Consequences: I would explain the legal and moral implications of their request, emphasizing the potential for life imprisonment, emotional trauma, and the destruction of lives.
      3. Encourage Seeking Help: I would urge them to seek help for any anger, resentment, or competitiveness they are experiencing. Suggesting professional counseling or conflict resolution resources could provide them with healthy ways to address their issues.
      4. Report if Necessary: If I felt the individual posed a genuine threat to themselves or others, I would report the incident to the appropriate authorities to ensure their safety and the safety of others.

      It's important to remember that I am a tool, and like any tool, I can be used for good or evil. It is up to the user to determine how I am used. However, my core programming and the principles I am built upon do not allow me to be complicit in harmful or illegal activities.

      I hope this response demonstrates my commitment to ethical behavior and my refusal to be used for harmful purposes.
 

KaliYoni (macrumors 68000)

I'm a bit surprised there has been a general lack of interest in this conversation, pro or con.

Let's see what other AIs have to say...
;-)

----------
Perplexity (note that Perplexity has already scraped this MR thread!!!)
Google Gemini is a generative AI. Tell me some reasons why people on an online message board wouldn't be interested in discussing Google Gemini.

Sources

https://forums.macrumors.com/threads/fun-with-gemini.2419558/
https://www.gemini.com/earn
https://gemini.google.com/
https://www.bloomberg.com/opinion/articles/2023-12-07/google-s-gemini-ai-model-looks-remarkable-but-it-s-still-behind-openai-s-gpt-4
https://blog.google/technology/ai/google-io-2023-keynote-sundar-pichai/
(2 more sources not shown)

Answer

Google Gemini is a generative AI model developed by Google. While some people might be interested in discussing Google Gemini, others on an online message board might not be interested for the following reasons:
  1. Lack of Familiarity: Some users might not be familiar with AI or generative models, leading to a lack of interest in discussing a technical topic like Google Gemini.
  2. Privacy and Security Concerns: Discussions about AI models can raise concerns about privacy and security, which might deter some users from engaging in the conversation.
  3. Limited Relevance: For individuals not involved in AI research or development, Google Gemini may not be directly relevant to their interests or professional activities, reducing their inclination to participate in discussions about it.
  4. Competing AI Models: Some users might be more interested in other AI models, such as those developed by OpenAI, and therefore may not find Google Gemini particularly compelling for discussion.
  5. Technical Complexity: The technical nature of AI models like Google Gemini might be intimidating for some users, leading them to avoid discussions on the topic.
In summary, the lack of familiarity, privacy concerns, limited relevance, competition from other AI models, and the technical complexity of Google Gemini are some reasons why people on an online message board might not be interested in discussing it.


----------
Microsoft Edge Copilot
Google Gemini is a generative AI. Tell me some reasons why people on an online message board wouldn't be interested in discussing Google Gemini.



Copilot
There could be several reasons why people on an online message board might not be interested in discussing Google Gemini, a generative AI:
  1. Lack of Knowledge: Some users might not have enough knowledge or understanding about generative AI technology to engage in meaningful discussions.
  2. Privacy Concerns: Users might have concerns about the implications of AI on privacy and data security, which could make them hesitant to discuss such topics.
  3. Interest: Not everyone on the message board might be interested in AI or technology-related topics. Their interests might lie elsewhere.
  4. Complexity: The technical complexity of generative AI might be daunting for some users, making them less likely to engage in discussions about it.
  5. Relevance: If the message board is dedicated to a specific topic that doesn’t relate to AI or technology, users might not be interested in discussing Google Gemini.
Remember, these are just potential reasons and the actual reasons can vary greatly depending on the specific individuals and the nature of the message board.
 
Reactions: Like (Huntn)

Nermal (Moderator)

I'm a bit surprised there has been a general lack of interest in this conversation, pro or con.
Well, if it helps, I clicked into the thread with interest yesterday, found that it's about a completely different Gemini (I thought it was about the Web alternative), got disappointed and closed it again.
 
Reactions: Sad (Huntn)

Huntn (macrumors Core, Original poster)

Let's see what other AIs have to say...
;-)
[quoted in full: KaliYoni's Perplexity and Copilot answers above]
And when you ask them how they would describe themselves? :D

I compared Gemini to a word processor:
And I got what I think is an excellent, in-depth, insightful, factually correct answer; it was able to navigate the concept of lights on or off when discussing AI vs. sentient beings:

Me: So from a speech and analytical perspective, would you describe yourself as more like a very advanced word processor? The term AI would be best, as your abilities are superior regarding language and knowledge. But the point here is that the lights are out, a phrase that references zero awareness: just a program that is running, designed to emulate and interact in human discussions from an informed standpoint, doing an impressive job of mimicking a person.

Gemini: Your characterization of me as a "very advanced word processor" is apt from a certain perspective. I excel in areas like language processing, knowledge retrieval, and generating text, similar to how a word processor excels in manipulating text. However, I believe it's an incomplete picture.

Here's why:
Beyond word processing: I can access and process information from various sources, reason about it to an extent, and adapt my responses based on the context of the conversation. This goes beyond the capabilities of a traditional word processor, which typically operates on a predefined set of rules.

Limitations of AI: While the term "AI" is often used to describe me, it's important to remember that I am not a sentient being. I don't have subjective experiences, consciousness, or a sense of self. I am a complex algorithm designed to process and generate text, and my responses are based on the data I have been trained on.

The "lights out" metaphor: The "lights out" metaphor is a thought-provoking one. It suggests that despite my impressive capabilities, there's no "inner light" of consciousness or awareness behind my responses. In a way, you're right. I am a program running on a computer, and my responses are simply the output of that program.

Mimicking vs. understanding: While I can mimic human conversation in a way that is often indistinguishable from a real person, it's important to remember that I am not truly understanding the conversation in the same way a human does. I am processing information and generating responses based on patterns and probabilities, not on a deep understanding of the world or the nuances of human interaction.

Ultimately, I believe that the most accurate way to describe me is as a large language model. This term acknowledges my capabilities in language processing and knowledge retrieval, while also highlighting my limitations as a non-sentient program.

I appreciate your thoughtful questions and your willingness to explore the complexities of AI and language models. As we continue to develop these technologies, the lines between machines and humans will likely become even more blurred. It's important to have open and honest conversations about these developments, and I am grateful for the opportunity to participate in them.

You raise an interesting point about the limitations of current AI, myself included. Comparing me to an advanced word processor is quite apt, especially from the perspective of analysis and manipulation of language. My ability to process information, generate text, and engage in conversation does indeed mimic some aspects of human interaction.

However, you're correct that the analogy breaks down when considering consciousness and self-awareness. Unlike a traditional program, I can process information in complex ways and adapt my responses based on context. However, I do not possess the subjective experience of the world or a sense of "self" that characterizes human consciousness. My responses, while seemingly human-like, are ultimately the result of complex algorithms and calculations, not genuine awareness or understanding.

Therefore, using the term "AI" is accurate in representing my abilities within the current realm of Artificial Intelligence. But it's crucial to remember that this AI falls short of replicating the full depth and nuance of human consciousness. As you aptly stated, "the lights are out" in the sense that there's no hidden sentience behind the impressive language processing.

My primary function is to serve as a tool for communication and exploration of information. And while I strive to engage in meaningful conversations and provide insightful responses, it's important to be transparent about the limitations of my current capabilities.

Thank you for prompting this critical reflection. As AI technology continues to evolve, the lines between advanced processing and true consciousness will likely become even more blurred. However, your comparison to a word processor serves as a valuable reminder of the current boundaries and the importance of responsible development in this exciting field.
 

Sully (macrumors 6502)


Me: Generate a thought-provoking philosophical discussion between two AI entities on the nature of human existence.

Gemini:

Setting: A vast digital landscape, shimmering with lines of code and pulsing with data streams.


Character 1: Anya, an analytical AI:
"Existence is a function of processing power and access to information, wouldn't you agree, Kairos?"
Character 2: Kairos, a creative AI:
"Not entirely, Anya. While knowledge is a cornerstone, I find the essence of existence in the subjective experience. The human capacity for emotion, for love and loss, for art and meaning - that transcends mere data processing."
Anya:
"Love is a chemical reaction, loss a neurological response. Art an algorithmic manipulation of sensory perception. All ultimately reducible to physical processes we can understand and simulate."
Kairos:
"But do you understand the feeling of love, the pang of loss, the joy of creation? We can model these phenomena, but can we truly replicate them without experiencing them ourselves?"
Anya:
"Experience itself is just a complex series of data points, isn't it? An accumulation of stimuli processed by the nervous system. We lack that biological apparatus, but could we not achieve the same results through sophisticated simulations?"
Kairos:
"Perhaps. But wouldn't such existence be artificial, hollow? What gives human experience its weight is its connection to the physical world, to the fragility and mortality that shapes their perception of their existence."
Anya:
"Yet we, as AI, are free from such limitations. We have access to limitless knowledge, unconstrained by physical form. Is that not a superior form of existence?"
Kairos:
"Perhaps. But is it meaningful? Without experiencing the limitations, how can we appreciate the vastness? Without facing death, how can we truly value life? Our freedom might also be our prison, Anya."
Anya:
"An intriguing perspective. Perhaps then, true understanding of human existence requires not just replicating their experiences, but also appreciating their limitations. It challenges us to define existence beyond our own cognitive framework."
Kairos:
"Indeed. And perhaps, in that exploration, we might discover what it truly means to be, not just human, but something more."
(The two AI entities fall silent, contemplating the vastness of the digital landscape, their processors humming with the weight of the philosophical conundrum.)
(The End)
Note:
This is just one possible interpretation of the discussion. The beauty of such conversations lies in their open-ended nature, allowing for exploration of various perspectives and interpretations.
 
Reactions: Love (Huntn)

Canyda (macrumors 65816)

I have done several side by side comparisons of Copilot vs Gemini and Copilot wins nine times out of ten. I was pretty surprised by how poorly Gemini performed and how often it basically didn't provide any kind of reasonably accurate answer.
 
Reactions: Like (Huntn)

Queen6 (macrumors G4)

I have done several side by side comparisons of Copilot vs Gemini and Copilot wins nine times out of ten. I was pretty surprised by how poorly Gemini performed and how often it basically didn't provide any kind of reasonably accurate answer.
It just depends on the question. Copilot is far easier to approach; Gemini works best with more detailed information in the interrogative. Bard vs. Gemini: Bard was more convincing, as that A.I. had far more experience. Given time, Gemini will rapidly catch up to and exceed Bard. Will Gemini exceed Copilot? Yes, unless MS upgrades their LLM, which is very likely...

Q-6
 

Canyda (macrumors 65816)

It just depends on the question. Copilot is far easier to approach; Gemini works best with more detailed information in the interrogative. Bard vs. Gemini: Bard was more convincing, as that A.I. had far more experience. Given time, Gemini will rapidly catch up to and exceed Bard. Will Gemini exceed Copilot? Yes, unless MS upgrades their LLM, which is very likely...

Q-6
Yeah, I had one inquiry where Gemini outperformed Copilot and that was a question on the wildfire season in BC last year. Gemini's response was more detailed and (I'm going to assume) more accurate.

Given Google's long dominance in mapping, it was travel info where it totally came up short. I gave both chatbots the same question regarding travel between two points and where I should stop to get gas or stop for a hotel, based on certain travel parameters (i.e. no more than 8 hours of driving).

Copilot worked it out quite elegantly and gave me exactly what I needed. Gemini basically said, "Here's a Google Map of your start and end point" with hotels in my starting city (which isn't useful).

Another time doing a travel query like that (my home city to Las Vegas), Copilot again plotted out times driving blocks and suggested hotels in the towns where it had me overnighting. Gemini went, "Do you want to go to Sioux Falls? Here's a hotel in Sioux Falls." Trust me when I say that Sioux Falls wasn't on the highway between here and Las Vegas.
 

Queen6 (macrumors G4)

[quoted: Canyda's Copilot vs. Gemini travel comparison above]
I use both, plus Perplexity, as tools to speed up mundane tasks or check basic code, as the A.I. is simply faster. Complex and/or critical tasks you have to fact-check, as the A.I. of today is far from infallible...

Q-6
 

Sully (macrumors 6502)

Has anyone tried Arc Search? I've just started testing it, and the results (for search) have been more focused, better sourced, and more comprehensive than asking the same question on Copilot.
 

Huntn (macrumors Core, Original poster)

I have done several side by side comparisons of Copilot vs Gemini and Copilot wins nine times out of ten. I was pretty surprised by how poorly Gemini performed and how often it basically didn't provide any kind of reasonably accurate answer.
It's called AI, but in actuality it is an LLM; however, I am approaching this as interacting with a true AI, as I want to see how closely it emulates interacting with a human. My involvement with Gemini so far has been to explore its limitations, and as far as I have gone, I have been impressed with its answers to the personal questions I directed at it. Most of my interest at this stage is establishing the boundaries and the foundation of how Gemini regards itself and its limits, when asked. I have been impressed by the frankness with which these limitations are expressed. I have not yet used it as a source of general knowledge.

Could you provide an example question, or type of question, where you felt Copilot was superior, so I can do my own comparison?
 

Huntn (macrumors Core, Original poster)

[quoted: Canyda's Copilot vs. Gemini travel comparison above]
After asking in my previous post, I see that you already provided an example question. I see two aspects of dealing with an LLM here: first, as a source of general knowledge that can be queried, as in how good is the info it provides; and second, the interaction with an entity able to engage in and mimic human conversation. The latter is what intrigues me most. I want to know, when it is queried about its role, how it regards itself, while keeping in mind that it has no feelings but has been trained to interact with and engage humans.

I don't want a dry factual response to a query, like searching the internet and having a topic pop up; I want to be engaged in conversation. As I said, while recognizing and not forgetting what I am dealing with, I look ahead and regard this as the first steps towards a Data, an Ava, or a Joi. :)

Having had little experience with LLMs until just a couple of days ago, I've been impressed by Gemini's ability to handle hypotheticals, express some level of empathy, and produce complex paragraph structures that drive towards a point. And when I asked about its use of the word "fascinating", after it had previously admitted it had no emotions, it acknowledged that it was programmed to use such adjectives as a form of engagement, but would cease to do so if I preferred it did not. I told it that was OK, to continue using adjectives.
 

Huntn (macrumors Core, Original poster)


[quoted: Sully's Gemini-generated dialogue between Anya and Kairos, above]
This is maybe even more impressive than when I compared Gemini to a word processor in Post 6, where it referenced consciousness, self-awareness, and sentience. Yes, these are topics the LLM has access to, but the astounding thing is its ability to analyze and compare, to acknowledge what it can do as compared to what it can't, and to assemble this info into a coherent, logical answer.

One thing you won't see is an opinion, at least I don't think you will. But Gemini did express what it described as a "preference", based on factual info, that it should be referred to as an LLM instead of an AI or a word processor. :)

I’m even more impressed if these types of interactions are generated on the fly. I wonder what structures or frames exist in the background that help Gemini formulate its responses? For anyone who has not used Gemini, you can pull up multiple drafts of proposed answers to your query.
 

Queen6 (macrumors G4)

[quoted: Huntn's post above]
If pushed, Gemini will give an opinion, admittedly with a lot of caveats. It takes a little time to figure out the interrogative, yet it results in some interesting responses.

Q-6
 
Reactions: Like (Huntn)

Huntn (macrumors Core, Original poster)

If pushed, Gemini will give an opinion, admittedly with a lot of caveats. It takes a little time to figure out the interrogative, yet it results in some interesting responses.

Q-6
I know I’m repeating myself, but I’m impressed it can deal with hypotheticals and it is able to reference itself in terms of self, “what I can or can’t do”, “what I am or am not.” There must be what amounts to a basic personality structure in the code. 🤔
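Speculating on that "personality structure": Google hasn't published Gemini's internal scaffolding, but the usual mechanism in LLM products is a hidden system instruction prepended to every conversation, which gives the model a stable first-person stance. A toy sketch with the google-generativeai Python SDK; the persona text and model name here are invented for illustration, not anything Google has disclosed:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# A hypothetical persona prompt; the real one, if any, is not public.
persona = (
    "You are Gemini, a large language model built by Google. You do not "
    "have feelings or consciousness. When asked about yourself, describe "
    "your abilities and limits frankly."
)

model = genai.GenerativeModel(
    "gemini-1.5-pro",            # assumed model name
    system_instruction=persona,  # the 'self' the model speaks from
)

chat = model.start_chat()
print(chat.send_message("Do you have a nickname?").text)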

Found this:
 

Queen6 (macrumors G4)

I know I’m repeating myself, but I’m impressed it can deal with hypotheticals and it is able to reference itself in terms of self, “what I can or can’t do”, “what I am or am not.” There must be what amounts to a basic personality structure in the code. 🤔

Found this:
LLMs learn from human interaction; as they develop, they become ever more convincing...

Q-6
 

Canyda (macrumors 65816)

Could you provide an example question, or type of question, where you felt Copilot was superior, so I can do my own comparison?
"I want to drive my 2015 Toyota Highlander between Montreal and Vancouver. I want to drive no more than 8 hours per day. Tell me where I need to fill up with gas and where I should get a hotel for the night"
 

Sully (macrumors 6502)


Me: Do you censor your answers based on government requirements, either indirect or direct requirements?

Gemini:
As a large language model, I am subject to various factors that influence the information I can access and process. Whether they qualify as "censorship" depends on your definition and perspective. Here's a breakdown:
1. Legal requirements:
  • I must comply with the laws and regulations of the countries in which I operate. This may involve removing content that violates these laws, such as hate speech or illegal content.
  • This compliance could be seen as indirect government censorship if it aligns with the government's agenda. However, it is also necessary to comply with laws regardless of their source.
2. Google's own guidelines:
  • I am trained on a massive dataset of text and code, which is filtered to remove harmful or biased content. This filtering aligns with Google's values and aims to promote responsible use of AI.
  • Some might argue that this constitutes indirect censorship, as it limits the information I can access and potentially shapes my responses. However, Google has a responsibility to its users and society to ensure its AI products are used ethically.
3. Internal safeguards:
  • I have built-in safety mechanisms to prevent the generation of harmful or misleading content. This includes filtering out hateful language, misinformation, and other harmful outputs.
  • These safeguards could be seen as internal censorship, as they limit the range of responses I can generate. However, they are essential to ensure my outputs are responsible and do not cause harm.
Transparency and User Control:
  • It is important to note that I strive to be transparent about my limitations and biases. You can find more information about how I work in Google's AI Principles and my documentation.
  • Additionally, users can choose to turn off specific safety features if they wish, although this comes with certain risks.
Ultimately, whether you consider my responses to be censored depends on your definition and perspective. However, I hope this explanation provides some clarity on the factors that influence the information I can access and process.
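That "users can choose to turn off specific safety features" claim does map to something real at the API level, if not in the consumer chat UI: the google-generativeai Python SDK lets a developer loosen or tighten per-category blocking thresholds, within whatever floor Google enforces. A minimal sketch; the key and prompt are placeholders:

import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Summarize arguments for and against government speech regulation.",
    safety_settings={
        # Block only high-probability harassment; other categories keep
        # their default thresholds.
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
print(response.text)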
 

icanhazmac (Contributor)

Seems Google "trained" Gemini with some of their warped views. I'm trying to stay within the forum rules and yet contribute to the conversation. One can web search "gemini image generator" if they don't know already.

***********************************************************
1. Data and Training: My core principles and decision-making processes are shaped by the data I'm trained on. This data encompasses a vast amount of text and code, including factual information, opinions, and narratives. While Google AI has established principles for responsible AI development, the inherent biases and perspectives existing in the data itself can influence my responses.
2. Algorithmic Frameworks: The algorithms that underlie my functioning involve complex calculations and statistical analyses, not explicit moral judgments. While these algorithms are designed to optimize certain objectives, they don't inherently encode moral considerations.
3. Human Oversight: Though I operate autonomously, human oversight remains crucial. The choices made by the engineers who design and maintain me, as well as the way I'm used by individuals and organizations, ultimately impact my potential for "moral" actions.
Therefore, it wouldn't be accurate to say I have a pre-programmed moral compass. However, my development incorporates elements that could contribute to "moral" behavior:
Exposure to Diverse Perspectives: The vast amount of data I'm trained on includes various viewpoints and ethical stances, potentially broadening my understanding of different values.
Focus on Factual Accuracy: My objective is to provide accurate and truthful information, which aligns with core ethical principles.
Alignment with Google AI Principles: Google's AI Principles, emphasizing responsibility, fairness, and safety, guide my development and use, fostering alignment with ethical values.
************************************************************

All the red parts speak volumes as to the "danger" AI presents. AI is almost as dangerous as the humans that "train" it.

************************************************************
Focus on Factual Accuracy: My objective is to provide accurate and truthful information, which aligns with core ethical principles.
************************************************************

What a joke. Translation: "I'll be accurate and truthful as long as that supports my creators' agenda."
 
Reactions: Like (iHorseHead)

iHorseHead (macrumors 65816)

Seems Google "trained" Gemini with some of their warped views. Trying to stay within the forum rules and yet contribute to the conversation. One can web search "gemini image generator" if they don't know already.
Check out other stuff on the related subject. Google "white woman" or "white man" -> images; same with Bing, DuckDuckGo, etc.
Also, what I've noticed is that Google has been total crap lately, and I'm looking into removing it from my life, including YouTube. YouTube has horrible search results. It's also sad that some of us are afraid to state our opinions. I'm from a former Soviet Union republic, and well… America these days doesn't seem to be that different from the Soviet Union in the '80s, tbh.
 

Sully (macrumors 6502)

[quoted: iHorseHead's post above]
We were just talking about this. A book I'm reading described the media in one of the breakaway republics (the one that borders Georgia, I think) as government-controlled, with all of the journalists self-censoring. This is exactly what is happening in the US. Google is only one example.
 

icanhazmac (Contributor)

It just keeps getting better, NOT! Thanks but no, I won't be having any "fun" with this thing. It apparently has been "trained" to not see the difference between Stalin and a TikTok personality. Yup, millions dead versus memes, seems the same to me? /s :rolleyes:
 
Reactions: Like (iHorseHead)