
Navigating the Intricacies of Creator AI: A Practical Perspective on GLLMM


In the realm of advanced AI, Gollem AI, short for Generative Large Language Multi-Modal Model (GLLMM), represents a significant leap forward. Its capabilities extend beyond mere information processing to creative generation across domains such as music, artwork, and literature. However, this advancement brings a spectrum of societal and regulatory challenges that merit careful consideration.


The Golem Analogy and Societal Entanglement: The name "Gollem" draws on Jewish folklore, in which a golem is a clay figure animated through mystical means, symbolizing the dual potential for societal benefit and harm inherent in GLLMM technology. This analogy highlights the precarious balance between utility and risk, particularly the risk of societal entanglement. GLLMM AI, as a form of "creation AI," surpasses first-generation "curation AI" in both its potential benefits and its potential harms. [Ref]


Regulatory Velocity and Corporate Responsibility: The rapid evolution of AI technologies, such as GLLMM, presents a significant regulatory challenge. The need for protective measures against potential harm must be balanced with the encouragement of innovation. Eric Schmidt, former Google Executive Chairman, emphasizes the difficulty of government-based AI oversight, suggesting a preference for corporate self-regulation. However, this approach has limitations, which are evident in online harms like privacy invasion and misinformation. A more structured regulatory framework, balancing innovation with public safety, is essential. [Ref]


The Spectrum of AI Regulation: Regulating AI, particularly advanced models like GLLMM, requires a nuanced approach. AI's multifaceted nature means "one-size-fits-all" regulation could be either too restrictive or too lenient, depending on the context. AI applications range from benign uses in entertainment to potentially dangerous ones in critical infrastructure or public safety. Hence, regulation must be risk-based and targeted. [Ref]


Societal Impact and Legal Challenges: GLLMM's societal impact, particularly its potential to exacerbate inequality and erode democratic processes and the rule of law, raises complex legal challenges. Existing legal frameworks, focused primarily on individual rights and harms, may be insufficient to address the broader societal impacts of AI. This gap necessitates rethinking legal structures to incorporate a societal perspective in AI governance. [Ref]


Practical Examples of Societal Harm: Concrete instances of societal harm by AI, such as biased facial recognition, voter manipulation, and AI-assisted public decision-making, illustrate the multidimensional nature of these challenges. These examples highlight the need for regulatory frameworks that address individual and collective rights and consider the long-term societal implications of AI deployment. [Ref]


The emergence of Gollem AI presents a new frontier in technological advancement with significant societal implications. As we embrace its potential, we must also be vigilant about its risks. This calls for a dynamic regulatory approach that adapts to the fast-paced evolution of AI, ensuring both innovation and societal welfare. The conversation around GLLMM must continue, involving diverse stakeholders, from policymakers to technologists and civil society, to navigate this complex landscape effectively.


Consider that when GLLMMs were first widely released in March 2023, their training data included research papers that could teach readers how to make nerve gas from products commonly available at hardware stores. Over 100 million users gained access to this information through a tool that could break the process down step by step, dramatically lowering the barrier to entry. Worse still, researchers could not inspect the trained model and articulate which data it did and did not contain once that data had been encoded.


Because of dynamic learning and updating techniques, data anonymization and privacy techniques, the transformative nature of the training process itself, the complexity of the encoding mechanisms, and the sheer diversity and volume of training data, it is possible to have only a general understanding of the types of data and sources used to train an LLM; identifying exact data points within the model is not feasible. In short, once a GLLMM is built, it becomes more opaque over time, and our ability to assess its risks before they are exposed is just as opaque.
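To make the opacity point concrete, below is a minimal, purely illustrative sketch in Python using hypothetical toy data. It shows why training text cannot simply be read back out of a model's parameters: even for a tiny model trained on three sentences, the learned weights are dense floating-point values that carry the statistical influence of the data rather than retrievable copies of it. This is a sketch under simplifying assumptions, not an analysis of any production GLLMM.

```python
# Illustrative sketch only: training text is not stored verbatim in model weights.
# Toy data and model; real GLLMMs are vastly larger, which makes the opacity
# problem discussed above far more severe.

import numpy as np

training_sentences = [
    "the model learns statistical patterns",
    "weights encode correlations not documents",
    "training text is transformed beyond recognition",
]
labels = np.array([0, 1, 1])

# Build a toy bag-of-words representation of the training sentences.
vocab = sorted({w for s in training_sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(training_sentences), len(vocab)))
for row, sentence in enumerate(training_sentences):
    for word in sentence.split():
        X[row, index[word]] += 1.0

# Train a tiny logistic-regression "model" by gradient descent.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.01, size=len(vocab))
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted probabilities
    grad_w = X.T @ (p - labels) / len(labels)  # gradient of the log loss
    grad_b = float(np.mean(p - labels))
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The "encoded" model is just an array of floats. Searching its raw bytes for
# any training sentence finds nothing: the text was never stored, only its
# statistical influence on the parameters.
serialized = w.tobytes() + np.float64(b).tobytes()
for sentence in training_sentences:
    print(sentence, "->", sentence.encode() in serialized)  # False every time
```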
