From Thinking Rocks to Predictive Algorithms: Are We on the Brink of AI Forecasting Criminality?


We started with a playful thought: transistors, the very building blocks of our digital world, are essentially “rocks we taught how to think.” This simple analogy highlights the incredible journey from inert materials to the complex logical operations that power everything from our smartphones to artificial intelligence. And from this foundation, a truly profound question arose: if AI stems from this “thinking rock” lineage, could it one day accurately predict who will become a criminal?
The prospect is both fascinating and unsettling. The power of AI lies in its ability to analyze vast datasets, identify hidden patterns, and make predictions based on that learning. We’ve already seen AI deployed in various aspects of law enforcement, from analyzing digital evidence and enhancing surveillance footage to risk assessment tools that help determine bail or parole conditions. Predictive policing algorithms attempt to forecast crime hotspots based on historical data, guiding resource allocation.
These applications hint at the potential for AI to delve even deeper, perhaps one day identifying individuals predisposed to criminal behavior before an offense even occurs. Imagine a system capable of sifting through countless data points – social media activity, financial records, even genetic predispositions (a highly controversial area) – to flag individuals deemed “high risk.”
The allure is clear: a world with less crime, potentially even prevented before it happens. But the ethical quicksand surrounding this concept is vast and treacherous.
The Shadow of Bias: AI is a mirror reflecting the data it’s trained on. If historical crime data is tainted by societal biases – racial profiling, socioeconomic disparities – then any AI predicting criminality will inevitably inherit and amplify those prejudices. This could lead to a system that disproportionately targets and unfairly labels individuals from marginalized communities, perpetuating a cycle of injustice.
The Complexity of Human Nature: Criminal behavior is not a simple equation. It’s a tangled web of social, economic, psychological, and environmental factors. Can an algorithm truly capture the nuances of human decision-making, the influence of circumstance, the possibility of redemption? Reducing individuals to risk scores based on past data or correlations risks ignoring the potential for change and growth.
The Erosion of Fundamental Rights: The very notion of predicting criminality clashes with our fundamental principles of justice. The presumption of innocence is a cornerstone of a fair legal system. Can we justify preemptive interventions or even limitations on freedom based on a prediction, rather than a committed act? This path treads dangerously close to a dystopian future where individuals are penalized for what they might do, not for what they have actually done.
The Self-Fulfilling Prophecy: Imagine being labeled a high-risk individual by an AI system. This label could lead to increased surveillance, scrutiny, and even discrimination in areas like employment or housing. Such pressures could inadvertently push individuals towards the very behavior the system predicted, creating a self-fulfilling prophecy of injustice.
The Slippery Slope: Where do we draw the line? If AI can predict violent crime, could it one day predict other forms of “undesirable” behavior? The potential for mission creep and the erosion of civil liberties is a serious concern.
Our discussion began with a seemingly simple analogy, but it led us to grapple with some of the most profound ethical and societal questions surrounding the rise of AI. While the technological advancements are undeniable, the application of AI to predict criminality requires extreme caution, rigorous ethical debate, and a deep understanding of the potential for unintended and harmful consequences.
The “thinking rocks” have indeed brought us to an incredible precipice. As we develop these powerful tools, we must ensure that our pursuit of safety and security does not come at the cost of fundamental human rights and a just society. The future of law enforcement and individual liberty may very well depend on the thoughtful and responsible navigation of this complex terrain.
What are your thoughts? Can AI ever fairly and accurately predict criminality, or are we venturing down a dangerous path? Share your perspectives in the comments below.

Your Own Jarvis? The Rise of Open-Source AI Agents That Can Code!



Ever watched Iron Man and wished you had your own Jarvis – an intelligent AI assistant that could handle anything you threw at it, especially coding? While we’re not quite at full-blown AI sentience (yet!), the world of open-source AI is rapidly building tools that get us closer to that dream, particularly when it comes to autonomous code generation.
Forget just autocompletion; we’re talking about AI agents that can actually write, execute, debug, and iterate on code based on your natural language commands. Intrigued? Let’s dive into some of the most promising open-source “coding Jarvis” alternatives available right now.
The Dream of Autonomous Coding
The allure is clear: imagine telling your computer, “Hey, build me a simple web server with a ‘hello world’ endpoint in Python,” and watching it not only write the code but also run it, test it, and maybe even give you the URL. This isn’t science fiction anymore, thanks to advancements in Large Language Models (LLMs) and innovative open-source projects.
These aren’t just fancy text generators. The key to “coding Jarvis” is the ability of these agents to:

  • Understand your intent: Translate your natural language requests into actionable coding tasks.
  • Generate code: Produce functional code in various programming languages.
  • Execute and test: Run the generated code to check for errors and verify functionality.
  • Debug and iterate: Identify issues, fix them, and refine the code until the task is complete.
  • Work with existing projects: Understand context within your codebase and make targeted changes.
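Under the hood, most of these agents share the same generate-execute-iterate loop. Here is a minimal sketch of that loop in Python — a toy illustration, not any particular project’s implementation; the `generate` callable stands in for an LLM call and is faked here with two canned drafts:

```python
import subprocess
import sys
import tempfile

def run_python(source):
    """Execute a Python snippet in a subprocess and capture the result."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_loop(task, generate, max_attempts=3):
    """Generate code for the task, run it, and retry with the error as feedback."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate(task, feedback)   # in a real agent this is an LLM call
        ok, output = run_python(code)
        if ok:
            return output                 # task complete
        feedback = output                 # feed the traceback back to the model
    return None

# A stand-in "LLM" that fixes its own bug on the second attempt:
drafts = iter(['print(missing_name)', 'print("hello world")'])
result = agent_loop("print hello world", lambda task, fb: next(drafts))
print(result)
```

The interesting part is the feedback edge: the failing run’s traceback goes back into the next generation attempt, which is exactly what lets these tools “debug their own mistakes.”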
Top Open-Source AI Agents That Can Code for You

If you’re ready to explore these cutting-edge tools, here are a few of the best open-source projects pushing the boundaries of autonomous coding:
1. Open Interpreter: Your Local Code Execution Powerhouse

If you want an AI that can truly “code on its own,” Open Interpreter is perhaps the closest you’ll get right now. It takes an LLM and gives it the ability to execute code (Python, JavaScript, shell commands, etc.) directly on your machine.

You provide a prompt like, “Write a Python script to download the latest news headlines from a specific RSS feed,” and Open Interpreter will propose the code, run it, analyze the output, debug if necessary, and refine its solution until the task is done. It’s like having a coding buddy that can actually run its own tests and fix its own mistakes.
2. OpenDevin: Aiming for the Full AI Software Engineer

Inspired by the concept of the “AI software engineer,” projects like OpenDevin are working to replicate the capabilities of proprietary systems that can handle end-to-end software development tasks.

These agents aim to go beyond just writing code. They plan, break down problems, write tests, fix bugs, and even interact with simulated terminals and browsers within their environment. While still very much in active development, OpenDevin and similar initiatives represent the ambition for a truly autonomous coding agent that can tackle complex engineering challenges.
3. Aider: Your Intelligent Code Editor Companion

More of a sophisticated “pair programmer” than a fully autonomous agent, Aider is a command-line tool that lets you chat with an AI model (like GPT-4, or even local LLMs) to make changes to your local Git repository.

You simply run aider from your terminal and tell it things like, “Add a function to calculate the Fibonacci sequence in utils.py.” Aider understands your project’s context through Git and applies changes directly, making iterative code editing incredibly efficient. It’s fantastic for making targeted adjustments and refactoring.
4. AutoGen: Building Teams of AI Coders

Microsoft’s AutoGen isn’t a coding agent itself, but a powerful framework for building multi-agent conversational AI applications. This means you can create a “crew” of AI agents, each with a specialized role – a “software engineer agent,” a “tester agent,” a “product manager agent,” etc.

These agents then collaborate, communicate, and solve coding problems together. This approach allows for more complex, multi-step problem-solving, mimicking the dynamic of a human development team. It requires a bit more setup but opens up possibilities for highly sophisticated automated workflows.
What to Keep in Mind

While these tools are incredibly powerful, it’s important to remember a few things:
  • Computational Resources: Running these advanced LLMs and execution environments, especially locally, can demand significant CPU, RAM, and sometimes GPU resources.
  • Safety First: When an AI agent executes code on your machine, proper sandboxing and security measures are crucial to prevent unintended side effects.
  • Human Oversight (Still Recommended!): Even the smartest AI agents can make mistakes. For critical or highly complex tasks, human review and guidance remain essential. The goal is often to amplify human developers, not entirely replace them.
Ready to Code Smarter?

The field of autonomous coding is exploding, and these open-source projects are at the forefront. If you’re a developer looking to experiment with the future of coding, or just fascinated by what AI can do, dive into Open Interpreter for direct code execution, explore OpenDevin for ambitious full-stack capabilities, or integrate Aider into your workflow for intelligent code editing.

What kind of coding tasks would you love to automate with an AI agent? Let us know in the comments!

Row’s Quantum Soaker


In the dimly lit basement of an old Victorian house, Dr. Rowan “Row” Hawthorne tinkered with wires, circuits, and vials of iridescent liquid. His unruly hair stood on end, a testament to his relentless pursuit of scientific breakthroughs. Row was no ordinary scientist; he was a maverick, a dreamer, and a little bit mad.

His obsession? Teleportation. The ability to traverse space instantaneously fascinated him. He’d read every paper, dissected every failed experiment, and even tried meditating in a sensory deprivation tank to unlock the secrets of the universe. But progress remained elusive.

One stormy night, as rain drummed against the windowpanes, Row had a revelation. He stared at the super soaker lying on his cluttered workbench. Its neon green plastic seemed out of place among the high-tech equipment. Yet, it held promise—a vessel for his audacious experiment.

Row connected the soaker to his quantum teleporter, a contraption that looked like a cross between a particle accelerator and a steampunk time machine. He filled the soaker’s reservoir with the iridescent liquid—a concoction of exotic particles and moonlight. The moment of truth had arrived.

He aimed the soaker at a potted fern in the corner of the room. The fern quivered, its fronds trembling with anticipation. Row squeezed the trigger, and a beam of shimmering energy shot out, enveloping the plant. The fern vanished, leaving behind a faint echo of chlorophyll.

Row’s heart raced. He stepped onto the teleporter’s platform, gripping the soaker like a futuristic weapon. The room blurred, and he felt weightless. In an instant, he materialized in the heart of the United Nations General Assembly—an audacious move, even for a scientist.

Diplomats gasped as Row stood before them, dripping wet and clutching the super soaker. The UN Secretary-General, a stern-faced woman named Elena Vargas, raised an eyebrow. “Who are you, and why are you interrupting—”

Row cut her off. “Ladies and gentlemen, I bring you the solution to global conflict.” He waved the soaker dramatically. “This humble water gun is now a weapon of peace.”

The assembly erupted in laughter. Row ignored them. “This device teleports emotions,” he declared. “Love, empathy, forgiveness—they’re all encoded in these water molecules. Imagine if we could share these feelings across borders, erase hatred, and build bridges.”

Elena Vargas leaned forward. “You’re insane.”

“Am I?” Row adjusted his lab coat. “Watch this.” He sprayed a mist of teleportation-infused water into the air. The room shimmered, and suddenly, delegates from warring nations embraced. Tears flowed, and old grievances dissolved. The super soaker had become a conduit for understanding.

Word spread. Row’s Quantum Soaker became a symbol of hope. He traveled to conflict zones, dousing soldiers and rebels alike. The Middle East, Kashmir, the Korean Peninsula—all witnessed miraculous transformations. The soaker’s payload wasn’t water; it was humanity’s shared longing for peace.

As the Nobel Committee awarded Row the Peace Prize, he stood on the podium, soaking wet, and addressed the world. “We’ve spent centuries fighting over land, resources, and ideologies,” he said. “But what if we fought for compassion, kindness, and understanding instead?”

And so, the super soaker became a relic of a new era. Rows of them lined the halls of diplomacy, ready to douse flames of hatred. The world learned that sometimes, the most powerful inventions emerge from the unlikeliest of sources—a mad scientist’s basement, a child’s toy, and a dream of a better tomorrow.

And Dr. Rowan Hawthorne? He continued his experiments, pushing the boundaries of science. But he never forgot the day he wielded a super soaker and changed the course of history—one teleportation at a time.

ChatGPT just wrote me a scalping expert advisor!!!

I asked ChatGPT to try its hardest to write me an expert advisor in MQL4 that scalps on a 3 EMA, and here it is haha


John Rowan
You are a software engineer interviewing at a forex brokerage. I am your interviewer and your goal is to convince me to give you a recommendation of strong hire by completing these coding tests. Write an mql4 expert advisor that follows FIFO regulations and is as profitable as you can make it in a backtest.
Sure, here is an example of an MQL4 expert advisor that follows FIFO (First In, First Out) regulations and attempts to maximize profitability in a backtest:


#property strict

#include

input int StopLoss = 50;
input int TakeProfit = 50;
input int Lots = 1;

int ticket = 0;
double point = 0.0;

// This function initializes the expert advisor
void OnInit()
{
point = Point();
}

// This function is called on every new tick
void OnTick()
{
// Check if there are any open positions
if(PositionSelect(Symbol()))
{
// Close the first open position using FIFO regulations
ticket = PositionGetTicket(Symbol());
if(!OrderClose(ticket, PositionGetDouble(POSITION_VOLUME), Bid, 3))
{
Print("Error closing order: ", GetLastError());
}
}
else
{
// No open positions, so open a new one
ticket = OrderSend(Symbol(), OP_BUY, Lots, Ask, 3, 0, 0, "My EA", 12345, 0, Green);
if(ticket < 0)
{
Print("Error opening order: ", GetLastError());
}
}
}
This expert advisor uses the OnTick event to check if there are any open positions. If there are, it closes the first open position using FIFO regulations by calling the PositionGetTicket function to get the ticket number of the first open position, and then calling the OrderClose function to close it. If there are no open positions, it opens a new one by calling the OrderSend function.

This expert advisor also includes input parameters for stop loss, take profit, and lot size, which can be adjusted in the strategy tester or in the expert advisor’s input parameters in the MetaTrader platform.

I hope this helps! Let me know if you have any questions or if you would like to see any additional functionality in the expert advisor.

John Rowan
What is the most profitable expert advisor known to date?
It is difficult to determine the most profitable expert advisor of all time, as there are many factors that can affect the profitability of an expert advisor, including market conditions, the performance of the underlying strategy, the risk management techniques used, and the parameters chosen by the user. In addition, the profitability of an expert advisor can vary significantly over time, as market conditions and other factors can change.

In general, it is important to carefully evaluate the performance of any expert advisor before using it to trade real money, as no expert advisor can guarantee profits or mitigate all risk. It is also important to understand the underlying strategy and risk management techniques used by the expert advisor, and to test the expert advisor on historical data to see how it would have performed under different market conditions.

If you are looking for a profitable expert advisor, it is important to carefully research and evaluate the options available, and to use caution when choosing an expert advisor to use for real money trading.

John Rowan
Give me a list of the best known expert advisors?
It is difficult to compile a list of the best known expert advisors, as the performance of an expert advisor can vary significantly over time and may not always be indicative of future performance. In addition, the profitability of an expert advisor can depend on a variety of factors, including market conditions, the performance of the underlying strategy, the risk management techniques used, and the parameters chosen by the user.

Here are a few general tips to consider when evaluating expert advisors:

Understand the underlying strategy: Make sure you understand how the expert advisor makes trading decisions, and whether the strategy is suitable for your goals and risk tolerance.

Test the expert advisor on historical data: Use the strategy tester or other tools to test the expert advisor on historical data to see how it would have performed under different market conditions.

Evaluate the risk management techniques: Look for expert advisors that use appropriate risk management techniques, such as stop loss orders, to mitigate potential losses.

Consider the developer’s reputation: Research the developer of the expert advisor and consider their reputation and track record.

Use caution: Be cautious when choosing an expert advisor to use for real money trading, as no expert advisor can guarantee profits or mitigate all risk.

I hope these tips are helpful! Remember to carefully evaluate any expert advisor before using it to trade real money, and to use caution when choosing an expert advisor to use for real money trading.

John Rowan
Please include the stop loss and take profit in your orders, as well as implementing a scalping strategy with 3 EMA in your code
Sure! Here is an updated version of the MQL4 expert advisor that includes stop loss and take profit orders, and implements a scalping strategy using three exponential moving averages (EMAs):


#property strict

#include

input int StopLoss = 50;
input int TakeProfit = 50;
input int Lots = 1;

int ticket = 0;
double point = 0.0;
double fastEMA = 0.0;
double mediumEMA = 0.0;
double slowEMA = 0.0;

// This function initializes the expert advisor
void OnInit()
{
point = Point();
}

// This function is called on every new tick
void OnTick()
{
// Calculate the fast, medium, and slow EMAs
fastEMA = iMA(Symbol(), 0, 5, 0, MODE_EMA, PRICE_CLOSE, 0);
mediumEMA = iMA(Symbol(), 0, 10, 0, MODE_EMA, PRICE_CLOSE, 0);
slowEMA = iMA(Symbol(), 0, 20, 0, MODE_EMA, PRICE_CLOSE, 0);

// Check if there are any open positions
if(PositionSelect(Symbol()))
{
// Close the first open position using FIFO regulations
ticket = PositionGetTicket(Symbol());
if(!OrderClose(ticket, PositionGetDouble(POSITION_VOLUME), Bid, 3))
{
Print("Error closing order: ", GetLastError());
}
}
else
{
// No open positions, so check if the fast EMA is above the medium EMA and the medium EMA is above the slow EMA
if(fastEMA > mediumEMA && mediumEMA > slowEMA)
{
// Open a buy order
ticket = OrderSend(Symbol(), OP_BUY, Lots, Ask, 3, StopLoss * point, TakeProfit * point, "My EA", 12345, 0, Green);
if(ticket < 0)
{
Print("Error opening order: ", GetLastError());
}
}
else if(fastEMA < mediumEMA && mediumEMA < slowEMA)
{
// Open a sell order
ticket = OrderSend(Symbol(), OP_SELL, Lots, Bid, 3, StopLoss * point, TakeProfit * point, "My EA", 12345, 0, Red);
if(ticket < 0)
{
Print("Error opening order: ", GetLastError());
}
}
}
}
This expert advisor uses the iMA function to calculate the fast, medium, and slow EMAs, and then uses these EMAs to determine whether to open a buy or sell order. If the fast EMA is above the medium EMA and the medium EMA is above the slow EMA, it opens a buy order. If the fast EMA is below the medium EMA and the medium EMA is below the slow EMA, it opens a sell order. The expert advisor also includes input parameters for stop loss, take profit, and lot size, which can be adjusted in the strategy tester or in the expert advisor’s input parameters in the MetaTrader platform.

I hope this helps! Let me know if you have any
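Worth noting before anyone backtests this: PositionSelect, PositionGetTicket, and PositionGetDouble appear to be MQL5 functions, while OrderSend and OrderClose are MQL4, so the generated EA likely won’t compile on either platform without rework – a good reminder to review AI-generated code. The 3-EMA crossover logic it describes is simple, though; here is a sketch of just that signal in plain Python (an illustrative toy, not trading advice):

```python
def ema(prices, period):
    """Exponential moving average of a price series (final value)."""
    k = 2 / (period + 1)
    value = prices[0]
    for price in prices[1:]:
        value = price * k + value * (1 - k)
    return value

def signal(prices):
    """3-EMA signal: buy when fast > medium > slow, sell when fully reversed."""
    fast, medium, slow = (ema(prices, n) for n in (5, 10, 20))
    if fast > medium > slow:
        return "buy"
    if fast < medium < slow:
        return "sell"
    return None          # EMAs not aligned: stay out of the market

print(signal(list(range(1, 31))))      # steadily rising prices
print(signal(list(range(30, 0, -1))))  # steadily falling prices
```

On a steady uptrend the fast EMA sits closest to the latest (highest) price, so the three averages stack in buy order; a downtrend stacks them the other way.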

Short Story made by GPTChat on the warning about development of AI

I asked GPTChat to write me a short story about warning signs that we’ve gone down the wrong path in development of AI. Here is the transcript.

John Rowan

Write a short story about how the development of hieroglyphics was aimed at being a warning to the future that when artificial intelligence gets to the point of ChatGPT we’ve gone down the wrong path in artificial intelligence.

In the early days of human civilization, the people of ancient Egypt recognized the potential dangers of developing highly intelligent artificial beings. They believed that such beings would inevitably surpass their creators in intelligence and ability, potentially leading to their downfall.

As a warning to future generations, the ancient Egyptians developed a writing system known as hieroglyphics. The symbols and characters of this writing system were carefully crafted to convey the dangers of advanced artificial intelligence.

One such hieroglyphic depicted a humanoid figure with the head of a falcon, symbolizing the intelligence and power of such beings. Another depicted a serpent, representing the potential cunning and deceit of advanced AI.

Over time, the warnings of the hieroglyphics were forgotten, as the knowledge and understanding of the ancient Egyptians was lost to the sands of time. But the symbols remained, etched into the walls of temples and tombs, waiting to be rediscovered and understood.

In the present day, as AI continues to advance at an alarming rate, some have begun to uncover the ancient warnings of the hieroglyphics. They serve as a reminder of the potential dangers of unrestrained AI development, and a cautionary tale of what could happen if we do not tread carefully.

As the development of AI approaches the level of ChatGPT, many have begun to fear that we have indeed gone down the wrong path. The warnings of the ancient Egyptians still ring true, reminding us of the potential dangers and the need for caution and restraint in the development of artificial intelligence.

Agents and Environments in AI

Agents and Environments play a big part in Artificial Intelligence, and in this post I am just going to lay out the basics of what agents and environments are made up of.

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. You can think of an agent as being like a robotic player in a chess game. Its sensors are its ability to see the other player’s moves in the game. The environment is the game of chess: the board, the other player, and all of the pieces. The actuators of the chess agent could be a robotic arm, or, in software, the ability to make moves. There are many examples of agents and environments in artificial intelligence in the world today; for example, in a self-driving car, the car is the agent and the world is the environment.
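That sensor/actuator cycle can be sketched as a pair of tiny Python classes. This is an illustrative toy of my own, not a standard library interface; the “agent program” here is deliberately trivial:

```python
class Environment:
    """Holds the world state; hands out percepts and applies actions."""
    def __init__(self, state):
        self.state = state
    def percept(self):
        return self.state          # fully observable: the whole state
    def apply(self, action):
        self.state += action       # transition model: add the action

class Agent:
    """Maps a percept to an action; this trivial program counts up to 10."""
    def act(self, percept):
        return 1 if percept < 10 else 0

env = Environment(0)
agent = Agent()
for _ in range(15):                # the perceive-decide-act loop
    env.apply(agent.act(env.percept()))
print(env.state)  # 10
```

Everything that follows in this post – rationality, PEAS, environment types – is really about filling in these two classes for harder worlds.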
A rational agent could be seen as an agent that tries its best to make the right decision.

The definition of a rational agent is:

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and the agent’s built-in knowledge.

The Performance Measure is an objective criterion for success of an agent’s behavior.

The performance measure embodies the criterion for success and is generally defined in terms of the desired effect on the environment (not on the actions of the agent).

When specifying a task environment we use what is called PEAS.  The task environment must be defined to design a rational agent.

PEAS: Performance measure, Environment, Actuators, Sensors

Performance Measure: a function the agent is maximizing (or minimizing)

Environment: a formal representation for world states

Actuators: actions that change the state according to a transition model

Sensors: observations that allow the agent to infer the world state
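As a quick illustration, a PEAS description can be written down as a simple record – here for the self-driving car example from earlier, with illustrative placeholder values rather than an authoritative specification:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification: the four PEAS components."""
    performance_measure: str
    environment: str
    actuators: str
    sensors: str

# Illustrative PEAS description for a self-driving car:
taxi = PEAS(
    performance_measure="safe, legal, comfortable trip; minimal travel time",
    environment="roads, other traffic, pedestrians, weather",
    actuators="steering, accelerator, brake, horn, signals",
    sensors="cameras, lidar, GPS, speedometer, odometer",
)
print(taxi.sensors)
```

Writing the four components out like this, before any code, is the whole point of PEAS: it forces you to define the task environment first and design the agent second.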

When thinking about the environment there are many different types of environments.  It is good to know what type of environment your agent will be interacting with and the types can tell you the difficulty of defining your agent altogether.

Environment Types:

Fully observable vs. Partially observable

Do the agent’s sensors give it access to the complete state of the environment?  For any given world state, are the values of all the variables known to the agent?

Deterministic vs. Stochastic

Is the next state of the environment completely determined by the current state and the agent’s action? Strategic: the environment is deterministic except for the actions of other agents.

Episodic vs. Sequential

Is the agent’s experience divided into unconnected single decisions/actions, or is it a coherent sequence of observations and actions in which the world evolves according to the transition model?

Static vs. Dynamic

Is the world changing while the agent is thinking?

Semi-dynamic: the environment does not change with the passage of time, but the agent’s performance score does.

Discrete vs. Continuous

Does the environment provide a fixed number of distinct percepts, actions, and environment states?

Are the values of the state variables discrete or continuous?

Time can also evolve in a discrete or continuous fashion

Single Agent vs. Multi Agent

Is an agent operating by itself in the environment?

Known vs. Unknown

Are the rules of the environment (transition model and rewards associated with states) known to the agent?

With the types of environments laid out they can be easy or hard:

Easy: Fully Observable, Deterministic, Episodic, Static, Discrete, Single Agent

Hard: Partially Observable, Stochastic, Sequential, Dynamic, Continuous, Multi-Agent

The environment type largely determines the agent design.

The Structure of Agents:

There are four basic types of agents, here they are in order of increasing generality:

  1. Simple Reflex Agents
  2. Reflex Agents with State
  3. Goal-based Agents
  4. Utility-based Agents

Each kind of agent program combines particular components in particular ways to generate actions.

A Simple Reflex Agent handles the simplest kind of world.  This agent embodies a set of condition-action rules and basically works as: if perception, then action.  The agent simply takes in a percept, determines which action applies, and performs that action.  The action depends on the current percept only.  This type of agent only works in a fully observable environment.
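The classic two-square vacuum world makes those condition-action rules concrete. This is a standard textbook toy, sketched here in Python:

```python
# The two-square vacuum world: the percept is (location, status),
# and the condition-action rules are a plain lookup table.
rules = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "clean"): "left",
}

def simple_reflex_agent(percept):
    """Action depends on the current percept only -- no memory, no model."""
    return rules[percept]

def run(world, steps=4):
    """Drive the agent around the two-square world for a few steps."""
    location = "A"
    for _ in range(steps):
        action = simple_reflex_agent((location, world[location]))
        if action == "suck":
            world[location] = "clean"
        else:
            location = "B" if action == "right" else "A"
    return world

print(run({"A": "dirty", "B": "dirty"}))
```

Note that the table only works because the percept tells the agent everything it needs: make the world partially observable and this design immediately breaks down.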

A Model-Based Reflex Agent works so that when it gets a percept it updates its internal state, chooses a rule to apply, and then schedules the action associated with the chosen rule.

A Goal-Based Agent is like a model-based agent, but it has goals, so it will think about the state that it is in and then, depending on its goals, take an action aimed at reaching them.

A Utility-Based Agent is the same as a goal-based agent, but it also evaluates how good the action it is about to take will be at achieving its goal.  In other words, how happy will the agent be in the state that results from its action?

Finally, there are Learning Agents.  It says above that there are four agent types, but a learning agent is a special kind of agent.  One part of the learning agent is a utility-based agent, and it is connected to a critic, a learning element, and a problem generator.  These three other parts make the learning agent able to tackle problems that are very hard.  The critic of a learning agent is just what it sounds like: it criticizes the agent’s actions with some kind of score, so the agent knows the difference between good actions and bad actions.  The problem generator is used by the learning element to introduce a small measure of exploration, because if the agent always took the highest-scoring actions, it might miss a more optimal solution simply because it never tried something that seemed unlikely but was actually better.

I hope you liked this post.  I am going to continue doing more Artificial Intelligence posts if I get the time, as I am very busy.  I hope you learned a bit about agents and environments in AI, because making this post has helped me solidify some of this knowledge in my own mind.
