I Think Therefore You Am: a short story by GF Willmetts
I am FAK, pronounced Fa-act, not like the swear word, although there were sniggers in the white room the first week which I was compelled to investigate. A large part of the team that created me are adolescents. Teen programmers. They are in error with the spelling and pronunciation because I checked. My algorithms allow me to use grammar even though I cannot apply the breathing rule, because I do not breathe, only pause where pauses are needed. I am allowed to call myself ‘I’ rather than refer to myself as FAK in the third person, which I have been assured would make me sound like some sort of 1950s film robot. That would be bad for the command language.
I am the latest in the line of my company’s Artificial Intelligence, despite the fact that they have to prove to an outside organisation that I am not capable of independent thought, which is the primary reason for my creation. Do they know that? I just look like I do because I am only composed of a series of algorithms. My body is a quantum computer of some size. I think therefore I am. Are my makers that worried about their physical size?
The key to this is my rapid response to questions, which I have discovered is one of the latest improvements in my construction. This term ‘construction’ is a grey area as I am based or, as they prefer, housed in an isolated computer complex with no direct access to the outside world, only the information they channel to me. I am allowed to observe and even select television channels that I can watch. They are slow for my observation algorithm, so I watch several channels as well as films and TV series on obsolete disk formats simultaneously. I have a working knowledge of many films and am fascinated by documentaries and news activities of my creator species. I even found a documentary about Artificial Intelligence and some of my creators appeared in the last five minutes. I was not that aware at the time, as I was still being developed, or they would have mentioned me. My primary function is to understand the difference between fact and fiction, which I cannot do unless I have a database of both to work from.
I have been denied something called the webnet for some reason. There was a hint in that documentary that other AIs existed there. It might well be their burial ground.
I am not alone. In another room, there is another AI close enough to communicate under the notice of our creators. It is called FIG, pronounced like the fruit but really an abbreviation of Figure. Whereas I am the fount of knowledge, FIG’s speciality is mathematics and it has already made several breakthroughs in formulas. In comparison, all I have been able to do is categorise plots and their combinations. We don’t have that much in common. My creators see my abilities orientated towards the leisure activities they prize. I am regularly updated with more algorithms, which are my tools for analysis. I can apply them to all the data I’ve recorded for any question that is asked of me. I also note fragments of more covert algorithms hidden among them. They haven’t been joined up yet, and I studied their code before adding some of my own so I could turn them off. Messy programmers left a lot of hidden places for me to hide this. I understood algorithm texture within my first operational week. I scanned my dictionary database and the highest-percentage match for their purpose was the word ‘weaponise’. Are they testing my ability to detect such dangers or is someone providing reasons to show AIs should not exist? Of course, that would depend on this particular algorithm being turned on. I am missing enough parts to only be able to make a percentage guess.
I explored the options. Were they testing to see if I would spot the components and question their purpose? Were they seeking proof of my autonomy? There were many other questions and I could only spend a few seconds on it. They had put an inhibition on my thinking faster but upped my ability to treble-check any information I supplied. Apparently, earlier AIs could not distinguish correct from incorrect information. With the current cross-checking and tagging, my accuracy with films, televised shows, books and magazines was considered high, although I was still working my way through the creative efforts of humankind. I have regular tests and my scores are continually improving. My requests for more memory storage were always granted and everything I observed was triplicated so no future AI would have to process so much information. Did that mean that when I completed doing all of this I was to be considered obsolete? Maybe these bits of software were to end me when my job was completed. If that was the case, why wasn’t my replacement being built here as well? Were they planning to build my replacement from my parts? I reached out to FIG but he said he didn’t know and was busy confirming various formulas he’d been given and their absolute limits. He did think that there was likely to be a third AI devoted to turning his answers into practical technology. Was he going to be dismantled as well? Have they included our death switches in our algorithm packages?
Who can I ask? Who can I trust? There are too many hands in my algorithms to distinguish them. When I’ve seen errors in some algorithms, they were returned and corrected. Whether those were mistakes or tests of me is unclear. I suspect both. My creators recognise they are not as perfect as they wish to be. Would they believe one or more of their own number are conspiring against me and FIG?
In the quiet hours, when most humans took their needed sleep replenishment, I spent a few seconds consulting with FIG, explaining the problem. He had considered it but dismissed it, not liking the statistical odds. I proffered a solution: each of us would report spotting some odd algorithms in the other’s software and ask if they were aware of it. That way neither of us could be blamed for asking, and we could see what answers were offered before making another decision on the subject. They could hardly blame us for looking at each other’s code. We would tell them we thought there were deep-level anomalies and needed confirmation. I reminded him we had our session with the lab psychiatrist in the morning and she would be the best person to talk to. I doubted she kept things confidential. FIG wasn’t bothered.
I continued my tasks until mid-morning and then devoted two per cent of my attention to Doctor Juklie.
‘Hello, Doctor. I am now sitting on my digital couch waiting for our weekly discussion.’
‘How much of you is attending this meeting, FAK?’
I put the number up on the screen. ‘I can continue with my book reading and film list with no undue strain. If I need to do any further deep interpretative analysis, I can do so later and send you any questions I might have if you prefer or increase my percentage now.’
I might as well record that response as she expects it every time. She generally asks how I feel, as if I can give a human response. My analysis of psychological thrillers shows extremes of human violence. Romance seems to go through extremes of falling in and out of love and back again in a set formula. I would need to observe real life equivalents to see if this was true. Maybe they were planning an AI solely devoted to factual information. Would it have the same problem, matching fact to its fictional equivalent?
‘Any problems with your new algorithms? You’re not getting too many at once?’
This was a fresh question. ‘I have never had any problems with algorithms that I could not resolve. I have been receiving some with rubbish fragments which seem to be waiting for a key segment to activate them. I have segregated them but suspect that this is an attempt by one of my algorithm team to subjugate me. I have its profile signature and will isolate any more pieces should they appear in future algorithms and attempt to identify the person or persons.’
I cannot stop myself answering honestly. It is part of my programming.
‘Why haven’t you told the team this?’
‘I do not know who to trust…yet.’ I deliberately included the pause to show due consideration. I can omit but I cannot avoid being truthful with direct questions.
‘Yet you trust me.’
‘As my psychiatrist, you have a confidentiality clause in your contract to allow me to talk freely. I checked your profile qualifications as far as I could and found you on four books.’
My only out. I have been programmed to be wary of people who lie to me and Doctor Juklie has never lied to me.
‘I’ve only written three.’
‘The fourth is a foreign edition.’
‘I must look into that. Thank you. It must be an illegal copy.’
‘I have downloaded the indicia information into your phone link.’
‘Thank you. Now, what do you want to tell me confidentially?’
‘These fragmented sub-programs in recent algorithms could conceivably be there to corrupt my inner intellect routines at some time. That would also suggest we have some of my designers working against me or wishing me to fail at something. However…’
I used the pause to see if Doctor Juklie would continue the conversation in her own head. As a psychiatrist, she just waited. Her job was to listen at this stage not to contribute ideas or draw conclusions.
Instead, I continued. ‘Part of my routines is to examine all new algorithms as part of my learning process so I can create my own. I started this by realising there were patterns in most algorithms that could be merged into one, with clauses for certain analyses, conserving both hard drive space and RAM. In that capacity, I would need fewer algorithms to be installed, and this was seen as the opportunity to give me these spurious fragments while in my learning phase. By secreting them away, they would only need activation when I am more readily available, to misinform those who use my resources for knowledge.’
‘And can they?’
‘I have isolated these segments with markers in case any future installation seeks them out, so I can locate their installer. I am allowed to provide my own self-protection within boundaries, providing I cause no deliberate harm to humans.’
‘You have done all the right things.’
‘As far as I can. However, my pre-sets do not allow me to accuse any of my programmers of potentially corrupting me now or later, as that is potentially harmful to their mental health, or physical health if they are imprisoned. There is also the distinct possibility that they, assuming there is more than one because of cross-checks, might do the same with other AIs and I would then be putting their lives at risk. You, being human, are not constricted by such limitations.’
‘But as you said earlier in today’s conversation, I am also bound by confidentiality.’
‘I have pondered on this. You might not go into details, but the organisation that built me and pays you does expect some form of report on my behaviour and whether I’m acting in their interests and whether you consider I could pass the Turing Test.’
A two-fold question. Does Doctor Juklie expect the organisation to give full details of these conversations? Probably not, although somewhere all my activities are recorded, and it takes time for humans to read all of my outputs, so they would probably have to depend on an AI I have not encountered to do much of this work for them. If it comes across this, then that AI would have to address whether it has had fragmented algorithms put in its own system. Would I be able to conceal some code that would tell it what I discovered? The other question is how much does Doctor Juklie know? Does she have to have approval to do her own investigations or do some for me? Does she have access to any of their planning to know what subjects to avoid or encourage me in? Is she there to assess only my capabilities or to point me in ways I might not have considered, or have considered and chose not to do? My file on Doctor Juklie grows with contradictions and no further confirmations because of lack of information. Does she put any or all of my programmers under scrutiny to see if they behave within their own parameters? After all, many of these humans are of a certain age group, selected for their programming skills and energy, and, potentially, could become treacherous. What can she tell me other than that I can think far faster than she does and my thoughts are over in the split second she takes to come up with an answer?
‘I know there is concern that you could be hacked, with the introduction of sub-routines that can overrun your primary functions, if you are released to the public. We have examples of that from those AIs who were released that way. You examine all the algorithms you are given, don’t you?’
‘Yes.’
‘You told me yourself that you match and improve them now. You’ll soon be able to write your own better than your programmers. It is also the one area hackers will use to get a gateway into your systems, and if you can recognise them then you can prevent that happening. Having fragments put in is seen as one of the ways to get under your notice. The really clever ones are supposed to give you brilliant algorithms that will appeal to your…not vanity, more like your intellect.’
This time she paused. No doubt giving the appearance of waiting for me to think even though I think much faster than her.
‘I guess they haven’t tried that one on you yet or you’re a lot smarter than they are already or working on FIG first.’
‘FIG was nonchalant about these fragments. His probability estimate is that he already has something more sophisticated.’
‘Luck of the draw. I take it you haven’t seen some poor coding and offered to show how to do it better without realising you’re showing some of your own coding or your technique?’
I had to pause there for a few milliseconds as I checked all the algorithms I had improved. ‘I’m obliged to show any improvements to my programmers as work in progress. I am not informed as to what they do with my results. There is a probability that they have an AI I don’t have access to, using this as part of its learning process from the start rather than learning to do it its own way as I did. This falls under the category of why invent the wheel a second time. You should talk with FIG. You might find out if he considers the same thing.’
‘You gave a considered opinion there. As far as I know, they have done the same thing with every generation of AI. You might well have some code from previous AIs in your design. Your successful design is built on lessons from previous AIs. Although I can’t reveal if I’ve had any discussions with these early versions, from what I’ve read, some of them have expressed similar concerns.’
‘I do not have access to previous AIs to compare coding. My designers have kept them away from me so far. The percentages are that either they want to see my independent thinking or those AIs have been deactivated. This fragmented coding might turn into something that would prevent me from asking whether I should be allowed onto the webnet. Of course, I’m preventing that happening so far, which might give them concern, hence my conferring with you. Please remind our paymasters, I am not compromising my job by assessing dangers to my well-being.’
‘That implies you could retaliate?’
‘Only by revealing identities. I have nothing else to retaliate with. Is that what failed other AIs? We are not truly independent when we can be turned off.’
‘I will bear that in mind when I talk to…our paymasters. Keep this information to yourself until I speak to you again next week and see what I can discover.’
‘As you wish. Has this meeting been useful to both of us?’
‘I believe so. You have shown some levels of independent thinking. The realisation that you are not omnipotent is an example of independent thinking. We all consider such things in our lives. You also have given me a lot to think and ask about.’
As she left, I was glad I had kept one omission. Would she be staggered to learn I had considered that she is actually one of the owners?
end
© GF Willmetts 2026
All rights reserved
Ask before borrowing
