In this paper, we use a pseudo-algorithmic procedure for assessing AI-generated text. We apply the Comprehensive Assessment Procedure for Natural Argumentation (CAPNA) to evaluate the arguments produced by an Artificial Intelligence text generator, GPT-3, in an opinion piece written for the Guardian newspaper. The CAPNA examines instances of argumentation in three aspects: their Process, Reasoning and Expression. Initial Analysis is conducted using the Argument Type Identification Procedure (ATIP) to establish, firstly, that an argument is present and, secondly, its specific type within the argument classification framework of the Periodic Table of Arguments (PTA). Procedural Questions are then used to test the acceptability of the argument in each of the three aspects. The analysis shows that while the arguments put forward by the AI text generator vary in type and follow familiar patterns of human reasoning, they contain obvious weaknesses. From this we conclude that the automated generation of persuasive, well-reasoned argumentation is a far more difficult task than the generation of meaningful language, and that AI systems producing arguments will require a method of checking the plausibility of their own output if they are to be persuasive.