Theory of computation


The theory of computation, also called the theory of informatics, is a body of rational, systematized knowledge that focuses on the study of processes in the abstract, with the aim of reproducing them with the help of formal systems; that is, through symbols and logical rules. The theory of computation makes it possible to model processes within the limitations of devices that process information and perform calculations, such as the computer. To do this, it relies on automata theory in order to simulate and standardize these processes, as well as to formalize problems and give them solutions.

Main subbranches

Automata Theory

This theory provides mathematical models that formalize the concept of computer or algorithm in a sufficiently simplified and general way so that its capabilities and limitations can be analyzed. Some of these models play a central role in various computer science applications, including word processing, compilers, hardware design, and artificial intelligence.

There are many other types of automata, such as random access machines, cellular automata, abacus machines, and abstract state machines; however, in every case it has been shown that these models are no more general than the Turing machine, since the Turing machine can simulate each of them. This leads to the Turing machine being regarded as the universal model of computation.
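As a small taste of the kind of model automata theory studies, the following is a minimal sketch, not taken from the original text, of a deterministic finite automaton; the state names and the helper function are illustrative choices.

```python
# Minimal sketch of a deterministic finite automaton (DFA).
# This automaton accepts binary strings containing an even number of 1s;
# the state names and run_dfa helper are illustrative, not from any library.

def run_dfa(string):
    # States: "even" (accepting) and "odd"; the start state is "even".
    transitions = {
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    }
    state = "even"
    for symbol in string:
        state = transitions[(state, symbol)]
    return state == "even"  # accept iff we end in the accepting state

print(run_dfa("1011"))  # False: three 1s
print(run_dfa("1001"))  # True: two 1s
```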

Theory of Computability

This theory explores the limits of what can be solved by algorithms. Much of computer science is dedicated to solving problems algorithmically, so the discovery of problems that are impossible to solve was a major surprise. Computability theory is useful because it keeps us from trying to solve such problems algorithmically, saving time and effort.

Problems are classified in this theory according to their degree of impossibility:

  • The computable problems are those for which there is an algorithm that always finds a solution when one exists and is also able to recognize the cases that have no solution. They are also known as decidable, solvable, or recursive problems.
  • The semicomputable problems are those for which there is an algorithm capable of finding a solution when one exists, but no algorithm able to determine when a solution does not exist (in which case the algorithm searching for the solution would enter an infinite loop). The classic example par excellence is the halting problem (see the sketch after this list). These problems are also known as listable, recursively enumerable, or recognizable, because if all possible cases of the problem are listed, it is possible to recognize those that do have a solution.
  • The incomputable problems are those for which no algorithm exists that can solve them, regardless of whether or not they have a solution. The classic example par excellence is the problem of logical implication, that is, determining when a logical proposition is a theorem; for this problem there is no algorithm that can, in every case, decide whether a proposition or its negation is a theorem.
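As an informal illustration of why the halting problem mentioned above is only semicomputable, the following Python sketch outlines the classical diagonal argument; the function halts is a hypothetical oracle assumed only for the sake of contradiction and cannot actually be implemented.

```python
# Sketch of the diagonal argument for the undecidability of the halting problem.
# `halts` is a hypothetical oracle assumed (for contradiction) to decide whether
# calling program(arg) would eventually stop. No such total, always-correct
# function can exist.

def halts(program, arg):
    raise NotImplementedError("assumed oracle; cannot actually be implemented")

def paradox(program):
    # If the oracle says program(program) halts, loop forever; otherwise stop.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether paradox(paradox) halts leads to a contradiction either way,
# so the assumed oracle `halts` cannot exist.
```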

There is a more general version of this classification, in which incomputable problems are subdivided in turn into problems that are harder than others. The main tool for obtaining these classifications is the concept of reduction: a problem A reduces to a problem B if, assuming that problem B is solved, it is possible to solve problem A; this is denoted A ≤ₜ B, and informally means that problem A is no harder to solve than problem B. For example, assuming a person knows how to add, it is very easy to teach them to multiply by performing repeated additions, so multiplying reduces to adding.
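The addition/multiplication example above can be made concrete with a short sketch; the function names below are illustrative, not part of the original text.

```python
# Reduction sketch: multiplication reduces to addition.
# `add` plays the role of the already-solved problem B; `multiply` solves
# problem A using only calls to `add`, which is what A <=_t B expresses informally.

def add(x, y):
    return x + y

def multiply(a, b):
    # Multiply a by a non-negative integer b using repeated addition.
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

print(multiply(6, 7))  # 42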

Computational complexity theory

Even if a problem is computable, it may not be solvable in practice if it requires a large amount of memory or execution time. Computational complexity theory studies the memory, time, and other computational resources needed to solve problems; in this way it is possible to explain why some problems are harder to solve than others. One of the greatest achievements of this branch is a classification of problems, similar in spirit to the periodic table, according to their difficulty. In this classification, problems are separated into complexity classes.
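As a small illustration, not drawn from the original text, of how resource requirements separate otherwise computable problems, the two hypothetical functions below compute the same value, yet the first takes exponential time while the second runs in linear time.

```python
# Illustration of how resource usage, not computability, can make a problem impractical.
# Both functions compute the n-th Fibonacci number; their running times differ drastically.

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_linear(n):
    # Linear time: keeps only the last two values.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_linear(40))     # fast
# print(fib_naive(40))    # same answer, but noticeably slower
```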

This theory has applications in almost every area of knowledge where one wants to solve a problem computationally, because researchers want not just any method for solving a problem, but the fastest one. Computational complexity theory also has applications in areas such as cryptography, where cracking a secret code is expected to be a very hard problem unless one has the password, in which case the problem becomes easy.

Other subbranches

  • Models of computation studies the abstractions of carrying out a computation. Here fall the classic models of automata theory as well as other models such as recursive functions, the lambda calculus, and even programming languages (see the sketch after this list).
  • Algorithmic information theory focuses on the complexity of describing a data sequence (string) algorithmically; here complexity is measured by the length of its shortest description.
  • Specification and formal verification seeks methodologies to guarantee that a problem is correctly modeled, and formal systems to validate the correctness of the algorithmic solution.
  • Computational learning theory looks for algorithms that allow computers to modify their behavior autonomously based on empirical data, and specifically on examples and counterexamples. This type of learning is called supervised learning. As in computational complexity theory, in this theory functions are classified by how difficult they are to learn.
  • Type theory seeks to classify statements according to the types of values they compute, using tools from formal language theory.
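As one concrete taste of the lambda calculus mentioned in the first item of this list, the following sketch encodes natural numbers and addition as pure functions in Python; the encoding is the standard Church-numeral one, while the helper names are illustrative.

```python
# Church numerals: representing natural numbers purely as functions,
# in the spirit of the lambda calculus. Helper names are illustrative.

zero = lambda f: lambda x: x                        # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))     # apply f one more time
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    # Convert a Church numeral to a regular integer by counting applications of f.
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```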

History

The theory of computation proper begins in the early 20th century, shortly before electronic computers were invented. At that time several mathematicians wondered whether there was a universal method for solving all mathematical problems. To do this, they had to develop a precise notion of a problem-solving method, that is, a formal definition of algorithm.

Some of these formal models were proposed by pioneers such as Alonzo Church (lambda calculus), Kurt Gödel (recursive functions), and Alan Turing (Turing machine). These models have been shown to be equivalent in the sense that they can simulate the same algorithms, albeit in different ways. Among the most recent models of computation are programming languages, which have also been shown to be equivalent to the earlier models; this is strong evidence for the Church-Turing conjecture, that every conceivable algorithm can be simulated on a Turing machine, or equivalently, using recursive functions. In 2007 Nachum Dershowitz and Yuri Gurevich published a proof of this conjecture based on a certain axiomatization of algorithms.

One of the first results of this theory was the existence of problems that are impossible to solve algorithmically, the halting problem being the most famous of them. For these problems there is not, and never will be, any algorithm that can solve them, regardless of the amount of time or memory available in a computer. Moreover, with the advent of modern computers it became clear that some problems that are solvable in theory are impossible to solve in practice, because their solutions require unrealistic amounts of time or memory.
