Operant conditioning

Operant conditioning is an aspect of learning theory: the use of consequences to modify the occurrence and form of behavior. Operant conditioning is distinguished from Pavlovian conditioning in that operant conditioning deals with the modification of voluntary behavior through the use of consequences, while Pavlovian conditioning deals with the conditioning of behavior so that it occurs under new antecedent conditions.[1] Operant conditioning, sometimes called instrumental conditioning or instrumental learning, was first extensively studied by Edward L. Thorndike (1874-1949), who observed the behavior of cats trying to escape from home-made puzzle boxes.[2] When first constrained in the boxes, the cats took a long time to escape. With experience, ineffective responses occurred less frequently and successful responses occurred more frequently, enabling the cats to escape in less time over successive trials. In his Law of Effect, Thorndike theorized that successful responses, those producing satisfying consequences, were "stamped in" by the experience and thus occurred more frequently. Unsuccessful responses, those producing annoying consequences, were stamped out and subsequently occurred less frequently. In short, some consequences strengthened behavior and some consequences weakened it. B. F. Skinner (1904-1990) built upon Thorndike's ideas to construct a more detailed theory of operant conditioning based on reinforcement, punishment, and extinction.
Reinforcement, punishment, and extinction

Reinforcement and punishment, the core ideas of operant conditioning, are either positive (introducing a stimulus to an organism's environment following a response) or negative (removing a stimulus from an organism's environment following a response). This creates four basic consequences, with the addition of a fifth procedure known as extinction (i.e., nothing happens following a response). It is important to note that organisms are not spoken of as being reinforced, punished, or extinguished; it is the response that is reinforced, punished, or extinguished. Additionally, reinforcement, punishment, and extinction are not terms whose use is restricted to the laboratory. Naturally occurring consequences can also be said to reinforce, punish, or extinguish behavior, and they are not always delivered by people. Reinforcement is a consequence that causes a behavior to occur with greater frequency. Punishment is a consequence that causes a behavior to occur with less frequency. Extinction is the lack of any consequence following a response: when a response is inconsequential, producing neither favorable nor unfavorable consequences, it will occur with less frequency. In the four contexts of operant conditioning, the terms "positive" and "negative" are not used in their popular sense; rather, "positive" refers to addition and "negative" refers to subtraction. What is added or subtracted may be either reinforcement or punishment.
Hence "positive punishment" can be a confusing term, as it denotes the addition of punishment (such as spanking or an electric shock), a context that may seem very negative in the lay sense. The four procedures are:

- Positive reinforcement occurs when a behavior (response) is followed by a favorable stimulus (commonly seen as pleasant) that increases the frequency of that behavior. In the Skinner box experiment, a stimulus such as food or sugar solution can be delivered when the rat engages in a target behavior, such as pressing a lever.
- Negative reinforcement occurs when a behavior (response) is followed by the removal of an aversive stimulus (commonly seen as unpleasant), thereby increasing that behavior's frequency. In the Skinner box experiment, negative reinforcement can be a loud noise sounding continuously inside the rat's cage until it engages in the target behavior, such as pressing a lever, upon which the noise is removed.
- Positive punishment (also called "punishment by contingent stimulation") occurs when a behavior (response) is followed by an aversive stimulus, such as a shock or loud noise, resulting in a decrease in that behavior.
- Negative punishment, or omission training (also called "punishment by contingent withdrawal"), occurs when a behavior (response) is followed by the removal of a favorable stimulus, such as taking away a child's toy following an undesired behavior, resulting in a decrease in that behavior.

In addition: Avoidance learning is a type of learning in which a certain behavior results in the cessation of an aversive stimulus. For example, shielding one's eyes when in the sunlight (or going indoors) avoids the punishment of having light in one's eyes. Extinction occurs when a behavior (response) that had previously been reinforced is no longer effective.
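The four procedures form a 2x2 grid: whether a stimulus is added or removed, crossed with whether the behavior increases or decreases in frequency. As a quick self-check, the grid can be encoded in a few lines of Python (the function and argument names are illustrative, not terminology from the source):

```python
def classify_procedure(stimulus, behavior):
    """Map (what happens to the stimulus, what happens to the behavior)
    onto the four operant procedures: stimulus is 'added' or 'removed',
    behavior 'increases' or 'decreases' in frequency."""
    grid = {
        ("added", "increases"): "positive reinforcement",
        ("removed", "increases"): "negative reinforcement",
        ("added", "decreases"): "positive punishment",
        ("removed", "decreases"): "negative punishment (omission training)",
    }
    return grid[(stimulus, behavior)]

# A spanking that reduces a behavior adds a stimulus and decreases responding,
# so it falls in the positive-punishment cell of the grid.
```

Reading procedures off the grid this way also makes the confusing lay sense of "positive punishment" concrete: "positive" only records that something was added.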
In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever. Non-contingent reinforcement is a procedure that decreases the frequency of a behavior by both reinforcing alternative behaviors and extinguishing the undesired behavior. Since the alternative behaviors are reinforced, they increase in frequency and therefore compete for time with the undesired behavior.

Operant conditioning vs. fixed action patterns

Skinner's construct of instrumental learning is contrasted with what Nobel Prize-winning biologist Konrad Lorenz termed "fixed action patterns": reflexive, impulsive, or instinctive behaviors. These behaviors were said by Skinner and others to exist outside the parameters of operant conditioning but were considered essential to a comprehensive analysis of behavior. In dog training, particularly of working dogs, detection dogs, etc., the stimulation of these fixed action patterns tied to the dog's predatory instincts (the prey drive) is the key to producing very difficult yet consistent behaviors, and in most cases does not involve operant, classical, or any other kind of conditioning[citation needed]. While evolutionary processes shaped these fixed action patterns, the patterns themselves remained stable long enough, because of their survival function, to be shaped over the long time span necessary for evolution. According to the laws of operant conditioning, any behavior that is consistently rewarded, every single time, will extinguish at a faster rate, while intermittently reinforced behavior leads to more stable rates of responding that are relatively more resistant to extinction.
Thus, in detection dogs, any correct behavior of indicating a "find" must always be rewarded with a tug toy or a ball throw early on, for initial acquisition of the behavior. Thereafter, fading procedures, in which the rate of reinforcement is "thinned" (not every response is reinforced), are introduced, switching the dog to an intermittent schedule of reinforcement, which is more resistant to instances of non-reinforcement. Nonetheless, some trainers are now using the prey drive to train pet dogs and find that they get far better results in the dogs' responses to training than when they use only the principles of operant conditioning[citation needed], which, according to Skinner and his disciple Keller Breland (who invented clicker training), break down when strong instincts are at play.[3]

Biological correlates of operant conditioning

The first scientific studies identifying neurons that responded in ways suggesting they encode the conditioned stimulus came from work by Rusty Richardson and Mahlon deLong.[4][5] They showed that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are equally active for positive and negative reinforcers, and have been demonstrated to cause plasticity in many cortical regions.[6] Evidence also exists that dopamine is activated at similar times. The dopamine pathways encode positive reward only, not aversive reinforcement, and they project much more densely to frontal cortex regions. Cholinergic projections, in contrast, are dense even in posterior cortical regions, such as the primary visual cortex.

Factors that alter the effectiveness of consequences

How effective a consequence is at modifying a response will tend to increase or decrease according to various factors.
These factors can apply to both reinforcing and punishing consequences.

- Satiation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of stimulation has been satisfied. Inversely, the effectiveness of a consequence will increase as the individual becomes deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior.
- Immediacy: How immediately a consequence follows a response determines its effectiveness; more immediate feedback is more effective than less immediate feedback. If someone's license plate is caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them over, their speeding behavior is more likely to be affected.
- Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness upon the response is reduced. But if a consequence follows the response reliably over successive instances, its ability to modify the response is increased. If someone has a habit of getting to work late but is only occasionally reprimanded for their lateness, the reprimand will not be a very effective punishment.
- Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even multiple tickets). But if a lottery jackpot is small, the same person might not feel it worth the effort of driving out and finding a place to buy a ticket.
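As a purely illustrative toy model (nothing in the source specifies a formula), the interplay of these factors can be sketched as a score in which deprivation (the inverse of satiation), immediacy, contingency, and size combine multiplicatively, while the effort a behavior costs subtracts from the result. All names and the multiplicative form are assumptions chosen only to make the cost-benefit idea concrete:

```python
def effectiveness(deprivation, immediacy, contingency, size, effort=0.0):
    """Toy score for how strongly a consequence modifies a response.
    Each factor is scored in [0, 1]; effort acts as an opposing cost,
    and a consequence with zero on any factor has no net effect."""
    benefit = deprivation * immediacy * contingency * size
    return max(0.0, benefit - effort)

# A ticket mailed a week later (low immediacy) scores far below a
# traffic stop on the spot, with the other factors held equal:
mailed = effectiveness(deprivation=0.8, immediacy=0.1, contingency=0.9, size=0.7)
stopped = effectiveness(deprivation=0.8, immediacy=0.9, contingency=0.9, size=0.7)
```

The multiplicative form captures one intuition from the text: if any single factor collapses to zero (a fully satiated appetite, a completely unreliable consequence), the consequence loses its effect regardless of the others.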
In this example, it is also useful to note that "effort" is a punishing consequence. How these opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed or not. Most of these factors exist for biological reasons. The biological purpose of the principle of satiation is to maintain the organism's homeostasis. When an organism has been deprived of sugar, for example, the effectiveness of the taste of sugar as a reinforcer is high. However, as the organism reaches or exceeds its optimum blood-sugar level, the taste of sugar becomes less effective, perhaps even aversive. The principles of immediacy and contingency exist for neurochemical reasons. When an organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated. This network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons."[7] This makes recently activated synapses able to increase their sensitivity to efferent signals, hence increasing the probability of occurrence for the recent responses that preceded the reinforcement. These responses are, statistically, the most likely to have been the behavior responsible for successfully achieving reinforcement. But when the application of reinforcement is either less immediate or less contingent (less consistent), the ability of dopamine to act upon the appropriate synapses is reduced.

Extinction-induced variability

While extinction, when implemented consistently over time, results in the eventual decrease of the undesired behavior, in the near term the subject might exhibit what is called an extinction burst. An extinction burst will often occur when the extinction procedure has just begun. It consists of a sudden and temporary increase in the response's frequency, followed by the eventual decline and extinction of the behavior targeted for elimination.
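The "global reinforcement signal" described above is often modeled computationally with eligibility traces: recently active synapses keep a decaying trace, and a later reward pulse strengthens each synapse in proportion to its remaining trace. A minimal sketch under that standard modeling assumption (the learning rate, decay constant, and names are illustrative, not from the source):

```python
def update_weights(weights, traces, reward, learning_rate=0.1, decay=0.5):
    """Three-factor update: a global reward pulse strengthens each synapse
    in proportion to its (decaying) eligibility trace, so synapses that
    were active just before the reward change the most."""
    new_weights = [w + learning_rate * reward * t for w, t in zip(weights, traces)]
    new_traces = [decay * t for t in traces]  # eligibility fades as time passes
    return new_weights, new_traces
```

This mirrors the immediacy principle: a synapse active right before the reward (trace near 1.0) is strengthened more than one whose trace has already decayed, and a delayed or inconsistent reward finds fewer eligible synapses to act on.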
Take, as an example, a pigeon that has been reinforced to peck an electronic button. During its training history, every time the pigeon pecked the button, it received a small amount of bird seed as a reinforcer. So, whenever the bird is hungry, it pecks the button to receive food. However, if the button were turned off, the hungry pigeon would first try pecking the button just as it has in the past. When no food is forthcoming, the bird will likely try again... and again, and again. After a period of frantic activity in which its pecking yields no result, the pigeon's pecking will decrease in frequency. The evolutionary advantage of this extinction burst is clear. In a natural environment, an animal that persists in a learned behavior, despite its not producing immediate reinforcement, might still have a chance of producing reinforcing consequences if it tries again. This animal would be at an advantage over another animal that gives up too easily. Extinction-induced variability serves a similar adaptive role. When extinction begins, an initial increase in the response rate is not the only thing that can happen. Operant behavior differs from reflexes in that its response topography (the form of the response) is subject to slight variations from one performance to another. These slight variations can include small differences in the specific motions involved, differences in the amount of force applied, and small changes in the timing of the response. The subject's history of reinforcement is what keeps those slight variations stable, by maintaining successful variations rather than less successful ones. Imagine a bell curve. The horizontal axis would represent the different variations possible for a given behavior. The vertical axis would represent the response's probability in a given situation.
Response variants in the middle of the bell curve, at its highest point, are the most likely, because those responses, according to the organism's experience, have been the most effective at producing reinforcement. The more extreme forms of the behavior lie at the lower ends of the curve, to the left and to the right of the peak, where their probability of expression is low. A simple example would be a person inside a room opening a door to exit. The response would be the opening of the door, and the reinforcer would be the freedom to exit. Each time that person opens that same door, they do not open it in exactly the same way. Rather, each time they open the door a little differently: sometimes with less force, sometimes with more; sometimes with one hand, sometimes with the other; sometimes more quickly, sometimes more slowly. Because of the physical properties of the door and its handle, there is a certain range of successful responses which are reinforced. Now imagine in our example that the subject tries to open the door and it won't budge. This is when extinction-induced variability occurs. The bell curve of probable responses begins to broaden, with more extreme forms of behavior becoming more likely. The person might now try opening the door with extra force, repeatedly twist the knob, try to hit the door with their shoulder, maybe even call for help or climb out a window. This is how extinction causes variability in behavior, in the hope that new variations might be successful. For this reason, extinction-induced variability is an important part of the operant procedure of shaping.

Avoidance conditioning

Main article: Avoidance conditioning

One of the practical aspects of operant conditioning in relation to animal training is the use of shaping (reinforcing successive approximations and not reinforcing behavior past approximating), as well as chaining.
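The extinction-induced broadening of the response bell curve described earlier can be simulated directly: draw response variants from a normal distribution around the habitual form, and widen the spread once reinforcement stops. All numbers here (the habitual response value, spreads, sample sizes, seed) are chosen purely for illustration:

```python
import random
import statistics

def sample_responses(n, spread, seed=0):
    """Draw n response variants (e.g. door-opening force) from a bell curve
    centred on the habitual response; `spread` widens under extinction."""
    rng = random.Random(seed)
    return [rng.gauss(10.0, spread) for _ in range(n)]

# While reinforced, variants cluster tightly around the habitual form.
reinforced = sample_responses(1000, spread=1.0)
# Under extinction the curve broadens: extreme variants become more likely.
extinction = sample_responses(1000, spread=3.0)
```

Comparing the two samples, the extinction condition produces a visibly wider range of responses, which is the statistical picture behind shaping: the broadened distribution occasionally emits a new variant that can then be selectively reinforced.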
See also

- Adjunctive behavior
- Animal training: a task that typically requires operant conditioning
- Behavior modification
- Behaviorism: a theory that behavior is explained by external events; the theory under which operant conditioning falls
- Classical conditioning
- Cognition: a theory that behavior may be explained by invoking internal mental representations and operations; in direct contrast to behaviorism
- Conditioned emotional responses
- Conditioned responses
- Conditioned stimulus
- Delayed alternation
- Discrimination learning
- Educational technology
- Escape conditioning
- Fading (conditioning)
- Noncontingent behavior
- Omission training
- Polydipsia
- Reinforcement
- Self-stimulation
- Time out
- Unconditioned stimulus

References & Bibliography

Key texts

Books

Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Acton, MA: Copley.
Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B. F. (1957). Verbal behavior. Englewood Cliffs, NJ: Prentice Hall.

Papers

Thorndike, E. L. (1901). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement, 2, 1-109.

Additional material

Papers

McSweeney, F. K., Hinson, J. M., & Cannon, C. B. (1996). Sensitization-habituation may occur during operant conditioning. Psychological Bulletin, 120, 256-271.
Lukowiak, K., Adatia, N., Krygier, D., & Syed, N. (2000). Operant conditioning in Lymnaea: Evidence for intermediate- and long-term memory. Learning & Memory, 7(3), 140-150.
Nargeot, R., Baxter, D. A., & Byrne, J. H. (1999). In vitro analog of operant conditioning in Aplysia. II. Modifications of the functional dynamics of an identified neuron contribute to motor pattern selection. The Journal of Neuroscience, 19(6), 2261-2272.
Weiss, E., & Wilson, S. (2003).
The use of classical and operant conditioning in training Aldabra tortoises (Geochelone gigantea) for venipuncture and other husbandry issues. Journal of Applied Animal Welfare Science, 6(1), 33-38.

External links

Journal of the Experimental Analysis of Behavior
Journal of Applied Behavior Analysis
This page uses Creative Commons licensed content from Wikipedia (see authors).

1. The Principles of Learning and Behavior, Fifth Edition, ed. Michael Domjan.
2. Thorndike, E. L. (1901). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement, 2, 1-109.
3. Breland, Keller, & Breland, Marian (1961). The misbehavior of organisms. American Psychologist.
4. J. Neurophysiol. 34:414-27, 1971.
5. Advances Exp. Medicine Biol. 295:233-53, 1991.
6. PNAS 93:11219-24, 1996; Science 279:1714-8, 1998.
7. Schultz, Wolfram (1998). Predictive reward signal of dopamine neurons. The Journal of Neurophysiology, 80(1), 1-27.
