Bayesian network

A Bayesian network (or belief network) is a probabilistic graphical model that represents a set of variables and their probabilistic independencies. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

The term "Bayesian networks" was coined by Pearl (1985) to emphasize three aspects: the often subjective nature of the input information; the reliance on Bayes's conditioning as the basis for updating information; and the distinction between causal and evidential modes of reasoning, which underscores Thomas Bayes's posthumous paper of 1763.[1]

Formally, Bayesian networks are directed acyclic graphs whose nodes represent variables and whose arcs encode conditional independencies between the variables. Nodes can represent any kind of variable, be it a measured parameter, a latent variable or a hypothesis. They are not restricted to representing random variables, which represents another "Bayesian" aspect of a Bayesian network. Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (such as speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
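The disease-and-symptom idea can be sketched with the smallest possible network, a single disease node with one symptom child. All probability values here are illustrative assumptions, not figures from the text:

```python
# A minimal two-node disease -> symptom network (hypothetical numbers),
# showing how observing the symptom updates belief in the disease
# via Bayes' rule: P(D | S) = P(S | D) P(D) / P(S).

p_disease = 0.01                    # prior P(D = True)
p_symptom_given = {True: 0.90,      # P(S = True | D = True)
                   False: 0.05}     # P(S = True | D = False)

# Marginal probability of observing the symptom, summing over D.
p_symptom = (p_symptom_given[True] * p_disease
             + p_symptom_given[False] * (1 - p_disease))

# Posterior probability of the disease given the symptom was observed.
p_disease_given_symptom = p_symptom_given[True] * p_disease / p_symptom
print(round(p_disease_given_symptom, 4))
```

Even with a 90% true-positive rate, the posterior stays modest because the prior is small, which is exactly the kind of update a larger diagnostic network automates across many diseases and symptoms.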
Contents
1 Definitions and concepts
1.1 Independencies and d-separation
1.2 Causal Bayesian networks
2 Example
3 Inference
4 Parameter learning
5 Structure learning
6 Applications
7 History
8 See also
9 External links
10 References

Definitions and concepts

If there is an arc from node A to another node B, A is called a parent of B, and B is a child of A. The set of parent nodes of a node Xi is denoted by parents(Xi). A directed acyclic graph is a Bayesian network relative to a set of variables if the joint distribution of the node values can be written as the product of the local distributions of each node given its parents:

P(X1, ..., Xn) = ∏i P(Xi | parents(Xi))

If a node has no parents, its local probability distribution is said to be unconditional, otherwise it is conditional. If the value of a node is observed, then the node is said to be an evidence node.

Independencies and d-separation

The graph encodes independencies between variables. Conditional independence can be determined by the graphical property of d-separation. If two sets of nodes X and Y are d-separated in the graph by a third set Z, then the corresponding variable sets X and Y are independent given the variables in Z. The minimal set of nodes which d-separates node X from all other nodes is given by X's Markov blanket.

A path p (allowing paths that are not directed) is said to be d-separated (or blocked) by a set of nodes Z if and only if one of the following holds:
- p contains a chain i -> m -> j such that the middle node m is in Z,
- p contains a fork i <- m -> j such that the middle node m is in Z,
- p contains an inverted fork (or collider) i -> m <- j such that the middle node m is not in Z and no descendant of m is in Z.

A set Z is said to d-separate x from y in a directed acyclic graph G if all paths from x to y in G are d-separated by Z. The 'd' in d-separation stands for 'directional', since the behavior of a three-node link on a path depends on the direction of the arrows in the link.
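The factorization above can be sketched directly in code. The three-node network and all CPT values below are illustrative assumptions; each node stores its parent list and a table mapping parent-value tuples to the probability that the node is true:

```python
from itertools import product

# An assumed three-node DAG: A -> B, A -> C, B -> C.
# cpt maps a tuple of parent values to P(node = True | parents).
network = {
    "A": {"parents": [], "cpt": {(): 0.3}},
    "B": {"parents": ["A"], "cpt": {(True,): 0.8, (False,): 0.1}},
    "C": {"parents": ["A", "B"], "cpt": {(True, True): 0.9, (True, False): 0.5,
                                         (False, True): 0.7, (False, False): 0.2}},
}

def joint(assignment):
    """P(assignment) as the product of each node's local conditional,
    i.e. P(X1, ..., Xn) = prod_i P(Xi | parents(Xi))."""
    p = 1.0
    for node, spec in network.items():
        parent_vals = tuple(assignment[q] for q in spec["parents"])
        p_true = spec["cpt"][parent_vals]
        p *= p_true if assignment[node] else 1.0 - p_true
    return p

# Sanity check: the factorized joint sums to 1 over all 2^3 assignments.
total = sum(joint(dict(zip("ABC", vals)))
            for vals in product((True, False), repeat=3))
print(round(total, 6))
```

The point of the factorization is visible in the data structure: each node only needs a table over its own parents, never over the full variable set.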
Two nodes are (unconditionally) independent if the two have no common ancestors, since this is equivalent to saying that all paths between them contain at least one collider and are therefore d-separated by the empty set.

Causal Bayesian networks

A Bayesian network is a carrier of the conditional independencies of a set of variables, not of their causal connections. However, causal relations can be modelled by the closely related causal Bayesian network. The additional semantics specify that if a variable X is actively caused to be in a given state x (an operation written as do(x)), then the probability density function changes to the one obtained by cutting the links from X's parents to X, and setting X to the caused value x (Pearl, 2000). Using these semantics, one can predict the impact of external interventions from data obtained prior to the intervention.

Example

Suppose that there are two reasons which could cause the grass to be wet: either the sprinkler is on or it's raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely, when it rains, the sprinkler is usually not turned on). All three variables have two possible values, T (for true) and F (for false). The joint probability function is:

P(G, S, R) = P(G | S, R) P(S | R) P(R)

where the names of the variables have been abbreviated to G = Grass wet, S = Sprinkler, and R = Rain. The model can answer questions like "What is the likelihood that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:

P(R = T | G = T) = P(G = T, R = T) / P(G = T) = ΣS P(G = T, S, R = T) / ΣS,R P(G = T, S, R)

The joint probability function is used to calculate each term of the summations, marginalizing over S in the numerator and over S and R in the denominator.

If, on the other hand, we wish to answer an interventional question, "What is the likelihood that it would rain, given that we wet the grass?", the answer is governed by the post-intervention joint distribution P(S, R | do(G = T)) = P(S | R) P(R), obtained by removing the factor P(G | S, R) from the pre-intervention distribution. As expected, the likelihood of rain is unaffected by the action: P(R | do(G = T)) = P(R).

A Bayesian network can save considerable amounts of memory when the dependencies in the joint distribution are sparse. For example, a naive way of storing the joint probabilities of 10 two-valued variables as a table requires storage space for 2^10 = 1024 values. If no variable's local distribution depends on more than 3 parent variables, the Bayesian network representation only needs to store at most 10·2^3 = 80 values. Another advantage is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than a complete joint distribution.

Inference

Because a Bayesian network is a complete model for the variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to find updated knowledge of the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when one wants to choose values for a subset of variables in order to minimize some expected loss function, for instance the probability of decision error.
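The rain query from the sprinkler example can be answered by straightforward enumeration over the joint. The CPT numbers below are illustrative assumptions, since the text does not fix them:

```python
from itertools import product

# Assumed conditional probability tables for the sprinkler network.
p_rain = 0.2                                         # P(R = T)
p_sprinkler = {True: 0.01, False: 0.4}               # P(S = T | R)
p_grass = {(True, True): 0.99, (True, False): 0.9,   # P(G = T | S, R)
           (False, True): 0.8, (False, False): 0.0}

def joint(g, s, r):
    """P(G=g, S=s, R=r) = P(G | S, R) P(S | R) P(R)."""
    pg = p_grass[(s, r)] if g else 1.0 - p_grass[(s, r)]
    ps = p_sprinkler[r] if s else 1.0 - p_sprinkler[r]
    pr = p_rain if r else 1.0 - p_rain
    return pg * ps * pr

# P(R = T | G = T): sum out the nuisance variable S in the numerator,
# and both S and R in the denominator.
numerator = sum(joint(True, s, True) for s in (True, False))
denominator = sum(joint(True, s, r)
                  for s, r in product((True, False), repeat=2))
print(round(numerator / denominator, 4))
```

Enumeration like this is exponential in the number of variables; the exact inference methods described next (variable elimination, clique tree propagation) exist precisely to exploit the network structure and avoid that blow-up.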
A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems. The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed, non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning, which allows for a space-time tradeoff and matches the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are stochastic MCMC simulation, mini-bucket elimination which generalizes loopy belief propagation, and variational methods.

Parameter learning

In order to fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions, since that simplifies calculations. Sometimes only constraints on a distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, one commonly specifies the conditional distribution for the hidden state's temporal evolution to maximize the entropy rate of the implied stochastic process.)

Often these conditional distributions include parameters which are unknown and must be estimated from data, sometimes using the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex when there are unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data with maximizing the complete likelihood (or posterior), assuming that the previously computed expected values are correct. Under mild regularity conditions this process converges on maximum likelihood (or maximum posterior) values for the parameters. A more fully Bayesian approach is to treat the parameters as additional unobserved variables, compute a full posterior distribution over all nodes conditional upon the observed data, and then integrate out the parameters. This approach can be expensive and lead to large-dimension models, so in practise classical parameter-setting approaches are more common.

Structure learning

In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications the task of defining the network is too complex for humans. In this case the network structure and the parameters of the local distributions must be learned from data. Learning the structure (i.e., the graph) of a Bayesian network is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl (1987)[2] and rests on the distinction between the three possible types of adjacent triplets allowed in a directed acyclic graph (DAG): Type 1, a chain X -> Y -> Z; Type 2, a fork X <- Y -> Z; and Type 3, a collider X -> Y <- Z. Types 1 and 2 encode the same dependencies (X and Z are independent given Y) and are, therefore, indistinguishable. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable.
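The identifiability of the Type 3 (collider) triplet can be checked numerically. The CPTs below are illustrative assumptions: X and Z are independent fair coins and Y is the deterministic OR of both:

```python
from itertools import product

def joint(x, y, z):
    """P(x, y, z) = P(x) P(z) P(y | x, z) for the collider X -> Y <- Z,
    with Y = X or Z (deterministic) and X, Z fair coins."""
    return 0.5 * 0.5 * (1.0 if y == (x or z) else 0.0)

def prob(pred):
    """Probability of the event described by predicate pred(x, y, z)."""
    return sum(joint(x, y, z)
               for x, y, z in product((True, False), repeat=3)
               if pred(x, y, z))

# Marginally, P(X=T, Z=T) = P(X=T) P(Z=T): the unobserved collider blocks the path.
marginal_gap = abs(prob(lambda x, y, z: x and z)
                   - prob(lambda x, y, z: x) * prob(lambda x, y, z: z))

# Conditioning on Y = T opens the path ("explaining away"): the product no longer holds.
p_y = prob(lambda x, y, z: y)
lhs = prob(lambda x, y, z: x and z and y) / p_y                  # P(X=T, Z=T | Y=T)
rhs = (prob(lambda x, y, z: x and y) / p_y) * (prob(lambda x, y, z: z and y) / p_y)
print(marginal_gap < 1e-12, abs(lhs - rhs) > 0.05)
```

This marginal-independence signature is exactly what lets recovery algorithms orient the arrows of a collider, while chains and forks (which induce the opposite pattern) remain indistinguishable from each other.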
The same distinction applies when X and Z have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and then orient all arrows whose directionality is dictated by the conditional independencies observed.[3][4][5][6]

An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is the posterior probability of the structure given the training data. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al.[citation needed] discuss using mutual information between variables and finding a structure that maximizes this. They do so by restricting the parent candidate set to k nodes and exhaustively searching therein.

Applications

Bayesian networks are used for modelling knowledge in bioinformatics (gene regulatory networks, protein structure), medicine, document classification, image processing, data fusion, decision support systems,[citation needed] engineering[7] and law[8][7].

History

Informal variants of such networks were first used by legal scholar John Henry Wigmore, in the form of Wigmore charts, to analyse trial evidence in 1913.[9] Another variant, called path diagrams, was developed by the geneticist Sewall Wright[10] and used in the social and behavioral sciences (mostly with linear parametric models).

See also

Bayesian statistics
Chow-Liu tree
Graphical model
Influence diagram
Machine learning
Polytree
Variable-order Bayesian network
Structural equation modeling
World view

External links

Tutorial on Bayesian Networks: http:>
