TY - THES
AB - The process of detecting and evaluating sensory information to guide behaviour is termed perceptual decision-making (PDM) and is critical for an organism's ability to interact with its external world. Individuals with autism, a neurodevelopmental condition primarily characterised by social and communication difficulties, frequently exhibit altered sensory processing, and PDM difficulties are widely reported. Recent technological advancements have pushed forward our understanding of the genetic changes accompanying this condition; however, our understanding of how these mutations affect the function of specific neuronal circuits and bring about the corresponding behavioural changes remains limited. Here, we use an innate PDM task, the looming avoidance response (LAR) paradigm, to identify a convergent behavioural abnormality across three molecularly distinct genetic mouse models of autism (Cul3, Setd5 and Ptchd1). Although mutant mice can rapidly detect threatening visual stimuli, their responses are consistently delayed, taking longer than their wild-type siblings to initiate an appropriate response. Mutant animals show abnormal adaptation in both their stimulus-evoked escape responses and exploratory dynamics following repeated stimulus presentations. Similarly delayed behavioural responses are observed in wild-type animals when faced with more ambiguous threats, suggesting the mutant phenotype could arise from a dysfunction in the flexible control of this PDM process. Our knowledge of the core neuronal circuitry mediating the LAR facilitated a detailed dissection of the neuronal mechanisms underlying the behavioural impairment. In vivo extracellular recording revealed that visual responses were unaffected within a key brain region for the rapid processing of visual threats, the superior colliculus (SC), indicating that the behavioural delay was unlikely to originate from sensory impairments. Delayed behavioural responses were recapitulated in the Setd5 model following optogenetic stimulation of the excitatory output neurons of the SC, which are known to mediate escape initiation through the activation of cells in the underlying dorsal periaqueductal grey (dPAG). In vitro patch-clamp recordings of dPAG cells uncovered a stark hypoexcitability phenotype in two of the three genetic models investigated (Setd5 and Ptchd1) that, in Setd5, is mediated by the misregulation of voltage-gated potassium channels. Overall, our results show that the ability to use visual information to drive efficient escape responses is impaired in three diverse genetic mouse models of autism and that, in one of the models studied, this behavioural delay likely originates from differences in the intrinsic excitability of a key subcortical node, the dPAG. Furthermore, this work showcases the use of an innate behavioural paradigm to mechanistically dissect PDM processes in autism.
AU - Burnett, Laura
ID - 12716
SN - 2663-337X
TI - To flee, or not to flee? Using innate defensive behaviours to investigate rapid perceptual decision-making through subcortical circuits in mouse models of autism
ER -
TY - THES
AB - Understanding the mechanisms of learning and memory formation has always been one of the main goals in neuroscience. Pavlov (1927) already used his classic conditioning experiments to study the neural mechanisms governing behavioral adaptation.
What was not known back then was that the part of the brain largely responsible for this type of associative learning is the cerebellum. Since then, plenty of theories on cerebellar learning have emerged. Despite their differences, one thing they all have in common is that learning relies on synaptic and intrinsic plasticity. The goal of my PhD project was to unravel the molecular mechanisms underlying synaptic plasticity at two synapses that have been implicated in motor learning, in an effort to understand how learning and memory formation are processed in the cerebellum. One of the earliest and most well-known cerebellar theories postulates that motor learning largely depends on long-term depression at the parallel fiber-Purkinje cell (PF-PC) synapse. However, the discovery of other types of plasticity in the cerebellar circuitry, like long-term potentiation (LTP) at the PF-PC synapse, potentiation of molecular layer interneurons (MLIs), and plasticity transfer from the cortex to the cerebellar/vestibular nuclei, has increased the popularity of the idea that multiple sites of plasticity might be involved in learning. Still, much remains unknown about the molecular mechanisms responsible for these types of plasticity and whether they occur during physiological learning. In the first part of this thesis, we analyzed the variation and nanodistribution of voltage-gated calcium channels (VGCCs) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid type glutamate receptors (AMPARs) at the parallel fiber-Purkinje cell synapse after vestibulo-ocular reflex phase reversal adaptation, a behavior that has been suggested to rely on PF-PC LTP. We found that on the last day of adaptation there is no learning trace in the form of variation in VGCCs or AMPARs at the PF-PC synapse, but instead a decrease in the number of PF-PC synapses. These data seem to support the view that learning is stored in the cerebellar cortex only during an initial learning phase, being transferred later to the vestibular nuclei. Next, we studied the role of MLIs in motor learning using a relatively simple and well-characterized behavioral paradigm – horizontal optokinetic reflex (HOKR) adaptation. We found behavior-induced MLI potentiation in the form of an increase in release probability that could be explained by an increase of VGCCs on the presynaptic side. Our results strengthen the idea of distributed cerebellar plasticity contributing to learning and provide a novel mechanism for increasing release probability.
AU - Alcarva, Catarina
ID - 12809
SN - 2663-337X
TI - Plasticity in the cerebellum: What molecular mechanisms are behind physiological learning
ER -
TY - THES
AB - During navigation, animals can infer the structure of the environment by computing the optic flow cues elicited by their own movements, and subsequently use this information to instruct proper locomotor actions. These computations require a panoramic assessment of the visual environment in order to disambiguate similar sensory experiences that may require distinct behavioral responses. The estimation of global motion patterns is therefore essential for successful navigation. Yet, our understanding of the algorithms and implementations that enable coherent panoramic visual perception remains limited. Here, I pursue this problem by dissecting the functional aspects of interneuronal communication in the lobula plate tangential cell network in Drosophila melanogaster.
The results presented in this thesis demonstrate that the basis for effective interpretation of optic flow in this circuit is a set of stereotyped synaptic connections that mediate the formation of distinct subnetworks, each extracting a particular pattern of global motion. Firstly, I show that gap junctions are essential for a correct interpretation of binocular motion cues by horizontal motion-sensitive (HS) cells. HS cells form electrical synapses with contralateral H2 neurons that are involved in detecting yaw rotation and translation. I developed a FlpStop-mediated mutant of the gap junction protein ShakB that disrupts these electrical synapses. While the loss of electrical synapses does not affect the tuning of direction selectivity in HS neurons, it severely alters their sensitivity to horizontal motion on the contralateral side. These physiological changes result in an inappropriate integration of binocular motion cues in walking animals. While wild-type flies form a binocular perception of visual motion by non-linear integration of monocular optic flow cues, the mutant flies sum the monocular inputs linearly. These results indicate that, rather than averaging signals in neighboring neurons, gap junctions operate in conjunction with chemical synapses to mediate complex non-linear optic flow computations. Secondly, I show that stochastic manipulation of neuronal activity in the lobula plate tangential cell network is a powerful approach for studying the neuronal implementation of optic flow-based navigation in flies. Tangential neurons form multiple subnetworks, each mediating a course-stabilizing response to a particular global pattern of visual motion. Application of genetic mosaic techniques can provide sparse optogenetic activation of HS cells in numerous combinations. These combinations of activated neurons drive an array of distinct behavioral responses, providing important insights into how the visuomotor transformation is performed in the lobula plate tangential cell network. This approach can be complemented by stochastic silencing of tangential neurons, enabling direct assessment of the functional role of individual tangential neurons in the processing of specific visual motion patterns. Taken together, the findings presented in this thesis suggest that establishing specific activity patterns of tangential cells via stereotyped synaptic connectivity is key to efficient optic flow-based navigation in Drosophila melanogaster.
AU - Pokusaeva, Victoria
ID - 12826
SN - 2663-337X
TI - Neural control of optic flow-based navigation in Drosophila melanogaster
ER -
TY - THES
AB - Most energy in humans is produced in the form of ATP by the mitochondrial respiratory chain, which consists of several protein assemblies embedded in the lipid membrane (complexes I-V). Complex I is the first and largest enzyme of the respiratory chain and is essential for energy production. It couples the transfer of two electrons from NADH to ubiquinone with proton translocation across the bacterial or inner mitochondrial membrane. The coupling mechanism between electron transfer and proton translocation is one of the biggest enigmas in bioenergetics and structural biology. Even though the enzyme has been studied for decades, only recent technological advances in cryo-EM have allowed its extensive structural investigation. Complex I from E. coli is of special importance because it is a perfect model system with a rich mutant library; however, the structure of the entire complex was unknown.
In this thesis, I resolved structures of the minimal version of complex I from E. coli in different states, including reduced, inhibited, under reaction turnover, and several others. Extensive analyses of these structures and comparison to structures from other species made it possible to derive general features of the conformational dynamics and to propose a universal coupling mechanism. The mechanism is straightforward, robust, and consistent with decades of experimental data available for complex I from different species. Cyanobacterial NDH (cyanobacterial complex I) is part of the broad complex I superfamily and was also studied in this thesis. It plays an important role in cyclic electron transfer (CET), during which electrons are cycled around PSI through ferredoxin and plastoquinone to generate a proton gradient without NADPH production. Here, I solved the structure of NDH and revealed an additional state that had not been observed before. The novel "resting" state allowed me to propose a mechanism of CET regulation. Moreover, the conformational dynamics of NDH resemble those of complex I, which suggests a broader universality of the proposed coupling mechanism. In summary, the results presented here helped to interpret decades of experimental data on complex I and contributed to a fundamental mechanistic understanding of protein function.
AU - Kravchuk, Vladyslav
ID - 12781
SN - 2663-337X
TI - Structural and mechanistic study of bacterial complex I and its cyanobacterial ortholog
ER -
TY - THES
AB - Deep learning has become an integral part of a large number of important applications, and many of the recent breakthroughs have been enabled by the ability to train very large models capable of capturing complex patterns and relationships from the data. At the same time, the massive sizes of modern deep learning models have made their deployment to smaller devices more challenging; this is particularly important, as in many applications users rely on accurate deep learning predictions but only have access to devices with limited memory and compute power. One solution to this problem is to prune neural networks, by setting as many of their parameters as possible to zero, to obtain accurate sparse models with a lower memory footprint. Despite great research progress in obtaining sparse models that preserve accuracy while satisfying memory and computational constraints, there are still many challenges associated with efficiently training sparse models, as well as with understanding their generalization properties. The focus of this thesis is to investigate how the training process of sparse models can be made more efficient, and to understand the differences between sparse and dense models in terms of how well they can generalize to changes in the data distribution. We first study a method for co-training sparse and dense models at a lower cost compared to regular training. With our method we can obtain very accurate sparse networks, and dense models that can recover the baseline accuracy. Furthermore, we are able to more easily analyze the differences, at the prediction level, between the sparse-dense model pairs. Next, we investigate the generalization properties of sparse neural networks in more detail by studying how well different sparse models trained on a larger task can adapt to smaller, more specialized tasks in a transfer learning scenario.
Our analysis across multiple pruning methods and sparsity levels reveals that sparse models provide features that can transfer similarly to, or better than, the dense baseline. However, the choice of pruning method plays an important role and can influence the results both when the features are fixed (linear finetuning) and when they are allowed to adapt to the new task (full finetuning). Using sparse models with fixed masks for finetuning on new tasks has an important practical advantage, as it enables training neural networks on smaller devices. However, one drawback of current pruning methods is that the entire training cycle has to be repeated for every sparsity target to obtain the initial sparse model; as a consequence, the overall training process is costly and multiple models need to be stored. In the last part of the thesis, we propose a method that trains accurate dense models that are compressible in a single step to multiple sparsity levels, without additional finetuning. Our method results in sparse models that can be competitive with existing pruning methods and can also successfully generalize to new tasks.
AU - Peste, Elena-Alexandra
ID - 13074
SN - 2663-337X
TI - Efficiency and generalization of sparse neural networks
ER -