<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="uk">
		<id>http://istoriya.soippo.edu.ua/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Knife6hope</id>
		<title>HistoryPedia - User contributions [uk]</title>
		<link rel="self" type="application/atom+xml" href="http://istoriya.soippo.edu.ua/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Knife6hope"/>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=%D0%A1%D0%BF%D0%B5%D1%86%D1%96%D0%B0%D0%BB%D1%8C%D0%BD%D0%B0:%D0%92%D0%BD%D0%B5%D1%81%D0%BE%D0%BA/Knife6hope"/>
		<updated>2026-05-04T11:39:47Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.24.1</generator>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Hermal_hyperalgesia_at_1_or_7_days_immediately_after_CFA_injection,_the_plantar_test&amp;diff=220199</id>
		<title>Hermal hyperalgesia at 1 or 7 days immediately after CFA injection, the plantar test</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Hermal_hyperalgesia_at_1_or_7_days_immediately_after_CFA_injection,_the_plantar_test&amp;diff=220199"/>
				<updated>2017-08-24T11:38:30Z</updated>
		
		<summary type="html">&lt;p&gt;Knife6hope: Створена сторінка: The resulting supernatant was moved into a glassy vial because the analysis sample of every FFA.Injection volumes had been 5 mL introduced more than 5 s. Verifi...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resulting supernatant was transferred into a glass vial as the analysis sample of each FFA. Injection volumes were 5 μL introduced over 5 s. Verification of needle position within the lateral cerebroventricle was made by i.c.v. dye injection and subsequent post-mortem confirmation of dye placement in brain sections. FFAs comparative analysis. FFAs comparative analysis was performed as previously described, with some modifications. The composition of FFAs was analyzed with the UHPLC-MS/MS technique controlled by LabSolutions LCMS version 5.4. To perform the relative concentration assessment, the peak area values obtained from the chromatogram of each fatty acid were normalized using that of C19:0 tuberculostearic acid as an internal standard. Next, the amounts of each fatty acid in the hypothalamus extract, with and without CFA treatment, were calculated by subtracting the results of each negative control sample from those of the corresponding hypothalamus tissue extract. HPLC separation was performed on a Mightysil RP-18 GP column. The mobile phases were gradients of 10 mM ammonium acetate/methanol. The flow rate was set to 0.3 mL/min. Western blot analyses. Western blotting was performed as previously described, with some modifications. Hypothalamus tissue was homogenized in homogenization buffer. Protein samples were resolved by 15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto nitrocellulose membranes. GPR40 was then assessed using rabbit polyclonal primary antibodies, and glial fibrillary acidic protein (GFAP) was detected using mouse monoclonal primary antibodies. 
Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as a loading control and was detected using primary antibodies. Blots for GPR40 and GFAP were incubated overnight with the primary antibody at 4°C in Tris-buffered saline containing 0.1% Tween-20 and blocking agent. After washing, blots were incubated with horseradish peroxidase (HRP)-conjugated anti-rabbit IgG for GPR40 and HRP-conjugated anti-mouse IgG for GFAP and GAPDH for 1 h at room temperature. Immunoreactive bands were visualized using a Light-Capture system with an ECL™ Western Blotting Analysis System. The signal intensities of immunoreactive bands were analyzed using a CS Analyzer. GPR40, which was prelabeled with t... Thermal hyperalgesia: at 1 or 7 days after CFA injection, the plantar test was performed on the mice 30 min after DHA or GW9508 i.c.v. injection. Flavopiridol-treated mice underwent the plantar test at 1 or 7 days after CFA injection. Drugs and administration schedule. DHA, the selective GPR40 agonist GW9508 and the GPR40 antagonist GW1100 were dissolved in 1% dimethyl sulfoxide, and the solution was diluted with saline before von Frey testing. The doses of GW9508 were selected based upon our previous publication, whereas GW1100 was chosen on the basis of earlier reports and our preliminary experiments. Under a non-anesthetized state, DHA and GW9508 were administered via the intracerebroventricular (i.c.v.) route 10 min before CFA injection, and GW1100 was administered via the i.c.v. route 10 min before GW9508 injection. Flavopiridol, a cyclin-dependent kinase inhibitor, was administered by i.c.v. injection into the left lateral ventricle of the mice twice per day after CFA treatment.&lt;/div&gt;</summary>
		<author><name>Knife6hope</name></author>	</entry>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=D_model,_we_argue,_can_explain_extant_data_and_account_for&amp;diff=216778</id>
		<title>D model, we argue, can explain extant data and account for</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=D_model,_we_argue,_can_explain_extant_data_and_account_for&amp;diff=216778"/>
				<updated>2017-08-18T02:27:26Z</updated>
		
		<summary type="html">&lt;p&gt;Knife6hope: Створена сторінка: Within the following, we will (1) briefly critique the currentFrontiers in Human Neurosciencewww.frontiersin.orgMay 2014 | Volume eight | Report 254 |Bach et al...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[Frontiers in Human Neuroscience, www.frontiersin.org, May 2014, Volume 8, Article 254. Bach et al., The affordance-matching hypothesis.] In the following, we will (1) briefly review the current understanding of action knowledge associated with objects; (2) sketch a simple model of how this knowledge could contribute to action understanding; and (3) review common findings in humans and monkeys on the use of object-related knowledge in action observation in the light of this model. Throughout the manuscript we use the term &amp;quot;goal&amp;quot; to refer to desired states of the environment, one's own body, or mind. Following Csibra (2008), we presuppose that goals can be located at different levels, reaching from simple, low-level goals, such as completing a grasp or hammering in a nail, to distal goals such as hanging up a picture frame. We use the term &amp;quot;action&amp;quot; to refer to bodily movements that are performed with the express goal of achieving such an outcome. The terms &amp;quot;target objects&amp;quot; or &amp;quot;recipient objects&amp;quot; are used to refer to the objects affected by these actions. ACTION KNOWLEDGE PROVIDED BY OBJECTS. The productive use of objects sets humans apart from even their closest relatives in the animal kingdom (e.g., Johnson-Frey, 2003). 
Most human actions involve objects, either as the recipient to be acted upon, or as a tool to be acted with (cf. Johnson-Frey et al., 2003). The capacity to use objects has unlocked a vast array of effects humans can achieve in the environment that would otherwise be outside the scope of their effector systems. They range from cutting with a knife and shooting a gun to sending a text message with a mobile phone and traveling the world with various forms of vehicle. The capacity for using these objects is underpinned by a specialized network in the left hemisphere, spanning frontal, parietal and temporal regions (Haaland et al., 2000; Johnson-Frey, 2004; Binkofski and Buxbaum, 2013; for reviews, see van Elk et al., 2013), parts of which appear to be unique to humans (Orban et al., 2006; Peeters et al., 2009, 2013). This network supports object-directed action by coding (at least) two types of knowledge. For each object, humans learn not just what goals they can, in principle, achieve with it (&amp;quot;function knowledge&amp;quot;), but also the motor behaviors that are needed to achieve these goals (&amp;quot;manipulation knowledge&amp;quot;) (Kelemen, 1999; Buxbaum et al., 2000; Buxbaum and Saffran, 2002; Casby, 2003; for a review, see van Elk et al., 2013). When growing up, one learns, for example, that a tap is for obtaining water, and that this requires turning it clockwise. Similarly, one learns that a knife is for cutting, and that this requires alternating forward and backward movements, with an amount of downward pressure that depends upon the object one wants to cut. Objects, therefore, appear to provide one with the same links between potential action outcomes and required motor behaviors that are central to the control of voluntary action (see Hommel et al., 2001).&lt;/div&gt;</summary>
		<author><name>Knife6hope</name></author>	</entry>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Hence_no_contextualROI-Based_MVPA_An_independent_pSTS_ROI_was_obtained_from&amp;diff=216776</id>
		<title>Hence no contextualROI-Based MVPA An independent pSTS ROI was obtained from</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Hence_no_contextualROI-Based_MVPA_An_independent_pSTS_ROI_was_obtained_from&amp;diff=216776"/>
				<updated>2017-08-18T02:24:38Z</updated>
		
		<summary type="html">&lt;p&gt;Knife6hope: Створена сторінка: Specifically, the data labels for every participant were [http://ramaaltofoula.com/members/may9lier/activity/519204/ http://ramaaltofoula.com/members/may9lier/a...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Specifically, the data labels for each participant were permuted (within each run) 100 times and the classification analysis was repeated using each permuted label set to yield 100 chance accuracies for each participant. We then randomly drew one of the chance accuracies from each participant and averaged these accuracies to obtain a chance group-level accuracy. This random sampling (with replacement) was repeated 10^5 times to create a group-level null distribution. The true group-level classification accuracy was then compared to the null distribution to obtain the p-value associated with the accuracy. Whole-Brain Searchlight Analysis. To identify other brain regions that discriminate context-specific information, we carried out a whole-brain searchlight analysis in subject-space for each participant, with a three-voxel-radius searchlight consisting of 123 voxels centered on every non-zero voxel in an MNI152 brain mask. The four-way classification analysis performed for each searchlight followed the method employed in the ROI-based analysis, except that no feature selection was performed. The classification accuracy for each searchlight was assigned to the voxel at the center of the searchlight, yielding a whole-brain classification accuracy map for each participant. Each participant's accuracy map was transformed back into MNI152 template space. The group-level classification accuracy map [Frontiers in Human Neuroscience, www.frontiersin.org, September 2015, Volume 9. Lee and McCarthy, Neural discrimination of contextual information.] was obtained by averaging the accuracy maps from all participants. 
Significance testing of the whole-brain classification results also applied permutation and bootstrap sampling techniques, along with cluster thresholding to correct for multiple comparisons (Stelzer et al., 2013). Specifically, we ran the searchlight classification analysis... hence no contextual... ROI-Based MVPA. An independent pSTS ROI was obtained from the Atlas of Social Agent Perception (Engell and McCarthy, 2013). Briefly, this Atlas integrated results from a Biological Motion localizer (consisting of blocks of point-light figures and blocks of their scrambled counterparts) that was run on 121 participants. The probability map of the Biological Motion &amp;gt; Scrambled Motion contrast, which localizes the pSTS, was thresholded at 0.1 and intersected with the right Supramarginal Gyrus from the Harvard-Oxford Atlas to obtain a liberal pSTS mask. The mask was further edited manually to remove voxels spreading into the parietal operculum. The resulting ROI of 751 voxels (Figure 1, in yellow) was then transformed into subject-space for each participant. The beta estimates in the ROI were mean-normalized by z-scoring within each sample to remove mean differences between samples. Feature selection was performed on the samples in the training set of each cross-validation fold by conducting a one-way ANOVA on the beta estimates for the four &amp;quot;Preference&amp;quot; trials for each voxel in the pSTS ROI. The top 123 voxels (to match the number of voxels used for the searchlight analysis described later) that showed the greatest variance among the four trial types were selected as features for that cross-validation fold. The accuracies from all participants were then averaged to obtain the group-level classification accuracy. Significance testing at the group level was implemented using a combination of permutation and bootstrap sampling methods (Stelzer et al., 2013).&lt;/div&gt;</summary>
		<author><name>Knife6hope</name></author>	</entry>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Ponse_to_the_task-relevant_stimulus_(image)._Within_a_joint_version_of&amp;diff=214869</id>
		<title>Ponse to the task-relevant stimulus (image). Within a joint version of</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Ponse_to_the_task-relevant_stimulus_(image)._Within_a_joint_version_of&amp;diff=214869"/>
				<updated>2017-08-15T03:20:29Z</updated>
		
		<summary type="html">&lt;p&gt;Knife6hope: Створена сторінка: Also, we hypothesize that A represents B's upcoming utterance within a equivalent format and computes timing estimates and content material predictions for that...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Also, we hypothesize that A represents B's upcoming utterance in a similar format and computes timing estimates and content predictions for that utterance, as well (right box). To test these hy... ...ponse to the task-relevant stimulus (image). In a joint version of this task, participants take turns naming the picture and performing a secondary task, which is either congruent or incongruent with the primary task of picture naming. One possibility is for the participants to be in the same room, with the congruent task being tacit naming of the picture and the incongruent task being tacit naming of the word. Alternatively, the participants could be in separate, soundproofed rooms, in which case the secondary task could be overt picture or word naming. In either case, we would have a Same condition (congruent secondary task), in which both participants produce the same utterance (i.e., the picture's name), and a Different condition [Frontiers in Psychology, Cognition, November 2011, Volume 2, Article 275. Gambi and Pickering, The coordination of utterances.] (incongruent secondary task), in which they produce different utterances (i.e., the picture's name and the distractor word). If speakers represent the processes underlying their partners' acts of speaking, we expect both the Same and Different conditions to differ from the individual task. If speakers represent the processes underlying their partners' response through a forward model, we expect longer latencies in the Same than in the Different condition. If representing the other involves activation of one's own production system, on the contrary, we expect faster latencies in the Same than in the Different condition. In addition, we might find enhanced effects of distractor words on the processing of the pictures (e.g., greater semantic interference) in the Different condition. Joint picture-picture naming. 1. 
A: so if one person said he couldn't invest (0.1) B: then I'd have to wait (Lerner, 1991, p. 445). Thus, speakers have more need of representing their own utterance and their partner's upcoming utterance. Consecutive production paradigms should then somewhat mimic the naturalistic situation exemplified in (1). For example, A and B could be shown two pictures (e.g., of a wig and of a carrot), one on the right and one on the left of a computer screen. A first names the left picture (wig); then B names the right picture (carrot; see Figure 2A). They may be told to minimize the delay between the two names (cf. Griffin, 2003). We thereby create a joint goal for them. This situation certainly differs from naturally occurring instances of &amp;quot;collaborative turn completion&amp;quot;, but it allows clear experimental control, and is arguably comparable to using tasks such as picture naming to understand natural monolog. (In an alternative version of the task, participants might simply start speaking in response to cues, which might occur at different times (i.e., SOAs) depending on condition.) Figure 2A presents a schematic description. Given the complexity of the situation, in order to ensure that the figure is readable, we illustrate what happens from the perspective of A, the speaker who names the first picture. The timeline at the top shows the time course of word production for A's utterance (and the onset of B's utterance).&lt;/div&gt;</summary>
		<author><name>Knife6hope</name></author>	</entry>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Preoccupation,_anxiety_and_irritated_emotional_feelings;_2._Extraversion-Introversion:_Individuals_with_a_high&amp;diff=214378</id>
		<title>Preoccupation, anxiety and irritated emotional feelings; 2. Extraversion-Introversion: Individuals with a high</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Preoccupation,_anxiety_and_irritated_emotional_feelings;_2._Extraversion-Introversion:_Individuals_with_a_high&amp;diff=214378"/>
				<updated>2017-08-14T06:59:28Z</updated>
		
		<summary type="html">&lt;p&gt;Knife6hope: Створена сторінка: Two one-way ANOVA with Group (A and NA) as [https://bongalong.co.za/members/bridge0heat/activity/198622/ https://bongalong.co.za/members/bridge0heat/activity/19...&lt;/p&gt;
&lt;hr /&gt;
		<summary type="html">&lt;div&gt;Two one-way ANOVAs with Group (A and NA) as independent variable and correct answers (hits) in solving the 36-item or the 29-item RMET showed a significant difference (F(1,98) = 43.09; p&lt;/div&gt;</summary>
		<author><name>Knife6hope</name></author>	</entry>

	</feed>