<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="uk">
		<id>http://istoriya.soippo.edu.ua/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Linen81icicle</id>
		<title>HistoryPedia - User contributions [uk]</title>
		<link rel="self" type="application/atom+xml" href="http://istoriya.soippo.edu.ua/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Linen81icicle"/>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=%D0%A1%D0%BF%D0%B5%D1%86%D1%96%D0%B0%D0%BB%D1%8C%D0%BD%D0%B0:%D0%92%D0%BD%D0%B5%D1%81%D0%BE%D0%BA/Linen81icicle"/>
		<updated>2026-05-03T10:37:08Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.24.1</generator>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Of_false_negatives_was_spelling_mistakes_(eg._hemorrajia_as_an_alternative_to_hemorragia&amp;diff=307186</id>
		<title>Of false negatives was spelling mistakes (eg. hemorrajia as an alternative to hemorragia</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Of_false_negatives_was_spelling_mistakes_(eg._hemorrajia_as_an_alternative_to_hemorragia&amp;diff=307186"/>
				<updated>2018-03-27T04:43:05Z</updated>
		
		<summary type="html">&lt;p&gt;Linen81icicle: Створена сторінка: Some examples of misspelled drugs are [http://gemmausa.net/index.php?mid=forum_05&amp;amp;document_srl=2598671 Onfusion. As we can see, the simplicity in the comment (a...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some examples of misspelled drugs are avilify (Abilify) or ribotril (Rivotril). To gauge the volume of messages reporting therapies, in a first evaluation the messages were classified according to their annotations: messages with neither a drug nor an effect (55%), messages without a drug (27%), messages without an effect (5%), and messages with drug(s) and effect(s) annotated (13%). This implies that around half of them are not related to drug treatments. Regarding the false positives (see Table 2), the main source of errors is the lack of context resolution. This means that, despite correctly detecting a drug and an effect (according to the drug package insert), the context of the text did not meet the requirements to properly consider them a relation. In example FP1 (see Table 3) we can see how diabetes and Escitalopram are considered a pair by the system, even though the user talks about them in two different contexts. Furthermore, in FP2 (see Table 3) we can see how the lack of co-reference resolution introduces another important source of error for false positives. The user introduces the term side effects and then talks about two of them in particular. This kind of cataphora is not correctly resolved by the system. Another cause of false positives is that either the drug or the effect requires a modifier for the phrase to acquire its full meaning. The main source of false negatives was spelling mistakes (e.g. hemorrajia instead of hemorragia). Many users have great difficulty spelling uncommon and complicated technical terms. 
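The spelling-mistake false negatives described above suggest approximate string matching against the drug dictionary. A minimal Python sketch, assuming a plain list of dictionary names (the function names are illustrative, not the authors' implementation):

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def match_drug(token, dictionary, max_dist=1):
    # Return the closest dictionary entry within max_dist edits, or None.
    best, best_dist = None, max_dist + 1
    for name in dictionary:
        dist = edit_distance(token.lower(), name.lower())
        if best_dist > dist:
            best, best_dist = name, dist
    return best
```

For example, edit_distance("hemorrajia", "hemorragia") is 1, so a tolerance of a single edit already recovers this class of misspellings.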
This error source could be handled by a more advanced matching strategy capable of dealing with spelling errors. The use of abbreviations (depre is an abbreviation of depresión) also produces false negatives. Linguistic pre-processing techniques such as lemmatization and stemming may help to handle this kind of abbreviation. The main source of false negatives for drugs appears to be that users often misspell drug names: some generic and brand drugs have names that are complex for patients. Some examples of misspelled drugs are avilify (Abilify) or ribotril (Rivotril). Another significant source of errors was the abbreviation of drug family names. For instance, benzodiacepinas (benzodiazepines) is often shortened to benzos, which is not included in our dictionary. An interesting source of errors worth pointing out is the use of acronyms referring to a combination of two or more drugs. For example, FEC is a combination of Fluorouracil, Epirubicin and Cyclophosphamide, three chemotherapy drugs used to treat breast cancer. Most false positives for drugs were due to a lack of ambiguity resolution. Some drug names are common Spanish words, such as Alli (a slimming drug) or Puntual (a laxative). Similarly, some drug names such as alcohol (alcohol) or oxígeno (oxygen) can take a meaning other than that of a pharmaceutical substance. Another important cause of false positives is the use of drug family names as adjectives that specify an effect. This is the case of sedante (sedative) or antidepresivo (antidepressant), which can refer to a family of drugs, but also to the definition of an effect or disorder caused by a drug (sedative effects).&lt;/div&gt;</summary>
		<author><name>Linen81icicle</name></author>	</entry>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Of_towards_the_poisoning_brought_on_by_it._Within_this_case,_based&amp;diff=305703</id>
		<title>Of towards the poisoning brought on by it. Within this case, based</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Of_towards_the_poisoning_brought_on_by_it._Within_this_case,_based&amp;diff=305703"/>
				<updated>2018-03-22T01:55:55Z</updated>
		
		<summary type="html">&lt;p&gt;Linen81icicle: Створена сторінка: Lo dem  es advertising (d1, e3) para vender m  [http://s154.dzzj001.com/comment/html/?229214.html Ving in difficult and usually dangerous environments. Machismo...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lo demás es marketing (d1, e3) para vender más medicamentos. An interesting source of errors is the lack of negation resolution: although the user specifies that he or she did not experience an effect after taking a drug, the system still annotates the relation. In FP4 (see Table 3) we can see how a user expresses happiness at not having experienced hot flashes after taking Xeloda. Finally, complex sentences (coordinated and subordinated sentences) in a comment may mislead the system into annotating a relation that is not correct, giving rise to another interesting source of false positives. For example, in FP5 (see Table 3) the relation between schizophrenia and Anafranil was annotated. As we can see, there are three effects one after the other, and a drug that is separated from them by an intermediate phrase in which the user gives his or her opinion about the topic under discussion. Moreover, although Anafranil's drug package insert mentions that patients with schizophrenia should be cautious when taking the drug, it does not say that this effect is an indication or an adverse effect of the drug. With respect to the false negatives (see Table 4), the main source of errors is the long distance between the drug and the effect in the text. In FN1 (see Table 5) we have an example where a drug and an effect are not annotated as a relation because of this problem.&lt;br /&gt;
Table 2. Analysis of false positives in the test dataset (error cause; false positives; example): different context, 62, FP1; co-reference resolution needed, 46, FP2; modifier required for full understanding, 28, FP3; lack of negation resolution, 13, FP4; syntactically complex phrases, 9, FP5; total, 158. (Segura-Bedmar et al., BMC Medical Informatics and Decision Making 2015, 15(Suppl 2):S6, http://www.biomedcentral.com/1472-6947/15/S2/S6)&lt;br /&gt;
Table 3. Examples of false positives in the test dataset.&lt;br /&gt;
FP1 (d1, e1): Lo del aumento del azúcar lo decía porque tengo en mi familia antecedentes de diabetes (e1) y me da miedo (e2) que este medicamento pueda a la larga generarme esta enfermedad (e3). Otra cuestión... Por las mañanas tomo Escitalopram (d1), que me pone muy nerviosa.&lt;br /&gt;
FP2 (d1, e1): Cada vez que voy a por las pastillas de Xeloda (d1) me siento con el farmacéutico y me pregunta qué tal con los efectos secundarios (e1), la primera vez me dijo que los más comunes son la diarrea (e2) y las rojeces (e3).&lt;br /&gt;
FP3 (d1, e1), (d1, e2), (d1, e3), (d1, e4), (d1, e5), (d1, e6), (d1, e8): La intoxicación con litio (d1) genera los siguientes síntomas: náusea (e1) o malestar digestivo (e2) importante, vómitos (e3), temblor de manos (e4) acrecentado, diarrea (e5), problemas de memoria (e6), visión borrosa (e7) y dificultades de coordinación de movimientos (e8).&lt;br /&gt;
FP4 (d1, e2): También tomo Xeloda (d1), en mi caso llevo 5 ciclos de 6, no tengo ningún efecto secundario (e1), todo perfecto (qué alegría que no tengo sofocos (e2)).&lt;br /&gt;
FP5 (d1, e3): Sólo hay tres enfermedades mentales: depresión mayor (e1), depresión ansiosa (e2) y esquizofrenia (e3). Lo demás es marketing para vender más medicamentos. De todas formas, mientras te mantengan el Anafranil (d1) seguiré estando bien.&lt;/div&gt;</summary>
		<author><name>Linen81icicle</name></author>	</entry>

	<entry>
		<id>http://istoriya.soippo.edu.ua/index.php?title=Title_Loaded_From_File&amp;diff=304930</id>
		<title>Title Loaded From File</title>
		<link rel="alternate" type="text/html" href="http://istoriya.soippo.edu.ua/index.php?title=Title_Loaded_From_File&amp;diff=304930"/>
				<updated>2018-03-19T14:43:29Z</updated>
		
		<summary type="html">&lt;p&gt;Linen81icicle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Therefore, whereas a classical supervised approach would be limited by the small size of the SpanishADR corpus, we decided to use the Shallow Linguistic (SL) kernel proposed by Giuliano et al. [35], as it has been shown to perform well using only shallow linguistic features. Furthermore, we believe that kernel methods incorporating syntactic information are not suitable for social media texts, because many sentences are ungrammatical and a syntactic parser is therefore unable to process them correctly. Another important advantage is that the performance of the SL kernel does not seem to be affected by named entity recognition errors [36]. The SL kernel is a linear combination of two sequence kernels, Global Context and Local Context. The global context kernel recognizes the existence of a binary relation using the tokens of the whole sentence. Bunescu and Mooney [37] claim that binary relations are characterized by the tokens that occur in one of three contexts: Fore-Between (FB), Between (B) or Between-After (BA). As is well known in Information Retrieval, stop-words and punctuation marks are usually removed because they are not useful for finding documents. However, these features are valuable clues for identifying relations, and for this reason they are preserved in the contexts. The similarity between two relation instances is calculated using the n-gram kernel [38]. For each of the three contexts (FB, B, BA), an n-gram kernel is defined by counting the common n-grams that both relation instances share. 
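The counting step described above can be sketched as follows, assuming each relation instance has been reduced to token lists for its three contexts (the names are illustrative, not the authors' implementation):

```python
from collections import Counter

def ngrams(tokens, n_max=3):
    # Multiset of all 1..n_max-grams of a token sequence.
    grams = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            grams[tuple(tokens[i:i + n])] += 1
    return grams

def ngram_kernel(tokens_a, tokens_b, n_max=3):
    # Number of n-grams the two contexts share, counted with multiplicity.
    ga, gb = ngrams(tokens_a, n_max), ngrams(tokens_b, n_max)
    return sum(ga[g] * gb[g] for g in ga if g in gb)

def global_context_kernel(inst_a, inst_b):
    # Linear combination (here an unweighted sum) over the Fore-Between,
    # Between and Between-After contexts.
    return sum(ngram_kernel(inst_a[c], inst_b[c]) for c in ("FB", "B", "BA"))
```

Note that stop-words and punctuation are deliberately kept in the token lists, as the text explains, since they carry relational cues.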
Finally, the global context kernel is defined as the linear combination of these three n-gram kernels. The local context kernel identifies whether two entities participate in a relation by using the context information linked to each entity. (Figure 1: Pipeline integrated in the GATE platform to process user messages.) Methods. In general, co-occurrence systems present high recall but low precision. It is well known that supervised machine learning methods produce the best results in Information Extraction tasks. One major limitation of these techniques is that they require a significant number of annotated training examples. Unfortunately, very few annotated corpora exist, since their construction is costly. In this paper, we propose a system based on distant supervision [34], an alternative that does not need annotated data. The distant supervision hypothesis establishes that if two entities occur in a sentence, then both entities may participate in a relation. The learning process is supervised by a database instead of by annotated texts; hence, this approach does not suffer from the overfitting problems that make almost all supervised systems domain-dependent. The messages were split into 75% for training (a total of 63,067 messages) and 25% (21,023 messages) for testing. In this way, the database provides a training set of relation instances with which to train any supervised algorithm.&lt;/div&gt;</summary>
		<author><name>Linen81icicle</name></author>	</entry>

	</feed>