Graeme Hirst, University of Toronto

Of the many kinds of ambiguity in language, the two that have received the most attention in computational linguistics are those of word senses and those of syntactic structure, and the reasons for this are clear: these ambiguities are overt, their resolution is seemingly essential for any practical application, and they seem to require a wide variety of methods and knowledge sources with no pattern apparent in what any particular instance requires.

Right at the birth of artificial intelligence, in his 1950 paper "Computing machinery and intelligence", Alan Turing saw the ability to understand language as an essential test of intelligence, and an essential test of language understanding was an ability to disambiguate; his example involved deciding between the generic and specific readings of the phrase a winter's day. The first generations of AI researchers found it easy to construct examples of ambiguities whose resolution seemed to require vast knowledge and deep understanding of the world and complex inference on this knowledge; for example, Pharmacists dispense with accuracy. The disambiguation problem was, in a way, nothing less than the artificial intelligence problem itself. No use was seen for a disambiguation method that was less than 100% perfect; either it worked or it didn't. Lexical resources, such as they were, were considered secondary to non-linguistic common-sense knowledge of the world.