Channel: Microsoft Community - Super Fresh

Problem adding an "Outlook.com" account in Outlook 2016


Hello,

When I add an Outlook.com account in Outlook using automatic configuration, everything works fairly well except for one point:

my address "*** Email address is removed for privacy ***" turns into an odd-looking address: *** Email address is removed for privacy ***

I wanted to try adding the account manually, but I cannot find the mail server to use to create an Exchange account with Outlook.com.

Thanks for your help,

Sébastien


Sent emails are no longer displayed


Hello!
I have the following problem:

Today Windows Live Mail would no longer open. Unfortunately I did not note down the error message.
After restarting the PC it opened again, but all received, already-read emails (over 4,000) were re-imported, filed under "Unread email", and marked as unread.

And under Sent Items, no emails show up at all anymore.

How can I restore them?
Can someone please help me?
Many thanks.

Blue Screen of Death, 0x000000d1 (minidump attached). Please help, guys! :(


Good day, everyone.

I was watching a video when suddenly, bam, a blue screen! I think this is only the second one in all my time on Windows 10; the earlier one had a different code. I really don't know much about this area, and searching Google for these stop screens got me nowhere :(

The computer was restarted after a critical error. Error code: 0x000000d1 (0x0000000000000000, 0x0000000000000002, 0x0000000000000000, 0xfffff809f6ea6420).

I made a report with the BSOD tool, as I was asked to back in September, when the blue screen at that time was caused by Kaspersky. Kaspersky is gone now, and the code is different. Anyway, here it is:

https://yadi.sk/d/cNAb9gCh3C9JGC

And the minidump, in case it is needed:

https://yadi.sk/d/yDuWMaUb3C9D65

I hope some kind soul can help figure out the cause.

Back then, for the other blue screen (the Kaspersky one), he sent me this:

BugCheck A, {d0, 2, 1, fffff800e324c7fc}

*** WARNING: Unable to verify timestamp for klim6.sys
*** ERROR: Module load completed but symbols could not be loaded for klim6.sys
*** WARNING: Unable to verify timestamp for kneps.sys
*** ERROR: Module load completed but symbols could not be loaded for kneps.sys
Probably caused by : klim6.sys ( klim6+2bf0 )

That was when he asked me to upload the file made with the BSOD tool; perhaps it will also help someone determine the cause of the new blue screen :(

Shared Outlook Cal w/ iCal


Does anyone know how to open and sync an already-shared calendar from Outlook for Mac to iCal?

I have a shared calendar in Outlook for Mac (not Outlook 2016), and I need to open that calendar and sync it to iCal.

Thank you!

Running chkdsk in Windows 10


When I open a command prompt (Win key + X) and enter "chkdsk C: /f /r /x", the system says it cannot lock the disk because it is in use by other processes, then asks if I want to run the check on the next restart. I answer yes, exit the command prompt, and restart.

On the next restart chkdsk does not run; the system boots normally.

So how do I get this to execute?

How Do I Get A List Of The Applications That Are Installed On My System?


Windows 10 Pro x64

I am about to move to a new machine with a clean build of Windows 10 Pro x64 from my current machine that has Windows 10 Pro x64.

I would like to get a list of all of the applications that I will have to reinstall.  I thought I could do that from Programs and Features in Control Panel.  Unfortunately, there does not appear to be any way to export the list.

You can get a list of the applications that will be removed if you reset your PC, but that list does not appear to be exportable before you actually reset the system.

Is there any way to get an exportable list of the applications installed on this machine?

Outlook 2010 "Can't open file. The file may not exist, you may not have permission to open it, or it may be open in another program"


Hi,

I have an issue with opening .msg files from a network share, in Outlook 2010/Windows 7. I am the owner of the files and have full access to them. The error is: "Can't open file. The file may not exist, you may not have permission to open it, or it may be open in another program"

The file isn't in use by another program; I can open it in Notepad. If I copy the file from the network drive to the C: drive, I can open it. So far I have tried:

- Starting Outlook in Safe Mode

- Renaming extend.dat

- Clearing Outlook and temp cache files

- Disabling AV on the server and the PC

- Checking permissions (I have full access to the drive)

- Recreating my Outlook profile

What is stopping Outlook from opening msg files on a network drive?

Mike

Running Outlook 2010 on Windows 7 Pro on a PC. Outlook has stopped running.


Running Outlook 2010 on Windows7 Pro on a PC.  Outlook has stopped running and sends a message that says "Outlook has stopped working.  Windows will close the program and notify you if a solution is available".  

I've never received an error message like that before: one with no reference number for the problem, and one that uses the word "if". Now Outlook is in some kind of locked mode, and I can't even view any past emails or folders I've set up. It shuts down when I click on the message above or use the cursor to close it.

I found "MSN mail" on my start up menu, and added my email address, which created a file where I can now see my email, but the filing system is all messed up (as it looks on an Apple device like my iPad mini without Outlook).  

Will I have to uninstall Outlook and reinstall it? I'm a total novice at tech, so I likely won't be able to do that on my own if that's the solution.

Also, I have a new Android phone that was receiving mail via the Outlook app. It stopped syncing with Outlook, coincidentally or not, when the app prompted me to download "One" to get cloud backup, which I do not want and do not have on my PC either. Could refusing to let the One cloud app download on my devices be related? I can't get email on my phone now, and that started at the same time my PC Outlook went down.


The Local Security Authority Process

After resetting my Windows 10 password, opening any browser makes the Local Security Authority Process load the CPU at 50-100%, and a page takes about five minutes to load. This did not happen before. How can I fix it without reinstalling the system? It is definitely not a virus.

Excel dropping a digit - Maximum integer?


If I enter the number "2715276658900505" and hit Enter in a new Excel spreadsheet, it drops the final "5" and enters "2715276658900500".

Is there a limit to the size of integer Excel can display?
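For what it's worth, this matches Excel's documented limit of 15 significant digits: numbers are stored as double-precision floats and any digits past the fifteenth are replaced with zeros. A minimal Python sketch of that rounding (my own illustration, not Excel code):

```python
# Excel stores numbers in double-precision floating point and keeps at
# most 15 significant digits; anything beyond becomes a zero.
n = 2715276658900505                # 16 significant digits
as_excel = int(float(f"{n:.15g}"))  # round to 15 significant digits
print(as_excel)                     # 2715276658900500: the final 5 is gone
```

To keep all 16 digits in Excel you would have to store the value as text rather than as a number.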

I'm using Office 2013 (version 15.0.4893.1002) and Windows 8.

Thanks

Mark

OneDrive app won't work on my MacBook Pro


I have a MacBook Pro running the latest macOS Sierra 10.12.3.  I can go to OneDrive on a browser but the app refuses to load.  Once I download the app and put in my user name and password I get a pop-up screen that says "Signing in" and it will stay that way for an extended time, after which I get the message that there is a server error -- please try again.

I do get a small cloud icon in the top menu bar, but its drop-down menu only gives two choices, "Logging In" and "Report a Problem". When I try to report a problem it also hangs until I get the server error.

I have uninstalled the app and reinstalled it twice, with the same issue both times. I have rebooted the MacBook Pro, with the same results. Is there an incompatibility issue with the latest OS?

John

apps that are hidden

Where are all of my apps that are hidden from the Start screen in Windows 8.1?

Unblocking Contacts

My boss sends me an email and I never receive it. How do I unblock him?

Excel WENN (IF) function - check formula


Good day, dear forum,

I have a problem:

Cell E3 of my worksheet contains the following formula: =WENN(E1=E2;0)

That is, if the amount in cell E1 equals the amount in E2, the formula should write the value 0 in E3; if the condition is not met, "FALSCH" (FALSE) should appear automatically, or it should write nothing, or an explicit "Falsch". What Excel actually writes, however, is "FALSCH" even when E1 contains the same amount as E2 (instead of the desired 0). Or it writes nothing at all when the formula includes an else part, =WENN(E1=E2;0;""), even though both cells hold the same value (and 0 should appear). =WENN(E1=E2;0;"Falsch") does not work either...
I assume the "FALSCH" (automatic), the empty cell, or the "Falsch" (depending on how the formula is built) has its reason; I just don't know what I am doing wrong or how I could change it. When cell A1 holds a manually entered value equal to E1 or E2, everything works fine. But when E2 takes its value from a formula or a reference to another cell (E2 contains =A1) that in turn references yet another cell with a formula, it does not work and I get the wrong result... Why?

The following picture shows the correct result, since A1 (143.09) was entered manually. The result is also correct when E1 and E2 are not equal; then Falsch appears.

The following picture shows a wrong result: A1 (143.09) refers to another cell (with a formula) and was not entered manually. E1 equals E2, so 0 should appear as above, not Falsch...
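The behavior described here is very likely binary floating-point noise: a value produced through a chain of formulas can differ from a typed-in 143.09 by a tiny amount, so an exact = comparison fails even though both cells display the same number. This Python sketch shows the same effect and the usual fix, rounding both sides before comparing (ROUND, or RUNDEN in German Excel, e.g. =WENN(RUNDEN(E1;2)=RUNDEN(E2;2);0;"Falsch")):

```python
# A value typed by hand and the "same" value produced by arithmetic can
# differ in the last binary digits, so an exact comparison fails.
typed = 0.3
computed = 0.1 + 0.2
print(typed == computed)    # False
print(computed)             # 0.30000000000000004

# Rounding both sides first (ROUND / RUNDEN in Excel) fixes the test.
print(round(typed, 10) == round(computed, 10))  # True
```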



I hope you can help me; thank you all very much in advance!

Best regards, Viviana

OneDrive app not loading on MacBook Pro


I can download the OneDrive app, but once I get past entering my user name and password, the system shows a popup box that says "Signing in", which runs until I get the message "Server error. Try again later." I've tried three times over three days; it just won't load. Any suggestions?

John


Windows 10 update


Every time Windows 10 updates, I can't get onto my computer. This time I'm getting a blue screen error. The website with instructions for fixing blue screen errors says to click on Windows Update, but when I get to that step it just goes back to the Safe Mode screen. I would like to get my computer back!

I just don't like that my computer goes down whenever Windows 10 updates. The computer originally had Windows 8.1.

THE RULES OF PREDICTION


Amos liked to say that if you are asked to do anything—go to a party, give a speech, lift a finger—you should never answer right away, even if you are sure that you want to do it. Wait a day, Amos said, and you’ll be amazed how many of those invitations you would have accepted yesterday you’ll refuse after you have had a day to think it over. A corollary to his rule for dealing with demands upon his time was his approach to situations from which he wished to extract himself. A human being who finds himself stuck at some boring meeting or cocktail party often finds it difficult to invent an excuse to flee. Amos’s rule, whenever he wanted to leave any gathering, was to just get up and leave. Just start walking and you’ll be surprised how creative you will become and how fast you’ll find the words for your excuse, he said. His attitude to the clutter of daily life was of a piece with his strategy for dealing with social demands. Unless you are kicking yourself once a month for throwing something away, you are not throwing enough away, he said. Everything that didn’t seem to Amos obviously important he chucked, and thus what he saved acquired the interest of objects that have survived a pitiless culling. One unlikely survivor is a single scrap of paper with a few badly typed words on it, drawn from conversations he had with Danny in the spring of 1972 as they neared the end of their time in Eugene. 
For some reason Amos saved it:

People predict by making up stories
People predict very little and explain everything
People live under uncertainty whether they like it or not
People believe they can tell the future if they work hard enough
People accept any explanation as long as it fits the facts
The handwriting was on the wall, it was just the ink that was invisible
People often work hard to obtain information they already have
And avoid new knowledge
Man is a deterministic device thrown into a probabilistic Universe
In this match, surprises are expected
Everything that has already happened must have been inevitable

At first glance it resembles a poem. What it was, in fact, was early fodder for his and Danny’s next article, which would also be their first attempt to put their thinking in such a way that it might directly influence the world outside of their discipline. Before returning to Israel, they had decided to write a paper about how people made predictions. The difference between a judgment and a prediction wasn’t as obvious to everyone as it was to Amos and Danny. To their way of thinking, a judgment (“he looks like a good Israeli army officer”) implies a prediction (“he will make a good Israeli army officer”), just as a prediction implies some judgment—without a judgment, how would you predict? In their minds, there was a distinction: A prediction is a judgment that involves uncertainty. “Adolf Hitler is an eloquent speaker” is a judgment you can’t do much about. “Adolf Hitler will become chancellor of Germany” is, at least until January 30, 1933, a prediction of an uncertain event that eventually will be proven either right or wrong. The title of their next paper was “On the Psychology of Prediction.” “In making predictions and judgments under uncertainty,” they wrote, “people do not appear to follow the calculus of chance or the statistical theory of prediction.
Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error.” Viewed in hindsight, the paper looks to have more or less started with Danny’s experience in the Israeli army. The people in charge of vetting Israeli youth hadn’t been able to predict which of them would make good officers, and the people in charge of officer training school hadn’t been able to predict who among the group they were sent would succeed in combat, or even in the routine day-to-day business of leading troops. Danny and Amos had once had a fun evening trying to predict the future occupations of their friends’ small children, and had surprised themselves by the ease, and the confidence, with which they had done it. Now they sought to test how people predicted—or, rather, to dramatize how people used what they now called the representativeness heuristic to predict. To do this, however, they needed to give them something to predict. They decided to ask their subjects to predict the future of a student, identified only by some personality traits, who would go on to graduate school. Of the then nine major courses of graduate study in the United States, which would he pursue? They began by asking their subjects to estimate the percentage of students in each course of study. Here were their average guesses:

Business: 15 percent
Computer Science: 7 percent
Engineering: 9 percent
Humanities and Education: 20 percent
Law: 9 percent
Library Science: 3 percent
Medicine: 8 percent
Physical and Life Sciences: 12 percent
Social Science and Social Work: 17 percent

For anyone trying to predict which area of study any given person was in, those percentages should serve as a base rate.
That is, if you knew nothing at all about a particular student, but knew that 15 percent of all graduate students were pursuing degrees in business administration, and were asked to predict the likelihood that the student in question was in business school, you should guess “15 percent.” Here was a useful way of thinking about base rates: They were what you would predict if you had no information at all. Now Danny and Amos sought to dramatize what happened when you gave people some information. But what kind of information? Danny spent a day inside the Oregon Research Institute stewing over the question—and became so engrossed by his task that he stayed up all night creating what at the time seemed like the stereotype of a graduate student in computer science. He named him “Tom W.”

Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.

They would ask one group of subjects—they called it the “similarity” group—to estimate how “similar” Tom was to the graduate students in each of the nine fields. That was simply to determine which field of study was most “representative” of Tom W. Then they would hand a second group—what they called the “prediction” group—this additional information:

The preceding personality sketch of Tom W. was written during Tom’s senior year in high school by a psychologist, on the basis of projective tests. Tom W. is currently a graduate student. Please rank the following nine fields of graduate specialization in order of the likelihood that Tom W.
is now a graduate student in each of these fields. They would not only give their subjects the sketch but inform them that it was a far from reliable description of Tom W. That it had been written by a psychologist, for a start; they would further tell subjects that the assessment had been made years earlier. What Amos and Danny suspected—because they had tested it first on themselves—is that people would essentially leap from the similarity judgment (“that guy sounds like a computer scientist!”) to some prediction (“that guy must be a computer scientist!”) and ignore both the base rate (only 7 percent of all graduate students were computer scientists) and the dubious reliability of the character sketch. The first person to arrive for work on the morning Danny finished his sketch was an Oregon researcher named Robyn Dawes. Dawes was trained in statistics and legendary for the rigor of his mind. Danny handed him the sketch of Tom W. “He read it over and he had a sly smile, as if he had figured it out,” said Danny. “And he said, ‘Computer scientist!’ After that I wasn’t worried about how the Oregon students would fare.” The Oregon students presented with the problem simply ignored all objective data and went with their gut sense, and predicted with great certainty that Tom W. was a computer scientist. Having established that people would allow a stereotype to warp their judgment, Amos and Danny then wondered: If people are willing to make irrational predictions based on that sort of information, what kind of predictions might they make if we give them totally irrelevant information? As they played with this idea—they might increase people’s confidence in their predictions by giving them any information, however useless—the laughter to be heard from the other side of the closed door must have grown only more raucous. In the end, Danny created another character. This one he named “****”: **** is a 30 year old man. He is married with no children. 
A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues. Then they ran another experiment. It was a version of the book bag and poker chips experiment that Amos and Danny had argued about in Danny’s seminar at Hebrew University. They told their subjects that they had picked a person from a pool of 100 people, 70 of whom were engineers and 30 of whom were lawyers. Then they asked them: What is the likelihood that the selected person is a lawyer? The subjects correctly judged it to be 30 percent. And if you told them that you were doing the same thing, but from a pool that had 70 lawyers in it and 30 engineers, they said, correctly, that there was a 70 percent chance the person you’d plucked from it was a lawyer. But if you told them you had picked not just some nameless person but a guy named ****, and read them Danny’s description of ****—which contained no information whatsoever to help you guess what **** did for a living—they guessed there was an equal chance that **** was a lawyer or an engineer, no matter which pool he had emerged from. “Evidently, people respond differently when given no specific evidence and when given worthless evidence,” wrote Danny and Amos. “When no specific evidence is given, the prior probabilities are properly utilized; when worthless specific evidence is given, prior probabilities are ignored.”* There was much more to “On the Psychology of Prediction”—for instance, they showed that the very factors that caused people to become more confident in their predictions also led those predictions to be less accurate. And in the end it returned to the problem that had interested Danny since he had first signed on to help the Israeli army rethink how it selected and trained incoming recruits: The instructors in a flight school adopted a policy of consistent positive reinforcement recommended by psychologists. 
They verbally reinforced each successful execution of a flight maneuver. After some experience with this training approach, the instructors claimed that contrary to psychological doctrine, high praise for good execution of complex maneuvers typically results in a decrement of performance on the next try. What should the psychologist say in response? The subjects to whom they posed this question offered all sorts of advice. They surmised that the instructors’ praise didn’t work because it led the pilots to become overconfident. They suggested that the instructors didn’t know what they were talking about. No one saw what Danny saw: that the pilots would have tended to do better after an especially poor maneuver, or worse after an especially great one, if no one had said anything at all. Man’s inability to see the power of regression to the mean leaves him blind to the nature of the world around him. We are exposed to a lifetime schedule in which we are most often rewarded for punishing others, and punished for rewarding. When they wrote their first papers, Danny and Amos had no particular audience in mind. Their readers would be the handful of academics who happened to subscribe to the highly specialized psychology trade journals in which they published. By the summer of 1972, they had spent the better part of three years uncovering the ways in which people judged and predicted—but the examples that they had used to illustrate their ideas were all drawn directly from psychology, or from the strange, artificial-seeming tests that they had given high school and college students. Yet they were certain that their insights applied anywhere in the world that people were judging probabilities and making decisions. They sensed that they needed to find a broader audience. 
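The flight-instructor anecdote above is a textbook case of regression to the mean, and it can be made visible with a small simulation (my own illustration, not from the text): give every "pilot" the same constant skill plus independent luck, provide no feedback at all, and the maneuver after an unusually bad one still improves on average.

```python
import random

random.seed(0)
# Every maneuver is the same constant skill plus independent luck.
flights = [random.gauss(0, 1) for _ in range(100_000)]

# Pair each maneuver with the next one; no feedback is ever given.
after_bad  = [nxt for cur, nxt in zip(flights, flights[1:]) if cur < -1]
after_good = [nxt for cur, nxt in zip(flights, flights[1:]) if cur > 1]

# With no praise and no criticism, performance still "improves" after a
# bad maneuver and "worsens" after a good one: regression to the mean.
print(sum(after_bad) / len(after_bad))    # near 0, far better than -1
print(sum(after_good) / len(after_good))  # near 0, far worse than +1
```

The instructors' criticism and praise had nothing to do with it; extreme performances are simply followed, on average, by more typical ones.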
“The next phase of the project will be devoted primarily to the extension and application of this work to other high-level professional activities, e.g., economic planning, technological forecasting, political decision making, medical diagnosis, and the evaluation of legal evidence,” they wrote in a research proposal. They hoped, they wrote, that the decisions made by experts in these fields could be “significantly improved by making these experts aware of their own biases, and by the development of methods to reduce and counteract the sources of bias in judgment.” They wanted to turn the real world into a laboratory. It was no longer just students who would be their lab rats but also doctors and judges and politicians. The question was: How to do it? They couldn’t help but sense, during their year in Eugene, a growing interest in their work. “That was the year it was really clear we were onto something,” recalled Danny. “People started treating us with respect.” Irv Biederman, then a visiting associate professor of psychology at Stanford University, heard Danny give a talk about heuristics and biases on the Stanford campus in early 1972. “I remember I came home from the talk and told my wife, ‘This is going to win a Nobel Prize in economics,’” recalled Biederman. “I was so absolutely convinced. This was a psychological theory about economic man. I thought, What could be better? Here is why you get all these irrationalities and errors. They come from the inner workings of the human mind.” Biederman had been friends with Amos at the University of Michigan and was now a member of the faculty at the State University of New York at Buffalo. The Amos he knew was consumed by possibly important but probably insolvable and certainly obscure problems about measurement. “I wouldn’t have invited Amos to Buffalo to talk about that,” he said—as no one would have understood it or cared about it. But this new work Amos was apparently doing with Danny Kahneman was breathtaking. 
It confirmed Biederman’s sense that “most advances in science come not from eureka moments but from ‘hmmm, that’s funny.’” He persuaded Amos to pass through Buffalo in the summer of 1972, on his way from Oregon to Israel. Over the course of a week, Amos gave five different talks about his work with Danny, each aimed at a different group of academics. Each time, the room was jammed—and fifteen years later, in 1987, when Biederman left Buffalo for the University of Minnesota, people were still talking about Amos’s talks. Amos devoted talks to each of the heuristics he and Danny had discovered, and another to prediction. But the talk that lingered in Biederman’s mind was the fifth and final one. “Historical Interpretation: Judgment Under Uncertainty,” Amos had called it. With a flick of the wrist, he showed a roomful of professional historians just how much of human experience could be reexamined in a fresh, new way, if seen through the lens he had created with Danny. In the course of our personal and professional lives, we often run into situations that appear puzzling at first blush. We cannot see for the life of us why Mr. X acted in a particular way, we cannot understand how the experimental results came out the way they did, etc. Typically, however, within a very short time we come up with an explanation, a hypothesis, or an interpretation of the facts that renders them understandable, coherent, or natural. The same phenomenon is observed in perception. People are very good at detecting patterns and trends even in random data. In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way. Amos was polite about it. 
He did not say, as he often said, “It is amazing how dull history books are, given how much of what’s in them must be invented.” What he did say was perhaps even more shocking to his audience: Like other human beings, historians were prone to the cognitive biases that he and Danny had described. “Historical judgment,” he said, was “part of a broader class of processes involving intuitive interpretation of data.” Historical judgments were subject to bias. As an example, Amos talked about research then being conducted by one of his graduate students at Hebrew University, Baruch Fischhoff. When Richard Nixon announced his surprising intention to visit China and Russia, Fischhoff asked people to assign odds to a list of possible outcomes—say, that Nixon would meet Chairman Mao at least once, that the United States and the Soviet Union would create a joint space program, that a group of Soviet Jews would be arrested for attempting to speak with Nixon, and so on. After the trip, Fischhoff went back and asked the same people to recall the odds they had assigned to each outcome. Their memories of the odds they had assigned to various outcomes were badly distorted. They all believed that they had assigned higher probabilities to what happened than they actually had. They greatly overestimated the odds that they had assigned to what had actually happened. That is, once they knew the outcome, they thought it had been far more predictable than they had found it to be before, when they had tried to predict it. 
A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon “hindsight bias.”† In his talk to the historians, Amos described their occupational hazard: the tendency to take whatever facts they had observed (neglecting the many facts that they did not or could not observe) and make them fit neatly into a confident-sounding story: All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This “ability” to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is, and that we are less bright than we actually might be. For if we can explain tomorrow what we cannot predict today, without any added information except the knowledge of the actual outcome, then this outcome must have been determined in advance and we should have been able to predict it. The fact that we couldn’t is taken as an indication of our limited intelligence rather than of the uncertainty that is in the world. All too often, we feel like kicking ourselves for failing to foresee that which later appears inevitable. For all we know, the handwriting might have been on the wall all along. The question is: was the ink visible? It wasn’t just sports announcers and political pundits who radically revised their narratives, or shifted focus, so that their stories seemed to fit whatever had just happened in a game or an election. Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this. 
“Creeping determinism,” he called it—and jotted in his notes one of its many costs: “He who sees the past as surprise-free is bound to have a future full of surprises.” A false view of what has happened in the past makes it harder to see what might occur in the future. The historians in his audience of course prided themselves on their “ability” to construct, out of fragments of some past reality, explanatory narratives of events which made them seem, in retrospect, almost predictable. The only question that remained, once the historian had explained how and why some event had occurred, was why the people in his narrative had not seen what the historian could now see. “All the historians attended Amos’s talk,” recalled Biederman, “and they left ashen-faced.” After he had heard Amos explain how the mind arranged historical facts in ways that made past events feel a lot less uncertain, and a lot more predictable, than they actually were, Biederman felt certain that his and Danny’s work could infect any discipline in which experts were required to judge the odds of an uncertain situation—which is to say, great swaths of human activity. And yet the ideas that Danny and Amos were generating were still very much confined to academia. Some professors, most of them professors of psychology, had heard of them. And no one else. It was not at all clear how two guys working in relative obscurity at Hebrew University could spread the word of their discoveries to people outside their field. In the early months of 1973, after their return to Israel from Eugene, Amos and Danny set to work on a long article summarizing their findings. They wanted to gather in one place the chief insights of the four papers they had already written and allow readers to decide what to make of them. “We decided to present the work for what it was: a psychological investigation,” said Danny. 
“We’d leave the big implications to others.” He and Amos both agreed that the journal Science offered them the best hope of reaching people in fields outside of psychology. Their article was less written than it was constructed. (“A sentence was a good day,” said Danny.)

As they were building it, they stumbled upon what they saw as a clear path for their ideas to enter everyday human life. They had been gripped by “The Decision to Seed Hurricanes,” a paper coauthored by Stanford professor Ron Howard. Howard was one of the founders of a new field called decision analysis. Its idea was to force decision makers to assign probabilities to various outcomes: to make explicit the thinking that went into their decisions before they made them. How to deal with killer hurricanes was one example of a problem that policy makers might use decision analysts to help address.

Hurricane Camille had just wiped out a large tract of the Mississippi Gulf Coast and obviously might have done a lot more damage—say, if it had hit New Orleans or Miami. Meteorologists thought they now had a technique—dumping silver iodide into the storm—to reduce the force of a hurricane, and possibly even alter its path. Seeding a hurricane wasn’t a simple matter, however. The moment the government intervened in the storm, it was implicated in whatever damage that storm inflicted. The public, and the courts of law, were unlikely to give the government credit for what had not happened, for who could say with certainty what would have happened if the government had not intervened? Instead the society would hold its leaders responsible for whatever damage the storm inflicted, wherever it hit.

Howard’s paper explored how the government might decide what to do—and that involved estimating the odds of various outcomes. But the way the decision analysts elicited probabilities from the minds of the hurricane experts was, in Danny and Amos’s eyes, bizarre.
The analysts would present the hurricane seeding experts inside government with a wheel of fortune on which, say, a third of the slots were painted red. They’d ask: “Would you rather bet on the red sector of this wheel or bet that the seeded hurricane will cause more than $30 billion of property damage?” If the hurricane authority said he would rather bet on red, he was saying that he thought the chance the hurricane would cause more than $30 billion of property damage was less than 33 percent. And so the decision analysts would show him another wheel, with, say, 20 percent of the slots painted red. They did this until the percentage of red slots matched up with the authority’s sense of the odds that the hurricane would cause more than $30 billion of property damage. They just assumed that the hurricane seeding experts had an ability to correctly assess the odds of highly uncertain events.

Danny and Amos had already shown that people’s ability to judge probabilities was distorted by various mechanisms used by the mind when it faced uncertainty. They believed that they could use their new understanding of the systematic errors in people’s judgment to improve that judgment—and, thus, to improve people’s decision making. For instance, any person’s assessment of probabilities of a killer storm making landfall in 1973 was bound to be warped by the ease with which they recalled the fresh experience of Hurricane Camille. But how, exactly, was that judgment warped?

“We thought decision analysis would conquer the world and we would help,” said Danny. The leading decision analysts were clustered around Ron Howard in Menlo Park, California, at a place called the Stanford Research Institute. In the fall of 1973 Danny and Amos flew to meet with them. But before they could figure out exactly how they were going to bring their ideas about uncertainty into the real world, uncertainty intervened.
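The wheel-of-fortune procedure described above amounts to a binary search for the expert's indifference point: shrink or grow the red sector until the expert no longer prefers one bet over the other. A minimal sketch in Python, assuming a stylized expert who simply prefers whichever bet carries the higher subjective probability of paying off (the function names and the toy expert are illustrative, not from the source):

```python
def elicit_probability(prefers_wheel, lo=0.0, hi=1.0, tol=1e-3):
    """Binary-search the red fraction until the expert is indifferent.

    prefers_wheel(p) -> True if the expert would rather bet on a wheel
    whose red sector covers fraction p of the slots than bet on the
    uncertain event (e.g. "damage exceeds $30 billion").
    """
    while hi - lo > tol:
        p = (lo + hi) / 2
        if prefers_wheel(p):
            hi = p   # wheel preferred: the event's odds must be below p
        else:
            lo = p   # event preferred: the event's odds must be above p
    return (lo + hi) / 2

# Toy "expert" whose true subjective probability of the event is 0.27:
# he prefers the wheel exactly when its red fraction exceeds 0.27.
expert = lambda p: p > 0.27

estimate = elicit_probability(expert)  # converges to roughly 0.27
```

Danny and Amos's objection was not to this search procedure itself but to its premise: that the preferences being probed are coherent, rather than distorted by the heuristics the mind uses under uncertainty.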
On October 6, the armies of Egypt and Syria—with troops and planes and money from as many as nine other Arab countries—launched an attack on Israel. Israeli intelligence analysts had dramatically misjudged the odds of an attack of any sort, much less a coordinated one. The army was caught off guard. On the Golan Heights, a hundred or so Israeli tanks faced fourteen hundred Syrian tanks. Along the Suez Canal, a garrison of five hundred Israeli troops and three tanks were quickly overrun by two thousand Egyptian tanks and one hundred thousand Egyptian soldiers. On a cool, cloudless, perfect morning in Menlo Park, Amos and Danny heard the news of the shocking Israeli losses. They raced to the airport for the first flight back home, so that they might fight in yet another war.

* By the time they were finished with the project, they had dreamed up an array of hysterically bland characters for people to evaluate and judge to be more likely lawyers or engineers. Paul, for example. “Paul is 36 years old, married, with 2 children. He is relaxed and comfortable with himself and with others. An excellent member of a team, he is constructive and not opinionated. He enjoys all aspects of his work, and in particular, the satisfaction of finding clean solutions to complex problems.”

† In a brief memoir, Fischhoff later recalled how his idea had first come to him in Danny’s seminar: “We read Paul Meehl’s (1973) ‘Why I Do Not Attend Case Conferences.’ One of his many insights concerned clinicians’ exaggerated feeling of having known all along how cases were going to turn out.” The conversation about Meehl’s idea led Fischhoff to think about the way Israelis were always pretending to have foreseen essentially unforeseeable political events. Fischhoff thought, “If we’re so prescient, why aren’t we running the world?” Then he set out to see exactly how prescient people who thought themselves prescient actually were.

 

Open command on Word version 15.30 locks up program


I've installed Microsoft Office version 15.30 on my MacBook Pro running Sierra. I can launch all of the programs (Excel, Word, PowerPoint) and they start fine. But when I go to the File->Open menu and click on Open, I get the spinning rainbow wheel of death and I have to force quit the program, as it has become unresponsive. This happens every time.

However, if I go to my drive and double-click the file I want to work on, the program launches and opens the file with no problems. The hang also doesn't happen when I open the program directly and choose New or Open Recent.

My Software Update says that 15.30 is the latest version. So what can I do to resolve this issue?

Outlook sync

Having issues: when I add an event to my Outlook calendar (Office 365), it does not show up on my phone or tablet. This was working fine until last week. Emails show up fine on all devices, but the calendar does not sync. Thanks.

Error 0x80073cf9

Split from this thread.

Hi, I am having the exact same issue; here is the link to the files:

https://1drv.ms/f/s!Apijuz67oxyvjpw97GM3B5dmKvHo4g
