          

Senior Financial Analyst   


DC-Washington DC, job summary: Large non-profit located in the heart of NW, DC is seeking a Senior Financial Analyst. The ideal candidate will possess 7+ years of finance experience that includes interfacing with organizational leadership and working with many groups within an organization. Work experience includes providing financial analytical data to business groups and providing advice to such groups for business…

          

Mental health website struggles after royals feature in advert - BBC News   


  1. Mental health website struggles after royals feature in advert  BBC News
  2. We are in the midst of a mental health crisis – advice about jogging and self-care is not enough  The Guardian
  3. Every Mind Matters: Prince Harry and Prince William praised for hard-hitting campaign  Express
  4. Rush of traffic crashes mental health website after TV ad fronted by royals  ITV News
  5. Meghan Markle and Prince Harry join William and Kate as royals appear on first ever TV advert tonight for NHS  The Sun

          

Senior Policy Analyst - Yukon Government - Whitehorse, YT   


Marni Delaurier, HR Consultant at (867) 393-6275 or marni.delaurier@gov.yk.ca. Considerable experience planning, leading and providing direction, advice and… $86,950 - $100,521 a year
From Yukon Government - Fri, 04 Oct 2019 09:03:10 GMT - View all Whitehorse, YT jobs

          

[AN #67]: Creating environments in which to study inner alignment failures   


Published on October 7, 2019 5:10 PM UTC

Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.

Audio version here (may not be up yet).

Highlights

Towards an empirical investigation of inner alignment (Evan Hubinger) (summarized by Rohin): Last week, we saw that the worrying thing about mesa optimizers (AN #58) was that they could have robust capabilities, but not robust alignment (AN #66). This leads to an inner alignment failure: the agent will take competent, highly-optimized actions in pursuit of a goal that you didn't want.

This post proposes that we empirically investigate what kinds of mesa objective functions are likely to be learned, by trying to construct mesa optimizers. To do this, we need two ingredients: first, an environment in which there are many distinct proxies that lead to good behavior on the training environment, and second, an architecture that will actually learn a model that is itself performing search, so that it has robust capabilities. Then, the experiment is simple: train the model using deep RL, and investigate its behavior off distribution to distinguish between the various possible proxy reward functions it could have learned. (The next summary has an example.)
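
As a rough illustration, here is a minimal sketch of how such an experiment loop could be organized. This is not from the post; train_agent, rollout, and the proxy reward functions are hypothetical placeholders supplied by the experimenter.

    def distinguish_proxies(train_env, probe_env, proxies, train_agent, rollout,
                            n_probe_episodes=100):
        """proxies: dict mapping proxy name -> reward_fn(state, action)."""
        # Train with deep RL on the distribution where all proxies agree.
        agent = train_agent(train_env)
        # Probe off-distribution, where the proxies come apart.
        scores = {name: 0.0 for name in proxies}
        for _ in range(n_probe_episodes):
            trajectory = rollout(agent, probe_env)   # list of (state, action) pairs
            for name, proxy_reward in proxies.items():
                scores[name] += sum(proxy_reward(s, a) for s, a in trajectory)
        # The proxy with the highest off-distribution return is the best guess
        # for the objective the learned model is actually pursuing.
        return max(scores, key=scores.get)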

Some desirable properties:

- The proxies should not be identical on the training distribution.

- There shouldn't be too many reasonable proxies, since then it would be hard to identify which proxy was learned by the neural net.

- Proxies should differ on "interesting" properties, such as how hard the proxy is to compute from the model's observations, so that we can figure out how a particular property influences whether the proxy will be learned by the model.

Rohin's opinion: I'm very excited by this general line of research: in fact, I developed my own proposal along the same lines. As a result, I have a lot of opinions, many of which I wrote up in this comment, but I'll give a summary here.

I agree pretty strongly with the high level details (focusing on robust capabilities without robust alignment, identifying multiple proxies as the key issue, and focusing on environment design and architecture choice as the hard problems). I do differ in the details though. I'm more interested in producing a compelling example of mesa optimization, and so I care about having a sufficiently complex environment, like Minecraft. I also don't expect there to be a "part" of the neural net that is actually computing the mesa objective; I simply expect that the heuristics learned by the neural net will be consistent with optimization of some proxy reward function. As a result, I'm less excited about studying properties like "how hard is the mesa objective to compute".

A simple environment for showing mesa misalignment (Matthew Barnett) (summarized by Rohin): This post proposes a concrete environment in which we can run the experiments suggested in the previous post. The environment is a maze which contains keys and chests. The true objective is to open chests, but opening a chest requires you to already have a key (and uses up the key). During training, there will be far fewer keys than chests, and so we would expect the learned model to develop an "urge" to pick up keys. If we then test it in mazes with lots of keys, it would go around competently picking up keys while potentially ignoring chests, which would count as a failure of inner alignment. This predicted behavior is similar to how humans developed an "urge" for food because food was scarce in the ancestral environment, even though now food is abundant.
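
To make the setup concrete, here is a minimal sketch of the keys-and-chests idea, simplified to a one-dimensional walk rather than a full maze; the class and its parameters are assumptions, not Barnett's implementation. During training key_prob is low, so "collect keys" and "open chests" coincide; raising key_prob at test time makes them come apart.

    import random

    class KeysAndChests:
        def __init__(self, size=20, key_prob=0.1, chest_prob=0.4, seed=None):
            rng = random.Random(seed)
            # Each cell holds a key, a chest, or nothing; the agent walks left to right.
            self.cells = ['key' if rng.random() < key_prob
                          else 'chest' if rng.random() < chest_prob
                          else None
                          for _ in range(size)]
            self.pos, self.keys, self.chests_opened = 0, 0, 0

        def step(self, pick_up: bool):
            cell = self.cells[self.pos]
            if pick_up and cell == 'key':
                self.keys += 1                       # the proxy: collecting keys
            elif pick_up and cell == 'chest' and self.keys > 0:
                self.keys -= 1
                self.chests_opened += 1              # the true objective: opening chests
            self.pos += 1
            done = self.pos >= len(self.cells)
            return self.pos, self.chests_opened, done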

Rohin's opinion: While I would prefer a more complex environment to make a more compelling case that this will be a problem in realistic environments, I do think that this would be a great environment to start testing in. In general, I like the pattern of "the true objective is Y, but during training you need to do X to get Y": it seems particularly likely that even current systems would learn to competently pursue X in such a situation.

Technical AI alignment

Iterated amplification

Machine Learning Projects on IDA (Owain Evans et al) (summarized by Nicholas): This document describes three suggested projects building on Iterated Distillation and Amplification (IDA), a method for training ML systems while preserving alignment. The first project is to apply IDA to solving mathematical problems. The second is to apply IDA to neural program interpretation, the problem of replicating the internal behavior of other programs as well as their outputs. The third is to experiment with adaptive computation where computational power is directed to where it is most useful. For each project, they also include motivation, directions, and related work.

Nicholas's opinion: Figuring out an interesting and useful project to work on is one of the major challenges of any research project, and it may require a distinct skill set from the project's implementation. As a result, I appreciate the authors enabling other researchers to jump straight into solving the problems. Given how detailed the motivation, instructions, and related work are, this document strikes me as an excellent way for someone to begin her first research project on IDA or AI safety more broadly. Additionally, while there are many public explanations of IDA, I found this to be one of the most clear and complete descriptions I have read.

Read more: Alignment Forum summary post

List of resolved confusions about IDA (Wei Dai) (summarized by Rohin): This is a useful post clarifying some of the terms around IDA. I'm not summarizing it because each point is already quite short.

Mesa optimization

Concrete experiments in inner alignment (Evan Hubinger) (summarized by Matthew): While the highlighted posts above go into detail about one particular experiment that could clarify the inner alignment problem, this post briefly lays out several experiments that could be useful. One example experiment is giving an RL trained agent direct access to its reward as part of its observation. During testing, we could try putting the model in a confusing situation by altering its observed reward so that it doesn't match the real one. The hope is that we could gain insight into when RL trained agents internally represent 'goals' and how they relate to the environment, if they do at all. You'll have to read the post to see all the experiments.
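
A minimal sketch of the reward-in-observation experiment mentioned above (the wrapper and its names are hypothetical, not from the post): during training the displayed reward matches the true one, and at test time reward_hack can make them disagree, so we can see which signal the agent's behaviour actually tracks.

    class RewardInObservation:
        """Wraps any environment with gym-style reset()/step() methods."""

        def __init__(self, env, reward_hack=None):
            self.env = env
            # Optional map from true reward to the reward shown to the agent:
            # the identity during training, something misleading at test time.
            self.reward_hack = reward_hack or (lambda r: r)

        def reset(self):
            return (self.env.reset(), 0.0)           # observation plus displayed reward

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            displayed = self.reward_hack(reward)
            return (obs, displayed), reward, done, info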

Matthew's opinion: I'm currently convinced that doing empirical work right now will help us understand mesa optimization, and this was one of the posts that led me to that conclusion. I'm still a bit skeptical that current techniques are sufficient to demonstrate the type of powerful learned search algorithms which could characterize the worst outcomes for failures in inner alignment. Regardless, I think at this point classifying failure modes is quite beneficial, and conducting tests like the ones in this post will make that a lot easier.

Learning human intent

Fine-Tuning GPT-2 from Human Preferences (Daniel M. Ziegler et al) (summarized by Sudhanshu): This blog post and its associated paper describe the results of several text generation/continuation experiments, where human feedback on initial/older samples was used as a reinforcement learning reward signal to fine-tune the base 774-million-parameter GPT-2 language model (AN #46). The key motivation here was to understand whether interactions with humans can help algorithms better learn and adapt to human preferences in natural language generation tasks.
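
Schematically, the fine-tuning procedure looks something like the sketch below. The helper names are hypothetical and this is not OpenAI's code, just the general pattern of fitting a reward model to human comparisons and then optimizing the policy against it with RL.

    def finetune_from_human_preferences(policy, prompts, sample, ask_human,
                                        train_reward_model, ppo_step, n_rounds=5):
        comparisons = []
        for _ in range(n_rounds):
            # 1. Sample pairs of continuations and collect human judgments.
            for prompt in prompts:
                a, b = sample(policy, prompt), sample(policy, prompt)
                comparisons.append((prompt, a, b, ask_human(prompt, a, b)))
            # 2. Fit a reward model to all comparisons gathered so far.
            reward_model = train_reward_model(comparisons)
            # 3. Fine-tune the policy with RL (e.g. PPO) against the learned reward.
            policy = ppo_step(policy, prompts, reward_model)
        return policy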

They report mixed results. For the tasks of continuing text with positive sentiment or physically descriptive language, they report improved performance above the baseline (as assessed by external examiners) after fine-tuning on only 5,000 human judgments of samples generated from the base model. The summarization task required 60,000 samples of online human feedback to perform similarly (as assessed by humans) to lead-3, a simple baseline that returns the first three sentences as the summary.

Some of the lessons learned while performing this research include 1) the need for better, less ambiguous tasks and labelling protocols for sourcing higher quality annotations, and 2) a reminder that "bugs can optimize for bad behaviour", as a sign error propagated through the training process to generate "not gibberish but maximally bad output". The work concludes on the note that it is a step towards scalable AI alignment methods such as debate and amplification.

Sudhanshu's opinion: It is good to see research on mainstream NLProc/ML tasks that includes discussions on challenges, failure modes and relevance to the broader motivating goals of AI research.

The work opens up interesting avenues within OpenAI's alignment agenda, for example learning a diversity of preferences (A OR B), or a hierarchy of preferences (A AND B) sequentially without catastrophic forgetting.

In order to scale, we would want to generate automated labelers through semi-supervised reinforcement learning, to derive the most gains from every piece of human input. The robustness of this needs further empirical and conceptual investigation before we can be confident that such a system can work to form a hierarchy of learners, e.g. in amplification.

Rohin's opinion: One thing I particularly like here is that the evaluation is done by humans. This seems significantly more robust as an evaluation metric than any automated system we could come up with, and I hope that more people use human evaluation in the future.

Read more: Paper: Fine-Tuning Language Models from Human Preferences

Preventing bad behavior

Robust Change Captioning (Dong Huk Park et al) (summarized by Dan H): Safe exploration requires that agents avoid disrupting their environment. Previous work, such as Krakovna et al. (AN #10), penalizes an agent's needless side effects on the environment. For such techniques to work in the real world, agents must also be able to estimate environment disruptions, side effects, and changes while not being distracted by peripheral, irrelevant changes. This paper proposes a dataset to further the study of "Change Captioning," in which a machine learning system describes scene changes in natural language. That is, given before and after images, a system describes the salient change in the scene. Work on systems that can estimate changes is likely to advance safe exploration.

Interpretability

Learning Representations by Humans, for Humans (Sophie Hilgard, Nir Rosenfeld et al) (summarized by Asya): Historically, interpretability approaches have involved machines acting as experts, making decisions and generating explanations for their decisions. This paper takes a slightly different approach, instead using machines as advisers who are trying to give the best possible advice to humans, the final decision makers. Models are given input data and trained to generate visual representations based on the data that cause humans to take the best possible actions. In the main experiment in this paper, humans are tasked with deciding whether to approve or deny loans based on details of a loan application. Advising networks generate realistic-looking faces whose expressions represent multivariate information that's important for the loan decision. Humans do better when provided the facial expression 'advice', and furthermore can justify their decisions with analogical reasoning based on the faces, e.g. "x will likely be repaid because x is similar to x', and x' was repaid".
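
As a rough sketch of how such an adviser might be trained (an assumed structure with made-up dimensions, not the paper's code), the key idea is that gradients flow through a learned model of the human, so the adviser's representation is optimized for the decisions it induces; the human model itself would presumably be refit periodically to fresh human responses.

    import torch
    import torch.nn as nn

    adviser = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
    human_model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(adviser.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def adviser_step(features, correct_action):
        """features: (batch, 16) loan details; correct_action: (batch,) 0/1 labels."""
        advice = adviser(features)                # representation shown to the human
        predicted_decision = human_model(advice)  # learned model of human behaviour
        loss = loss_fn(predicted_decision, correct_action)
        optimizer.zero_grad()
        loss.backward()                           # gradients pass through the human model
        optimizer.step()
        return loss.item()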

Asya's opinion: This seems to me like a very plausible story for how AI systems get incorporated into human decision-making in the near-term future. I do worry that further down the line, AI systems where AIs are merely advising will get outcompeted by AI systems doing the entire decision-making process. From an interpretability perspective, it also seems to me like having 'advice' that represents complicated multivariate data still hides a lot of reasoning that could be important if we were worried about misaligned AI. I like that the paper emphasizes having humans-in-the-loop during training and presents an effective mechanism for doing gradient descent with human choices.

Rohin's opinion: One interesting thing about this paper is its similarity to Deep RL from Human Preferences: it also trains a human model, that is improved over time by collecting more data from real humans. The difference is that DRLHP produces a model of the human reward function, whereas the model in this paper predicts human actions.

Other progress in AI

Reinforcement learning

The Principle of Unchanged Optimality in Reinforcement Learning Generalization (Alex Irpan and Xingyou Song) (summarized by Flo): In image recognition tasks, there is usually only one label per image, such that there exists an optimal solution that maps every image to the correct label. Good generalization of a model can therefore straightforwardly be defined as a good approximation of the image-to-label mapping for previously unseen data.

In reinforcement learning, our models usually don't map environments to the optimal policy, but states in a given environment to the corresponding optimal action. The optimal action in a state can depend on the environment. This means that there is a tradeoff regarding the performance of a model in different environments.

The authors suggest the principle of unchanged optimality: in a benchmark for generalization in reinforcement learning, there should be at least one policy that is optimal for all environments in the train and test sets. With this in place, generalization does not conflict with good performance in individual environments. If the principle does not initially hold for a given set of environments, we can change that by giving the agent more information. For example, the agent could receive a parameter that indicates which environment it is currently interacting with.
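
For instance, a minimal sketch of that fix (hypothetical names, not from the paper): append a one-hot environment identifier to every observation, so that a single policy can in principle be optimal across all training and test environments.

    class EnvironmentIDWrapper:
        """Wraps any env with gym-style reset()/step(); appends a one-hot env id."""

        def __init__(self, env, env_id, n_envs):
            self.env, self.env_id, self.n_envs = env, env_id, n_envs

        def _augment(self, obs):
            one_hot = [1.0 if i == self.env_id else 0.0 for i in range(self.n_envs)]
            return (obs, one_hot)                 # observation plus which-environment flag

        def reset(self):
            return self._augment(self.env.reset())

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            return self._augment(obs), reward, done, info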

Flo's opinion: I am a bit torn here: On one hand, the principle makes it plausible for us to find the globally optimal solution by solving our task on a finite set of training environments. This way the generalization problem feels more well-defined and amenable to theoretical analysis, which seems useful for advancing our understanding of reinforcement learning.

On the other hand, I don't expect the principle to hold for most real-world problems. For example, in interactions with other adapting agents, performance will depend on those agents' policies, which can be hard to infer and which change dynamically. This means that the principle of unchanged optimality won't hold without precise information about the other agents' policies, and this information can be very difficult to obtain.

More generally, between this and some of the criticism of the AI safety gridworlds that framed them as an ill-defined benchmark, I am a bit worried that too much focus on very "clean" benchmarks might divert attention from issues associated with the messiness of the real world. I would have liked to see a more conditional conclusion for the paper, instead of a general principle.



          

Sex With Hypnosis   


Sexual dysfunction caused by decreased circulation, hormonal imbalance, depression, or anxiety may be reduced with alternative therapies, such as hypnotherapy. Sexual dysfunctions recognized by professional therapists include hyposexuality (or inhibited sexual excitement), in which sexual arousal can be achieved only with great difficulty. Sexual dysfunction among women can also be caused by physical symptoms such as vaginal dryness or thrush, pain or severe pre-menstrual syndrome (PMS). Sexual dysfunction can occur at any age and can have both psychological and organic causes.

Sexual dysfunction is common among Americans: a study co-authored by researchers at the University of Chicago and the Robert Wood Johnson Medical School shows that about 40 percent of women and 30 percent of men experience sexual dysfunction, a rate much higher than previously believed.

Erectile dysfunction, or impotence, is the inability to achieve or maintain an erection sufficient for satisfactory sexual performance. Nearly 9 out of 10 men with erectile dysfunction do not tell their GPs about the problem and 6 out of 10 do not even tell their partners.

Sexual dysfunction among men can often be a result of decreased testosterone levels (hypogonadism), which can also lead to fatigue. Sexual dysfunction in women is characterized by a lack of desire, arousal, or orgasm. Sexual dysfunction is commonly reported with seizure disorders, and many anticonvulsant drugs affect levels of sex hormones. When we say sexual dysfunction, we generally refer to a problem during any of the phases of the sexual response cycle of an individual. Most of the time we hear talks about male sexual dysfunction; female sexual dysfunction gets largely ignored. Approximately 43% of women suffer from some form of sexual dysfunction. The best treatment for sexual dysfunction in women may simply be exercise, counselling, and vaginal lubrication products which can act as more natural alternatives to Viagra and improve your overall health and wellness.

It has been found that hypnosis and hypnotherapy are extremely powerful tools to help with sexual function disorders.

www.sexwithhypnosis.com

General disclaimer: This page is designed for information purposes only and is not engaged in rendering medical advice or professional services.

          

Perspectives on Practice: Careers in Health Law   


The Health Law Students' Association invites you to an exciting panel to hear from three high-profile lawyers working in the private and public health law sectors. Marjorie Hickey (Partner, McInnes Cooper), Nancy MacCready-Williams (CEO, Doctors Nova Scotia) and Jen Feron (Legal Counsel, IWK) will share with you their own journeys, experiences and advice as they relate to working in health law. Questions from the moderator and then the audience will be followed by a mixer. Light refreshments will be served.


          

The Invisible Portfolio Killer   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

While a sudden bear market is what many investors fear the most, there's a camouflaged threat to your portfolio that can do far greater long-term damage.

A big "thank you" to all our Digest readers: today we celebrate the Digest's one-year anniversary. First and foremost, thank you for helping us make it a success. The Digest began back on Oct. 6,....

The post The Invisible Portfolio Killer appeared first on InvestorPlace.


          

How to Find Stocks Poised to Skyrocket   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Using modern technology and loads of data, Louis Navellier can identify which stocks are ready to skyrocket, and the gains can come in months, not years!

The post How to Find Stocks Poised to Skyrocket appeared first on InvestorPlace.

More From InvestorPlace

          

5G Stocks: What the $12.3 Trillion 5G Battle Means for Investors   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Invest in 5G stocks early, while this enormous technological shift is just getting underway. That’s how fortunes are made.

The post 5G Stocks: What the $12.3 Trillion 5G Battle Means for Investors appeared first on InvestorPlace.

More From InvestorPlace

          

Monday Apple Rumors: Apple Preparing for Subscription Bundle   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Monday's Apple Rumors include the launch of macOS Catalina, a larger investment coming from AAPL for Japan Display and more.

The post Monday Apple Rumors: Apple Preparing for Subscription Bundle appeared first on InvestorPlace.


          

Dow Jones Today: Pre-Trade Jitters   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Assuming those reports are accurate, the talks could stall before they really get going. President Trump has previously displayed an "all or nothing" attitude on dealing with China, so investors' sense of the matter is that the president either wants big action on trade with the world's second-largest economy or the talks could be scuttled.

The post Dow Jones Today: Pre-Trade Jitters appeared first on InvestorPlace.

More From InvestorPlace

          

Stock Market Today: Trade War Deal Coming?    


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

The trade war caused more back and forth in equities on Monday. Here's what happened in the stock market today.

The post Stock Market Today: Trade War Deal Coming?  appeared first on InvestorPlace.

More From InvestorPlace

          

Pfenex News: PFNX Stock Surges on Osteoporosis Treatment Approval   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Pfenex (PFNX) news for Monday about it getting approval for its new osteoporosis treatment has PFNX stock soaring higher.

The post Pfenex News: PFNX Stock Surges on Osteoporosis Treatment Approval appeared first on InvestorPlace.

More From InvestorPlace

          

5 Top Stock Trades for Tuesday: AAPL, GM, SRPT   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Apple, General Motors, Delta Air Lines, Sarepta and Bristol-Myers Squibb were our top stock trades to watch for Tuesday. Here's what you need to know.

The post 5 Top Stock Trades for Tuesday: AAPL, GM, SRPT appeared first on InvestorPlace.

More From InvestorPlace

          

Akcea Therapeutics News: AKCA Stock Rockets on Pfizer Licensing Deal   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Akcea Therapeutics (AKCA) stock is flying high on Monday following news of a $250 million deal with Pfizer (NYSE:PFE) for licensing rights.

The post Akcea Therapeutics News: AKCA Stock Rockets on Pfizer Licensing Deal appeared first on InvestorPlace.

More From InvestorPlace

          

PepsiCo News: Pepsi Plans to Deploy 15 Tesla Semi Trucks   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

PepsiCo news for Monday includes the company planning to switch over to Tesla's electric semi trucks as it looks to reduce carbon emissions.

The post PepsiCo News: Pepsi Plans to Deploy 15 Tesla Semi Trucks appeared first on InvestorPlace.

More From InvestorPlace

          

The Sharks Are Circling, but I’m Still Standing by Roku Stock   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Even as the competition heats up, ROKU stock could still outperform.

The post The Sharks Are Circling, but I’m Still Standing by Roku Stock appeared first on InvestorPlace.

More From InvestorPlace

          

10 Best Small Cities to Visit in the U.S. 2019   


InvestorPlace - Stock Market News, Stock Advice & Trading Tips

Conde Nast Traveler's Readers' Choice Awards survey results are out, and they reveal the best small cities to visit in the U.S.

The post 10 Best Small Cities to Visit in the U.S. 2019 appeared first on InvestorPlace.

More From InvestorPlace

          

28 Tools to Conquer the Social Media Recruiting World   


From LinkedIn to Snapchat, social networks have become ubiquitous—and recruiters have been swift in mining their potential to connect job seekers and job holders.

But too often, social recruiting devolves into spamming job openings on as many sites as possible and hoping for the best.

Sounds daunting, doesn’t it? We’re here to help: Software Advice turned to the experts to learn what recruiters are doing wrong.

Here, we highlight 28 different social media recruiting tools and platforms (including free ones) you can use to conquer the social media recruiting world.




          

Obama's New Commerce Secretary Nominee: Not So "Squeaky Clean"   


The left-leaning Seattle Weekly newspaper notes that Locke presided over a $3.2 billion tax break for Boeing while "never disclosing he paid $715,000 to -- and relied on the advice of -- Boeing's own private consultant and outside auditor." Then there's the tainted matter of Locke's "favors for his brother-in-law (who lived in the governor's mansion), including a tax break for his relative's company, personal intervention in a company dispute, and Locke's signature on a federal loan application for the company." Locke's laces ain't so straight.

The glowing profiles of Locke have largely glossed over his troubling ties to the Clinton-era Chinagate scandal. As the nation's first Chinese-American governor, Locke aggressively raised cash from ethnic constituencies around the country. Convicted campaign finance money-launderer John Huang helped grease the wheels and open doors.

In the same time period that Huang was drumming up illegal cash for Clinton-Gore at the federal level, he also organized two 1996 galas for Locke in Washington, D.C. (where Locke hobnobbed with Clinton and other Chinagate principals); three fundraisers in Los Angeles; and an extravaganza at the Universal City, Calif., Hilton in October 1996 that raised upward of $30,000. Huang also made personal contributions to Locke -- as did another Clinton-Gore funny-money figure, Indonesian business mogul Ted Sioeng and his family and political operatives.

Sioeng, whom Justice Department and intelligence officials suspected of acting on behalf of the Chinese government, illegally donated hundreds of thousands of dollars to both Democratic and Republican coffers. Bank records from congressional investigators indicated that one Sioeng associate's maximum individual contribution to Locke was illegally reimbursed by the businessman's daughter.

Checks to Locke's campaign poured in from prominent Huang and Sioeng associates, many of whom were targets of federal investigations, including: Hoyt Zia, a Commerce Department counsel, who stated in a sworn deposition that Huang had access to virtually any classified document through him; Melinda Yee, another Clinton Commerce Department official who admitted to destroying Freedom of Information Act-protected notes on a China trade mission involving Huang's former employer, the Indonesia-based Lippo Group; Praitun Kanchanalak, mother of convicted Thai influence-peddler Pauline Kanchanalak; Kent La, exclusive distributor of Sioeng's Chinese cigarettes in the United States; and Sioeng's wife and son-in-law.

Locke eventually returned a token amount of money from Huang and Kanchanalak, but not before bitterly playing the race card and accusing critics of his sloppy accounting and questionable schmoozing of stirring up anti-Asian-American sentiment. "It will make our efforts doubly hard to get Asian Americans appointed to top-level positions across the United States," Locke complained. "If they have any connection to John Huang, those individuals will face greater scrutiny and their lives will be completely opened up and examined -- perhaps more than usual."

That scrutiny (such as it was) was more than justified. On top of his Chinagate entanglements, Locke's political committee was fined the maximum amount by Washington's campaign finance watchdog for failing to disclose out-of-state New York City Chinatown donors. One of those events was held at NYC's Harmony Palace restaurant, co-owned by Chinese street gang thugs.

And then there were Locke's not-so-squeaky-clean fundraising trips to a Buddhist temple in Redmond, Wash., which netted nearly $14,000 from monks and nuns -- many of whom barely spoke English, couldn't recall donating to Locke, or were out of the country and could never be located. Of the known temple donors identified by the Locke campaign, five gave $1,000 each on July 22, 1996 -- paid in sequentially ordered cashier's checks. Two priests gave $1,000 and $1,100 respectively on Aug. 8, 1996. Three other temple adherents also gave $1,000 contributions on Aug. 8. Internal campaign records show that two other temple disciples donated $2,000 and $1,000 respectively on other dates. State campaign finance investigators failed to track down some of the donors during their probe.

But while investigating the story for the Seattle Times, I interviewed temple donor Siu Wai Wong, a bald, robed 40-year-old priest who could not remember when or by what means he had given a $1,000 contribution to Locke. He also refused to say whether he was a U.S. citizen, explaining that his "English (was) not so good." Although an inept state campaign-finance panel absolved Locke and his campaign of any wrongdoing, the extensive public record clearly shows that the Locke campaign used Buddhist monks as conduits for laundered money.

The longtime reluctance to press Locke -- who became a high-powered attorney specializing in China trade issues for international law firm Davis, Wright & Tremaine after leaving the governor's mansion -- on his reckless, ethnic-based fundraising will undoubtedly extend to the politically correct and cowed Beltway. Supporters are now touting Locke's cozy relations with the Chinese government as a primary reason he deserves the Commerce Department post. Yet another illustration of how "Hope and Change" is just another synonym for "Screw Up, Move Up."


          

Rebel Yell: Taxpayers Revolt Against Gimme-Mania   


Some wore pig noses. Others waved Old Glory and "Don't Tread on Me" flags. Their handmade signs read: "Say No to Generational Theft"; "Obama'$ Porkulu$ Wear$ Lip$tick"; and "I don't want to pay for the SwindleUs! I'm only 10 years old!" The event was peaceful, save for an unhinged city-dweller who showed his tolerance by barging onto the speakers' stage and giving a Nazi salute.

Carender, a newcomer to political activism, shared advice for other first-timers: "Basically, everyone, you just have to do it. Call up your police station or parks department and ask how you can obtain a permit, and then just start advertising. The word will spread. I am only one person, but with a little hard work this protest has become the efforts of a lot of people."

Why bother? It's for posterity's sake. For the historical record. And hopefully it will spur others to move from the phones and computers to the streets. For Carender, it's just the beginning. She gathered all the attendees' e-mail addresses and will keep up the pressure.

"We need to show that we exist. Second, we need to show support for the Republicans and Democrats that voted against the porkulus. If they think, for one second, that they made a bad choice, we have no chance to fight. Third, it sends a message to Obama and Pelosi that we are awake and we know what's happening and we are not going to take it lying down. It is a message saying, 'Expect more opposition because we're out here.'"

The anti-pork activists turned out in Denver, too. On Tuesday, while Obama cocooned himself at the city's Museum of Nature and Science for the stimulus signing, a crowd of nearly 300 gathered on the Capitol steps on their lunch hour to flame-broil the spending bill and feast on roasted pig (also donated by yours truly). Jim Pfaff of Colorado's fiscal conservative citizens group Americans for Prosperity condemned the "Ponzi scheme, Madoff style" stimulus and led the crowd in chants of "No more pork!" Free-market think-tank head Jon Caldara of the Independence Institute brought oversized checks representing the $30,000 stimulus debt load for American families.

On Wednesday in Mesa, local conservative talk station KFYI spearheaded a third large protest to welcome Obama as he unveiled a $100 billion to $200 billion program to bail out banks and beleaguered borrowers having trouble paying their mortgages. The entitlement theme played well last week in Florida, where Obama played Santa Claus to enraptured supporters shamelessly seeking government presents. But nearly 500 protesters in Mesa came to reject the savior-based economy with signs mocking gimme-mania.

Their posters jeered: "Give me Pelosi's Plane"; "Annual Passes to Disneyland"; "Fund Bikini Wax Now"; "Stimulate the Economy: Give Me a Tummy Tuck"; "Free Beer for My Horses."

And my favorite: "Give me liberty or at least a big-screen TV."

Plans are underway for anti-stimulus-palooza protests in Overland Park, Kan., Nashville and New York -- home of smug Democratic Sen. Chuck Schumer. Schumer's derisive comment on the Senate floor about the "chattering classes" who oppose reckless spending has not been forgotten or forgiven. The insult spurred central Kentucky talk show host Leland Conway to organize a pork rind drive. Angry taxpayers bombarded the senator's office with 1,500 bags of cracklins.

Disgraced Democratic Sen. John Edwards was right about one thing: There are two Americas. One America is full of moochers, big and small, corporate and individual, trampling over themselves with their hands out demanding endless bailouts. The other America is full of disgusted, hardworking citizens getting sick of being played for chumps and punished for practicing personal responsibility.

Now is the time for all good taxpayers to turn the tables on free-lunching countrymen and their enablers in Washington. Community organizing helped propel Barack Obama to the White House. It can work for fiscal conservatism, too.


          

Episode 299 - Azure Redhat OpenShift   


Harold Wong, a Principal Software Engineer in the Commercial Software Engineering team, gives us the scoop on the popular Azure Red Hat OpenShift service (ARO), which gives customers a fully managed OpenShift cluster in Azure. He gives us use cases for this service as well as tips and advice on moving to ARO.

Harold Wong



          

ePrint Report: Threat Models and Security of Phase-Change Memory    


ePrint Report: Threat Models and Security of Phase-Change Memory
Gang Wang

Emerging non-volatile memories (NVMs) have been considered promising alternatives to DRAM for future main memory design. Among the NVMs, Phase-Change Memory (PCM) can serve as a good substitute due to its low standby power, high density, and good scalability. However, PCM also introduces security design challenges, mainly due to its inherent non-volatility. Designing the memory system requires considering these challenges, which may otherwise open a backdoor for attackers. A threat model can help to identify security vulnerabilities in design processes; it is all about finding the security problems, and therefore it should be done early in design and manufacturing adoption. To our knowledge, this paper is the first attempt to thoroughly discuss potential threat models for PCM, which can provide a good reference for designing the new generation of PCM. Meanwhile, this paper gives security advice and potential security solutions for designing a secure PCM that protects against these potential threats.

          

WAYV   


Join the best community platform for people wanting to travel the world, share stories, and meet like minded people.

Share and discover travel advice and start planning your trip!

Recent changes:
We are hard at work to bring you plenty of requested features!
- Improved feedback data so we can bring features and fixes to you even faster!
- Fixing some bugs to improve user account settings
- Design improvements
- Secret improvements to make everything better!

          

CppCon 2019 Trip Report and Slides   


Having been back from CppCon 2019 for over a week, I thought it was about time I wrote up my trip report.

The venue

This year, CppCon was at a new venue: the Gaylord Rockies Resort near Denver, Colorado, USA. This is a huge conference centre, currently surrounded by vast tracts of empty space, though people told me there were many plans for developing the surrounding area.

The venue was hosting multiple conferences and events alongside CppCon; it was quite amusing to emerge from the conference rooms and find oneself surrounded by people in ballgowns and fancy evening wear for an event in the nearby ballroom!

There was a choice of eating establishments, but they all had one thing in common: they were overpriced, taking advantage of the captive nature of the hotel clientele. The food was reasonably nice though.

The size of the venue did make for a fair amount of walking around between sessions.

Overall the venue was nice, and the staff were friendly and helpful.

Pre-conference Workshop

I ran a 2-day pre-conference class, entitled More Concurrent Thinking in C++: Beyond the Basics, which was for those looking to move beyond the basics of threads and locks to the next level: high level library and application design, as well as lock-free programming with atomics. This was well attended, and I had interesting discussions with people over lunch and in the evening.

If you would like to book this course for your company, please see my training page.

The main conference

Bjarne Stroustrup kicked off the main conference with his presentation on "C++20: C++ at 40". Bjarne again reiterated his vision for C++, and outlined some of the many nice language and library features we have to make development easier, and code clearer and less error-prone.

Matt Godbolt's presentation on "Compiler Explorer: Behind the Scenes" was good and entertaining. Matt showed how he'd evolved Compiler Explorer from a simple script to the current website, and demonstrated some nifty things about it along the way, including features you might not have known about such as the LLVM instruction cost view, or the new "run your code" facility.

In "If You Can't Open It, You Don't Own It", Matt Butler talked about security and trust, and how bad things can happen if something you trust is compromised. Mostly this was obvious if you thought about it, but not something we necessarily do think about, so it was nice to be reminded, especially with the concrete examples. His advice on what we can do to build more secure systems, and existing and proposed C++ features that help was also good.

Barbara Geller and Ansel Sermersheim made an enthusiastic duo presenting "High performance graphics and text rendering on the GPU for any C++ application". I am excited about the potential for their Copperspice wrapper for the Vulkan rendering library: rendering 3D graphics portably is hard, and text more so.

Andrew Sutton's presentation on "Reflections: Compile-time Introspection of Source Code" was an interesting end to Monday. There is a lot of scope for eliminating boilerplate if we can use reflection, so it is good to see the progress being made on it.

Tuesday morning began with a scary question posed by Michael Wong, Paul McKenney and Maged Michael: "Will Your Code Survive the Attack of the Zombie Pointers?" Currently, if you delete an object or call free then all copies of those pointers immediately become invalid across all threads. Since invalid pointers can't even be compared, this can result in zombies eating your brains. Michael, Paul and Maged looked at what we can do in our code to avoid this, and what they are proposing for the C++ Standard to fix the problem.

Andrei Alexandrescu's presentation on "Speed is found in the minds of people" was an insightful look at optimizing sort. Andrei showed how compiler and processor features mean that performance can be counter-intuitive, and code with a higher algorithmic complexity can run faster in the right conditions. Always use infinite loops (except for most cases).

I love the interactive slides in Hana Dusikova's presentation "A State of Compile Time Regular Expressions". She is pushing the boundaries of compile-time coding to make our code perform better at runtime. std::regex can be slow compared to other regular expression libraries, but ctre can be much better. I am excited to see how this can be extended to compile-time parsing of other DSLs.

In "Applied WebAssembly: Compiling and Running C++ in Your Web Browser", Ben Smith showed the use of WebAssembly as a target to allow you to write high-performance C++ code that will run in a suitable web browser on any platform, much like the "Write once, run anywhere" promise of Java. I am interested to see where this can lead.

Samy Al Bahra and Paul Khuong presented the final session I attended: "Abusing Your Memory Model for Fun and Profit". They discussed how they have written code that relies on the stronger memory ordering requirements imposed by X86 CPUs over and above the standard C++ memory model in order to write high-performance concurrent data structures. I am intrigued to see if any of their techniques can be used in a portable fashion, or used to improve Just::Thread Pro.

Whiteboard code

This year there were a few whiteboards around the conference area for people to use for impromptu discussions. One of them had a challenge written on it:

"Can you write a requires expression that ensures a class has a member function with a specified signature?"

This led to a lot of discussion, which Arthur O'Dwyer wrote up as a blog post. Though the premise of the question is wrong (we shouldn't want to constrain on such specifics), it was fun, interesting and enlightening trying to think how one might do it — it allows you to explore the corner cases of the language in ways that might turn out to be useful later.

My presentation

As well as the workshop, I presented a talk on "Concurrency in C++20 and beyond", which was on Tuesday afternoon. It was in an intermediate-sized room, and I believe was well attended, though it was hard to see the audience with the bright stage lighting. There were a number of interesting questions from the audience addressing the issues raised in my presentation, which is always good, though the acoustics did make it hard to hear some of them.

Slides are available here.

~trip_report()

So that was an overview of another awesome CppCon. I love the in-person interactions with so many people involved in using C++ for such a wide variety of things. Everyone has their own perspective, and I always learn something.

The videos are being uploaded incrementally to the CppCon YouTube channel, so hopefully the video of my presentation and the ones above that aren't already available will be uploaded soon.

Posted by Anthony Williams

