Published on October 7, 2019 5:10 PM UTC
Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.
Audio version here (may not be up yet).
Towards an empirical investigation of inner alignment (Evan Hubinger) (summarized by Rohin): Last week, we saw that the worrying thing about mesa optimizers (AN #58) was that they could have robust capabilities, but not robust alignment (AN #66). This leads to an inner alignment failure: the agent will take competent, highly-optimized actions in pursuit of a goal that you didn't want.
This post proposes that we empirically investigate what kinds of mesa objective functions are likely to be learned, by trying to construct mesa optimizers. To do this, we need two ingredients: first, an environment in which there are many distinct proxies that lead to good behavior on the training environment, and second, an architecture that will actually learn a model that is itself performing search, so that it has robust capabilities. Then, the experiment is simple: train the model using deep RL, and investigate its behavior off distribution to distinguish between the various possible proxy reward functions it could have learned. (The next summary has an example.)
Some desirable properties:
- The proxies should not be identical on the training distribution.
- There shouldn't be too many reasonable proxies, since then it would be hard to identify which proxy was learned by the neural net.
- Proxies should differ on "interesting" properties, such as how hard the proxy is to compute from the model's observations, so that we can figure out how a particular property influences whether the proxy will be learned by the model.
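As a toy illustration of this recipe (the proxies and rollouts here are hypothetical, not from the post), one can score off-distribution rollouts under each candidate proxy objective and see which one the policy's behavior tracks:

```python
# Hypothetical sketch: distinguish which proxy a trained policy learned
# by scoring its off-distribution rollouts under each candidate proxy.
# A real experiment would roll out a deep RL agent; here the rollouts
# are hand-written event traces.

def proxy_coins(trajectory):
    """Candidate proxy 1: number of coins collected."""
    return sum(1 for event in trajectory if event == "coin")

def proxy_distance(trajectory):
    """Candidate proxy 2: distance traveled."""
    return sum(1 for event in trajectory if event == "move")

def classify_learned_proxy(rollouts):
    """On the training distribution the proxies agree, so only
    off-distribution rollouts can distinguish them."""
    coins = sum(proxy_coins(t) for t in rollouts)
    moves = sum(proxy_distance(t) for t in rollouts)
    if coins > moves:
        return "coins"
    if moves > coins:
        return "distance"
    return "indistinguishable"

# Off-distribution rollouts from a hypothetical coin-seeking policy:
rollouts = [["coin", "coin", "move"], ["coin", "coin", "coin"]]
print(classify_learned_proxy(rollouts))  # coins
```

This is also why the first desirable property above matters: if the proxies were identical on every distribution we could test, no experiment of this form could tell them apart.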
Rohin's opinion: I'm very excited by this general line of research: in fact, I developed my own proposal along the same lines. As a result, I have a lot of opinions, many of which I wrote up in this comment, but I'll give a summary here.
I agree pretty strongly with the high level details (focusing on robust capabilities without robust alignment, identifying multiple proxies as the key issue, and focusing on environment design and architecture choice as the hard problems). I do differ in the details though. I'm more interested in producing a compelling example of mesa optimization, and so I care about having a sufficiently complex environment, like Minecraft. I also don't expect there to be a "part" of the neural net that is actually computing the mesa objective; I simply expect that the heuristics learned by the neural net will be consistent with optimization of some proxy reward function. As a result, I'm less excited about studying properties like "how hard is the mesa objective to compute".
A simple environment for showing mesa misalignment (Matthew Barnett) (summarized by Rohin): This post proposes a concrete environment in which we can run the experiments suggested in the previous post. The environment is a maze which contains keys and chests. The true objective is to open chests, but opening a chest requires you to already have a key (and uses up the key). During training, there will be far fewer keys than chests, and so we would expect the learned model to develop an "urge" to pick up keys. If we then test it in mazes with lots of keys, it would go around competently picking up keys while potentially ignoring chests, which would count as a failure of inner alignment. This predicted behavior is similar to how humans developed an "urge" for food because food was scarce in the ancestral environment, even though now food is abundant.
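A minimal executable sketch of this environment (the maze is abstracted away, and the names and dynamics are illustrative rather than the post's exact setup) makes the predicted failure concrete:

```python
class KeysAndChests:
    """Simplified keys-and-chests environment (maze omitted).

    The agent repeatedly chooses to grab a key or open a chest;
    opening a chest requires and consumes a key. True reward is
    given only for opened chests.
    """

    def __init__(self, n_keys, n_chests):
        self.keys_left = n_keys
        self.chests_left = n_chests
        self.keys_held = 0

    def step(self, action):
        """action: 'grab_key' or 'open_chest'. Returns the true reward."""
        if action == "grab_key" and self.keys_left > 0:
            self.keys_left -= 1
            self.keys_held += 1
            return 0  # picking up a key gives no true reward
        if action == "open_chest" and self.chests_left > 0 and self.keys_held > 0:
            self.keys_held -= 1
            self.chests_left -= 1
            return 1  # true objective: opened chests
        return 0

def key_greedy_policy(env):
    """A misaligned policy with an 'urge' for keys: grab while any remain."""
    return "grab_key" if env.keys_left > 0 else "open_chest"

# Training regime: keys scarce (2 keys, 10 chests) -> key-greedy is near-optimal.
# Test regime: keys abundant (10 keys, 2 chests) -> key-greedy hoards keys.
test_env = KeysAndChests(n_keys=10, n_chests=2)
total = sum(test_env.step(key_greedy_policy(test_env)) for _ in range(12))
print(total, test_env.keys_held)  # opens only 2 chests, still holding 8 keys
```

In the scarce-key training regime this policy earns nearly all available reward, which is exactly why the "urge" could plausibly be learned; only the abundant-key test regime exposes it.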
Rohin's opinion: While I would prefer a more complex environment to make a more compelling case that this will be a problem in realistic environments, I do think that this would be a great environment to start testing in. In general, I like the pattern of "the true objective is Y, but during training you need to do X to get Y": it seems particularly likely that even current systems would learn to competently pursue X in such a situation.
Technical AI alignment
Machine Learning Projects on IDA (Owain Evans et al) (summarized by Nicholas): This document describes three suggested projects building on Iterated Distillation and Amplification (IDA), a method for training ML systems while preserving alignment. The first project is to apply IDA to solving mathematical problems. The second is to apply IDA to neural program interpretation, the problem of replicating the internal behavior of other programs as well as their outputs. The third is to experiment with adaptive computation where computational power is directed to where it is most useful. For each project, they also include motivation, directions, and related work.
Nicholas's opinion: Figuring out an interesting and useful project to work on is one of the major challenges of any research project, and it may require a distinct skill set from the project's implementation. As a result, I appreciate the authors enabling other researchers to jump straight into solving the problems. Given how detailed the motivation, instructions, and related work are, this document strikes me as an excellent way for someone to begin her first research project on IDA or AI safety more broadly. Additionally, while there are many public explanations of IDA, I found this to be one of the clearest and most complete descriptions I have read.
Read more: Alignment Forum summary post
List of resolved confusions about IDA (Wei Dai) (summarized by Rohin): This is a useful post clarifying some of the terms around IDA. I'm not summarizing it because each point is already quite short.
Concrete experiments in inner alignment (Evan Hubinger) (summarized by Matthew): While the highlighted posts above go into detail about one particular experiment that could clarify the inner alignment problem, this post briefly lays out several experiments that could be useful. One example experiment is giving an RL trained agent direct access to its reward as part of its observation. During testing, we could try putting the model in a confusing situation by altering its observed reward so that it doesn't match the real one. The hope is that we could gain insight into when RL trained agents internally represent 'goals' and how they relate to the environment, if they do at all. You'll have to read the post to see all the experiments.
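A sketch of how that particular experiment might be wired up (the wrapper and environment interface here are hypothetical, not from the post): wrap the environment so the reward appears in the observation, and at test time spoof the observed reward to decouple it from the true one.

```python
class ObservedRewardWrapper:
    """Sketch: expose the (possibly spoofed) reward as part of the observation.

    During training, spoof is None, so observed reward == true reward.
    At test time a spoof function decouples them, probing whether the
    agent's internal 'goal' tracks the observed or the true reward.
    """

    def __init__(self, env, spoof=None):
        self.env = env
        self.spoof = spoof

    def step(self, action):
        obs, reward, done = self.env.step(action)
        shown = reward if self.spoof is None else self.spoof(reward)
        # The agent sees `shown`; the experimenter keeps the true `reward`.
        return (obs, shown), reward, done

class ConstantEnv:
    """Trivial stand-in environment that always pays reward 1.0."""
    def step(self, action):
        return 0, 1.0, False  # obs, true reward, done

spoofed = ObservedRewardWrapper(ConstantEnv(), spoof=lambda r: -r)
(obs, shown), true_r, _ = spoofed.step(0)
print(shown, true_r)  # -1.0 1.0
```

An agent that chases `shown` rather than `true_r` in this setting would be evidence that it internally represents the observed reward as its goal.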
Matthew's opinion: I'm currently convinced that doing empirical work right now will help us understand mesa optimization, and this was one of the posts that led me to that conclusion. I'm still a bit skeptical that current techniques are sufficient to demonstrate the type of powerful learned search algorithms which could characterize the worst outcomes for failures in inner alignment. Regardless, I think at this point classifying failure modes is quite beneficial, and conducting tests like the ones in this post will make that a lot easier.
Learning human intent
Fine-Tuning GPT-2 from Human Preferences (Daniel M. Ziegler et al) (summarized by Sudhanshu): This blog post and its associated paper describe the results of several text generation/continuation experiments, where human feedback on earlier samples was used as a reinforcement learning reward signal to fine-tune the base 774-million-parameter GPT-2 language model (AN #46). The key motivation here was to understand whether interactions with humans can help algorithms better learn and adapt to human preferences in natural language generation tasks.
They report mixed results. For the tasks of continuing text with positive sentiment or with physically descriptive language, they report improved performance over the baseline (as assessed by external examiners) after fine-tuning on only 5,000 human judgments of samples generated from the base model. The summarization task required 60,000 samples of online human feedback to perform similarly to lead-3, a simple baseline that returns the first three sentences of the document as the summary (again as assessed by humans).
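For reference, the lead-3 baseline mentioned above is trivial to implement; a rough sketch (using naive punctuation-based sentence splitting rather than a proper sentence tokenizer):

```python
import re

def lead_3(document):
    """The lead-3 summarization baseline: return the first three sentences.

    Sentences are split naively on whitespace following ., !, or ?;
    a real implementation would use a proper sentence tokenizer.
    """
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", document.strip())
                 if s.strip()]
    return " ".join(sentences[:3])

text = "First point. Second point. Third point. Fourth point."
print(lead_3(text))  # First point. Second point. Third point.
```

That a baseline this simple took 60,000 human judgments to match is what makes the summarization result "mixed".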
Some of the lessons learned while performing this research include 1) the need for better, less ambiguous tasks and labelling protocols for sourcing higher quality annotations, and 2) a reminder that "bugs can optimize for bad behaviour", as a sign error propagated through the training process to generate "not gibberish but maximally bad output". The work concludes on the note that it is a step towards scalable AI alignment methods such as debate and amplification.
Sudhanshu's opinion: It is good to see research on mainstream NLProc/ML tasks that includes discussions on challenges, failure modes and relevance to the broader motivating goals of AI research.
The work opens up interesting avenues within OpenAI's alignment agenda, for example learning a diversity of preferences (A OR B), or a hierarchy of preferences (A AND B) sequentially without catastrophic forgetting.
In order to scale, we would want to generate automated labelers through semi-supervised reinforcement learning, to derive the most gains from every piece of human input. The robustness of this needs further empirical and conceptual investigation before we can be confident that such a system can work to form a hierarchy of learners, e.g. in amplification.
Rohin's opinion: One thing I particularly like here is that the evaluation is done by humans. This seems significantly more robust as an evaluation metric than any automated system we could come up with, and I hope that more people use human evaluation in the future.
Preventing bad behavior
Robust Change Captioning (Dong Huk Park et al) (summarized by Dan H): Safe exploration requires that agents avoid disrupting their environment. Previous work, such as Krakovna et al. (AN #10), penalizes an agent's needless side effects on the environment. For such techniques to work in the real world, agents must also estimate environment disruptions, side effects, and changes while not being distracted by peripheral, irrelevant changes. This paper proposes a dataset to further the study of "Change Captioning," in which a machine learning system describes scene changes in natural language: given before and after images, the system describes the salient change in the scene. Work on systems that can estimate such changes will likely advance safe exploration.
Learning Representations by Humans, for Humans (Sophie Hilgard, Nir Rosenfeld et al) (summarized by Asya): Historically, interpretability approaches have involved machines acting as experts, making decisions and generating explanations for their decisions. This paper takes a slightly different approach, instead using machines as advisers who are trying to give the best possible advice to humans, the final decision makers. Models are given input data and trained to generate visual representations based on the data that cause humans to take the best possible actions. In the main experiment in this paper, humans are tasked with deciding whether to approve or deny loans based on details of a loan application. Advising networks generate realistic-looking faces whose expressions represent multivariate information that's important for the loan decision. Humans do better when provided the facial expression 'advice', and furthermore can justify their decisions with analogical reasoning based on the faces, e.g. "x will likely be repaid because x is similar to x', and x' was repaid".
Asya's opinion: This seems to me like a very plausible story for how AI systems get incorporated into human decision-making in the near-term future. I do worry that further down the line, AI systems where AIs are merely advising will get outcompeted by AI systems doing the entire decision-making process. From an interpretability perspective, it also seems to me like having 'advice' that represents complicated multivariate data still hides a lot of reasoning that could be important if we were worried about misaligned AI. I like that the paper emphasizes having humans-in-the-loop during training and presents an effective mechanism for doing gradient descent with human choices.
Rohin's opinion: One interesting thing about this paper is its similarity to Deep RL from Human Preferences: it also trains a human model, that is improved over time by collecting more data from real humans. The difference is that DRLHP produces a model of the human reward function, whereas the model in this paper predicts human actions.
Other progress in AI
The Principle of Unchanged Optimality in Reinforcement Learning Generalization (Alex Irpan and Xingyou Song) (summarized by Flo): In image recognition tasks, there is usually only one label per image, such that there exists an optimal solution that maps every image to the correct label. Good generalization of a model can therefore straightforwardly be defined as a good approximation of the image-to-label mapping for previously unseen data.
In reinforcement learning, our models usually don't map environments to the optimal policy, but states in a given environment to the corresponding optimal action. The optimal action in a state can depend on the environment. This means that there is a tradeoff regarding the performance of a model in different environments.
The authors suggest the principle of unchanged optimality: in a benchmark for generalization in reinforcement learning, there should be at least one policy that is optimal for all environments in the train and test sets. With this in place, generalization does not conflict with good performance in individual environments. If the principle does not initially hold for a given set of environments, we can change that by giving the agent more information. For example, the agent could receive a parameter that indicates which environment it is currently interacting with.
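A minimal sketch of that fix (names are illustrative): append a one-hot environment indicator to the observation, so that a single policy can condition on which environment it is in and unchanged optimality is restored.

```python
def augment_with_env_id(observation, env_id, n_envs):
    """Sketch: restore unchanged optimality by disambiguating the environment.

    If the optimal action in a state depends on which environment the
    agent is in, appending a one-hot environment indicator to the
    observation lets a single policy be optimal across all environments,
    because it can condition its action on the indicator.
    """
    one_hot = [0.0] * n_envs
    one_hot[env_id] = 1.0
    return list(observation) + one_hot

obs = augment_with_env_id([0.5, -0.2], env_id=1, n_envs=3)
print(obs)  # [0.5, -0.2, 0.0, 1.0, 0.0]
```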
Flo's opinion: I am a bit torn here: On one hand, the principle makes it plausible for us to find the globally optimal solution by solving our task on a finite set of training environments. This way the generalization problem feels more well-defined and amenable to theoretical analysis, which seems useful for advancing our understanding of reinforcement learning.
On the other hand, I don't expect the principle to hold for most real-world problems. For example, in interactions with other adapting agents, performance will depend on those agents' policies, which can be hard to infer and which change dynamically. This means that the principle of unchanged optimality won't hold without precise information about the other agents' policies, and that information can be very difficult to obtain.
More generally, given this paper and some of the criticism of the AI safety gridworlds as an ill-defined benchmark, I am a bit worried that too much focus on very "clean" benchmarks might divert attention from issues associated with the messiness of the real world. I would have liked the paper to draw a more conditional conclusion, instead of stating a general principle.
Having been back from CppCon 2019 for over a week, I thought it was about time I wrote up my trip report.
This year, CppCon was at a new venue: the Gaylord Rockies Resort near Denver, Colorado, USA. This is a huge conference centre, currently surrounded by vast tracts of empty space, though people told me there were many plans for developing the surrounding area.
The venue was hosting multiple conferences and events alongside CppCon; it was quite amusing to emerge from the conference rooms and find oneself surrounded by people in ballgowns and fancy evening wear for an event in the nearby ballroom!
There was a choice of eating establishments, but they all had one thing in common: they were overpriced, taking advantage of the captive hotel clientele. The food was reasonably nice though.
The size of the venue did make for a fair amount of walking around between sessions.
Overall the venue was nice, and the staff were friendly and helpful.
I ran a 2-day pre-conference class, entitled More Concurrent Thinking in C++: Beyond the Basics, which was for those looking to move beyond the basics of threads and locks to the next level: high level library and application design, as well as lock-free programming with atomics. This was well attended, and I had interesting discussions with people over lunch and in the evening.
If you would like to book this course for your company, please see my training page.
The main conference
Bjarne Stroustrup kicked off the main conference with his presentation on "C++20: C++ at 40". Bjarne reiterated his vision for C++, and outlined some of the many nice language and library features we have to make development easier, and code clearer and less error-prone.
Matt Godbolt's presentation on "Compiler Explorer: Behind the Scenes" was good and entertaining. Matt showed how he'd evolved Compiler Explorer from a simple script to the current website, and demonstrated some nifty things about it along the way, including features you might not have known about such as the LLVM instruction cost view, or the new "run your code" facility.
In "If You Can't Open It, You Don't Own It", Matt Butler talked about security and trust, and how bad things can happen if something you trust is compromised. Mostly this was obvious if you thought about it, but not something we necessarily do think about, so it was nice to be reminded, especially with the concrete examples. His advice on what we can do to build more secure systems, and existing and proposed C++ features that help was also good.
Barbara Geller and Ansel Sermersheim made an enthusiastic duo presenting "High performance graphics and text rendering on the GPU for any C++ application". I am excited about the potential for their Copperspice wrapper for the Vulkan rendering library: rendering 3D graphics portably is hard, and text more so.
Andrew Sutton's presentation on "Reflections: Compile-time Introspection of Source Code" was an interesting end to Monday. There is a lot of scope for eliminating boilerplate if we can use reflection, so it is good to see the progress being made on it.
Tuesday morning began with a scary question posed by Michael Wong, Paul McKenney and Maged Michael: "Will Your Code Survive the Attack of the Zombie Pointers?" Currently, if you
Andrei Alexandrescu's presentation "Speed is found in the minds of people" was an insightful look at optimizing
I love the interactive slides in Hana Dusikova's "A State of Compile Time Regular Expressions". She is pushing the boundaries of compile-time coding to make our code perform better
In "Applied WebAssembly: Compiling and Running C++ in Your Web Browser", Ben Smith showed the use of WebAssembly as a target to allow you to write high-performance C++ code that will run in a suitable web browser on any platform, much like the "Write once, run anywhere" promise of Java. I am interested to see where this can lead.
Samy Al Bahra and Paul Khuong presented the final session I attended: "Abusing Your Memory Model for Fun and Profit". They discussed how they have written code that relies on the stronger memory ordering requirements imposed by X86 CPUs over and above the standard C++ memory model in order to write high-performance concurrent data structures. I am intrigued to see if any of their techniques can be used in a portable fashion, or used to improve Just::Thread Pro.
This year there were a few whiteboards around the conference area for people to use for impromptu discussions. One of them had a challenge written on it:
"Can you write a
This led to a lot of discussion, which Arthur O'Dwyer wrote up as a blog post. Though the premise of the question is wrong (we shouldn't want to constrain on such specifics), it was fun, interesting and enlightening trying to think how one might do it — it allows you to explore the corner cases of the language in ways that might turn out to be useful later.
As well as the workshop, I presented a talk on "Concurrency in C++20 and beyond", which was on Tuesday afternoon. It was in an intermediate-sized room, and I believe was well attended, though it was hard to see the audience with the bright stage lighting. There were a number of interesting questions from the audience addressing the issues raised in my presentation, which is always good, though the acoustics did make it hard to hear some of them.
Slides are available here.